text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Dynamic Bandwidth Slicing in Passive Optical Networks to Empower Federated Learning
Federated Learning (FL) is a decentralized machine learning method in which individual devices compute local models based on their data. In FL, devices periodically share newly trained updates with the central server, rather than submitting their raw data. The key characteristics of FL, including on-device training and aggregation, make it interesting for many communication domains. Moreover, the potential of new systems facilitating FL in sixth generation (6G) enabled Passive Optical Networks (PON), presents a promising opportunity for integration within this domain. This article focuses on the interaction between FL and PON, exploring approaches for effective bandwidth management, particularly in addressing the complexity introduced by FL traffic. In the PON standard, advanced bandwidth management is proposed by allocating multiple upstream grants utilizing the Dynamic Bandwidth Allocation (DBA) algorithm to be allocated for an Optical Network Unit (ONU). However, there is a lack of research on studying the utilization of multiple grant allocation. In this paper, we address this limitation by introducing a novel DBA approach that efficiently allocates PON bandwidth for FL traffic generation and demonstrates how multiple grants can benefit from the enhanced capacity of implementing PON in carrying out FL flows. Simulations conducted in this study show that the proposed solution outperforms state-of-the-art solutions in several network performance metrics, particularly in reducing upstream delay. This improvement holds great promise for enabling real-time data-intensive services that will be key components of 6G environments. Furthermore, our discussion outlines the potential for the integration of FL and PON as an operational reality capable of supporting 6G networking.
Introduction
Human life and industry in recent decades has experienced great advantages due to the development of communication systems and services.There exist several examples of these real-time technologies and services introduced in recent years, such as Smart Home Systems, Virtual Reality (VR), and Multiplayer Online Games.Hence, the communication networks need extra improvements to adopt these technologies.The fifth generation (5G) communication network deployed extensively; meanwhile, research and industry are researching the 6G communication network as a mega-network that is heterogeneous.Machine Learning (ML), service interaction, and performance will be increased in the case of 6G.Consequently, the integration of Artificial Intelligence (AI) and ML into 6G infrastructure can be an enormous value in communication networks [1].As the networks mature to become more flexible, intelligent, and autonomous, they are also ready for advanced AI capabilities [2].In this more connected future, the intertwining of AI with 6G technology is revolutionizing the vision of communication networks as well as services.It is predicted that such convergence would render revolutionary changes in interaction with technology (e.g., the comprehensive establishment of network intelligence, flexible distribution resources, and independent decision-making) [3].Federated Learning (FL) is a distributed ML technique that enforces data privacy and enables training models to be learned collaboratively across various decentralized devices such as smartphones, edge devices, and IoT.Training in FL is performed on devices locally and independently using data that are stored therein.However, updates on a global model are aggregated via FL an aggregator or server which constructs the updated version of it back to devices for enhancing their local models.The global model continues to be updated via filtered local updates without any compromise in data privacy.
Optical Access Network will be evolved as part of communication network evolution to fulfill the required Quality of Service (QoS) with their necessary bandwidth demand for customers and industries [4].Passive Optical Networks (PON) (e.g., Ethernet-PON (E-PON), Gigabit-PON (G-PON)) are connected through point-to-multi-point topology where an Optical Line Terminal (OLT) plays as a master device located at the central office and connected with many Optical Network Unit (ONU) through a passive optical splitter.In 6G, the network environment is heterogeneous, and it should uphold the rapid development of the number of connections and hungry bandwidth-demanding applications, which can increase the delay and bypass the delay requirement for delay-sensitive applications.PONs can play a significant role in 6G networks to overcome the rapid demand for highspeed and scalable communication networks.PON can support fronthaul and backhaul by providing cost-effective solutions for 5G and beyond [5].The main purpose of the Dynamic Bandwidth Allocation (DBA) algorithm in PONs is to provide that each ONU grants its demanded bandwidth as much as possible more efficiently with a fair bandwidth distribution between connected ONUs.The interaction between the OLT and ONU is achieved by the Report control message from the ONU to report its demand, and a Gate control message from the OLT to grant an upstream bandwidth to ONUs.Therefore, the DBA should effectively manage the allocation of available bandwidth resources in real time according to ONUs' demands while maintaining fairness and QoS.However, while a new type of traffic emerges (e.g., FL traffic), PON's resource allocation and delay require further development to support 6G.In this direction, only a few research papers exist related to PON's bandwidth utilization for FL that help illustrate DBA's advantages, which describe how bandwidth is allocated in a PON network.For instance, the authors of [6] proposed PON's bandwidth slicing to support FL.Based on FL training parameters, the proposed mechanism allocates the bandwidth to the ONUs in ascending order.However, for multiple FL tasks, more bandwidth slices are required.The authors of [7] introduced a Wavelength and Bandwidth Allocation (DWBA) algorithm for FL application support over EPON.The solution prioritizes FL traffic statically.The question raised from that research is how the DBA differentiates between FL and normal traffic demands and how the ONU manages that traffic (e.g., queuing and reporting).
In the PON standard, the design of the Gate control message is of significant importance.As outlined in the XGS-PON standard [8], the actual number of physical or logical queues implemented at the ONU to manage these data can vary and depend on the ONU's internal architecture, the network's quality of service requirements, and the entity receiving upstream bandwidth allocation, denoted as an allocation ID (Alloc-ID).However, from the OLT's perspective for bandwidth allocation and management, each Alloc-ID represents one logical queue or buffer.This approach allows the OLT to treat all Alloc-IDs as independent entities at the same level of hierarchy for the purposes of upstream bandwidth assignment, providing a fair and organized distribution of bandwidth among all connected ONUs.Because the ONU is assumed to have multiple upstream queues, this can allow the DBA to provide different upstream bandwidths (grants) for an ONU.However, there have been limited research articles that explain how a DBA can utilize more grants for an ONU during a DBA cycle.For instance, the authors of [6] addressed this topic, but they did not comprehensively show this feature.Our article aims to address this gap.Thus, by efficiently using more than one grant during a DBA cycle, we can improve upstream bandwidth in PON for FL traffic.
The motivation for this research comes from the FL integration within 6G-enabled PON.FL allows collaborative training models without transferring raw data, which can provide privacy in 6G networks.The unique characteristics of PON, such as DBA, present a promising platform for implementing FL efficiently.This integration between FL and PON is expected to address the rapid bandwidth demands and improve delay performance, required in supporting real-time and bandwidth intensive applications in 6G.The contributions of this article are summarized as follows: • We introduce the importance of PON in 6G.We also summarize existing work on FL under PON and recent DBA algorithms to support FL.
•
Utilizing this analysis as a foundation, the incoming FL model update traffic is modeled to be aggregated at the ONU, and then the results are transferred to a specified upstream queue of the ONU.
•
We introduce a novel PON DBA considering different queue management at the ONUs and send a Gate control message from the OLT with multi-grants for those queues.
•
We highlight different open challenges for future research and wrap up the article.
The remainder of the paper is organized as follows.In Section 2, we present related work regarding DBA implementation and FL approaches in PON.Section 3 introduces a case study of our proposed DBA-based FL aggregation over PON and its simulation results.Section 4 presents open challenges of FL under PON, and we conclude our paper in Section 5.
Overview of PON-Empowered Federated Learning
A few existing solutions have been implemented to deal with FL for optimal bandwidth utilization in PON.This article classifies the research methods into DBA optimization and FL aggregation.
Overall Process of PON-Empowered FL
Figure 1 shows glimpses of the role of PON in 6G.The figure illustrates a 6G integrated FL hierarchy across a PON.User devices (e.g., smartphones to IoT gadgets) train basic FL models and send updates to the ONUs at 6G base stations.The ONUs then aggregate local model updates and send them to the OLT, which aggregates FL models to the global server.The server refines the collective model and broadcasts the updated version through the OLTs and ONUs to user devices.An extensive survey in [9] presents the background of FL in networking, experimental simulations, and case studies.It also identifies challenges and future research directions in privacy, preserving FL in intelligent networking.
FL Aggregation over PON
FL aggregation's primary objective revolves around developing methodologies that enable the seamless integration and processing of heterogeneous data sources while circumventing the need for centralized data consolidation.
A quintessential example in the work by [10] unveiled an edge-based multi-layer Hierarchical FL architecture that enables the execution of the conventional FL with model aggregation at several tiers.The suggested method is tested through several simulations, and the ultimate accuracy and loss of the models produced were contrasted with those of the conventional FL method.As a result, model aggregations may be carried out even when a set of edge nodes is not constantly linked to the cloud.Additionally, this solution enables model aggregations to communicate with the cloud less frequently, saving communication energy.
One more FL is developed based on the bandwidth-constrained client selection and scheduling presented in [11].A two-step aggregation solution is introduced in which the local model parameters are first aggregated ONUs to enable scalable FL (SFL) over PONs, and then further aggregated by the central server at the central office.By doing this, the upstream bandwidth needed to transmit model parameters remains constant despite the number of IoT devices, resulting in high learning accuracy.The aggregated local models conveyed in PON make it difficult to read the unique local model for each client; hence, the suggested SFL may also enhance privacy.
DBA Optimization
In XGS-PON [8] standards, two kinds of DBA are mentioned: Status Reporting (SR) DBA and Traffic Monitoring (TM) DBA.The SR-DBA relies on the ONU's buffer occupancy reports, whereas the TM-DBA relies on the traffic monitoring of an ONU.However, the implementation of the DBA, as outlined in [8], is subject to the discretion of individual OLT vendors.Thus, the OLT vendors can develop and utilize their own efficient DBA algorithms to meet diverse network environments and performance requirements.Thereby, vendors can introduce unique DBA and enhance network efficiency and service quality for customers.Nonetheless, this can also cause substantial variation in DBA performance and effectiveness across various OLT products due to the distinct proprietary approaches employed in managing DBA functionality.
Regarding DBA in FL, there are a few research approaches in PON enhancement to optimize bandwidth utilization.For instance, the authors of [7] proposed two techniques for DBA and wavelength allocation for PON (one based on statistical multiplexing for guaranteeing QoS for FL traffic across 50 Gbs Ethernet-PON and the second on bandwidth reservation).The aim in [7] was to let PON customers use their requested bandwidth for the FL application's scheduling without compromising the QoS guarantees for other delay-critical applications.The suggested technique uses the well-known Differentiated Service approach to address the FL applications over the Ethernet PON QoS provisioning challenge.The bandwidth guarantee required for FL applications cannot be achieved simply by mapping FL traffic into a DiffServ per-hop behavior (PHB), as FL traffic would compete with traffic from other types of clients in the same PHB.It is possible to differentiate between FL traffic and other burst traffic by creating a PHB tailored for FL.This enables the DBA algorithm to prioritize the bandwidth according to the established policy.
The bandwidth-slicing method to support FL in edge computing is proposed in [6,12].The proposed solution provides minimum communication delay for training traffic.According to the results, bandwidth slicing greatly increases training effectiveness while maintaining high learning accuracy.However, their approach failed to specify how to differentiate between FL traffic and other network traffic.The author of [13] proposed a solution of a client scheduling algorithm to balance the ONU load while securing the required bandwidth for FL traffic.However, their solution did not focus on providing advanced multiple queue management with prioritization to provide efficient bandwidth handling.
Furthermore, the authors of [14] proposed a DBA algorithm with an adaptive predictive model for low delay communications using the XGBoost [15] ensemble learning algorithm.Utilizing this technique, the authors claimed that the solution can maintain low delay communication characteristics despite environmental changes.The dynamic perception algorithm is inspired by reinforcement learning and adjusts according to environmental feedback.However, the adaptability and time-to-converge of this algorithm under various network scenarios have yet to be exhaustively analyzed.The paper does not explicitly mention the number of queues at the ONU or how the proposed algorithms interact with these queues.This can be a notable limitation.
In a TDM-PON, there are some novel DBA algorithms that are implemented to solve delay and bandwidth management.The authors of [16] proposed a learning-based solution named Online Convex Optimization (OCO) DBA.The main focus of [16] was to minimize the upstream delay by learning traffic delay over time.However, they utilized basic queue management.Immediate Allocation with Colorless Grant (IACG) DBA was introduced in [17] to decrease the delay of the fronthaul traffic.The solution efficiently allocates unallocated bandwidth to other ONUs.The Optimized Round-Robin (optimized-RR) was proposed in [18] to meet the restricted delay requirements of mobile fronthaul.The authors of [19] proposed the Efficient Bandwidth Utilization (EBU) DBA algorithm.The EBU allocates unused bandwidth to higher-demand ONUs.An extensive comparison of [16][17][18][19] is performed in Section 3.3 of this article.
Unlike existing bandwidth-based FL algorithms, our proposed model unlocks the power of PON's control message to support the network and FL traffic separately.Furthermore, two problems are described: first, the FL model update aggregation at the ONU, and second, the DBA to support the system.
A Novel DBA-Based FL Aggregation over PON
This article assumes a Time-division Multiplexing (TDM)-PON system where an OLT is connected with multiple ONUs.Suppose that an ONU has two interfaces: one interface to the Customer Premises Equipment (CPE) (i.e., through a wireless antenna as a part of 6G) and a PON interface, as presented in Figure 2.
In Figure 2, we assume a two queue model: queue Q 0 is for network traffic (e.g., burst traffic), and queue Q 1 is for FL local model update traffic.When introducing FL traffic in PON, there can be two important optimizations required in the PON's system described as follows: • FL traffic aggregation at the ONU: Assuming that the ONU can classify the incoming upstream traffic from the CPEs, the traffic optimization problem is FL traffic aggregation.In FL, only model parameter updates are transmitted.Thus, the ONU needs to aggregate these updates from various devices in its domain before sending them upstream to the OLT and eventually to the central server.This can reduce the upstream traffic significantly.• DBA optimization: As illustrated in Figure 2, assuming that the upstream queues at the ONU's PON interface are dedicated to classified traffic (Q 0 for general traffic and Q 1 for FL aggregated model update), the role of the DBA at the OLT is to secure upstream bandwidth for these queues with proper fairness.Thus, the DBA should consider managing more than one queue bandwidth request.As defined in PON's standards (e.g., [8,20]), the ONU sends a report control message to the OLT, reporting its upstream queue status.The OLT uses this information to grant an upstream bandwidth for the designated ONU.The report control message can carry information for different ONU's queues, thanks to [8] standards.The DBA deals with reported queues separately, and issues two grants (one upstream bandwidth for each queue) in a single Gate control message (can carry multiple grants).
For more descriptions of the DBA optimization, the following subsection shows how we can deal with this problem.
Double-Queue DBA Algorithm for FL
In this subsection, we introduce a DBA algorithm that can manage the upstream bandwidth for all ONUs while each ONU has two upstream queues.The DBA solution in PONs must account for the number and nature of ONU queues (unlike in [14]).This requires the model to predict bandwidth grants for each queue separately.In this approach, the bandwidth requirements of each ONU are calculated based on their queue information (as reported from the ONU using a report control message).The OLT then allocates the available bandwidth to each ONU based on these calculations, intending to maximize the overall performance and efficiency of the FL system.
Suppose that there are N ONUs in the PON sharing bandwidth BW on each DBA cycle T c ; therefore, the total allocation decision on each T c is 2N.Assume that the bandwidth allocated to queue j of ONU i is defined as b ij , where j is 0 or 1 and i is the ONU number (i ∈ N).For each queue j of ONU i , suppose that the arrival rate is α ij and each queue has a weight of w ij ; thus, the aim is to maximize the total efficiency for all ONUs based on a logarithmic function controlled by the arrival rate α ij , as presented in Equation ( 1), with the constraint in Equation (3).
Maximize
Therefore, the bandwidth allocated for ONU i must not surpass the maximum bandwidth, as in Equations ( 2) and (3).b i0 + b i1 ≤ BW max,i for i = 1,...,N. ( While logarithmic utility functions have been extensively adopted in wireless resource allocation problems due to their concave properties providing unique optima [21], they also bring inherent fairness into the allocation process, providing that resources are apportioned in a proportional-fair manner [22].This fairness criterion is particularly important in PONs, where diverse services and applications coexist. The optimization problem is a concave maximization problem with convex constraints (i.e., Equations ( 2) and (3) are linear).Solving Karush-Kuhn-Tucker (KKT) conditions can lead to finding the optimal BW allocation (b * ij ).The Lagrangian for the optimization problem is formulated in Equation ( 4) λ and µ i in Equation ( 4) are Lagrange multipliers.We note here that the Lagrangian function for our optimization problem incorporates the objective function and the constraints with associated Lagrange multipliers.The dual problem is obtained by minimizing the Lagrangian with respect to the primal variables b ij while maximizing with respect to the dual variables λ and µ i .Table 1 presents the notations in this paper.Queue for FL local model update traffic Algorithm 1 illustrates that the DBA is initialized to record the historical data (b ij (t − 1), b ij (t − 2), . . . ) and the historical arrival rate (α ij (t − 1), α ij (t − 2), . . .). Figure 3 shows an example of how the bandwidth is managed on a DBA cycle.After the DBA cycle T c0 , all report control messages sent by the ONUs are received at the OLT.The OLT calculates the slot time for each queue of connected ONUs and sends the Gate control message to grant the bandwidth on T c1 .In this figure, we allocated two upstream grants (one for Q 0 and the second for Q 1 (for FL traffic)).The ONU uses the first grant to transmit FL-related frames from Q 1 , as allocated at the beginning of the DBA cycle, and then uses the second grand to transmit the Q 0 frames and, afterward, sends the report control message reporting its current queues status at on T c1 .It has to be noted here that the narrow connection represents the frame transmission between the OLT and ONUs.
Algorithm 1 Double-queue DBA algorithm for FL.Update µ i :
Simulation Model and Setup
We evaluated our proposed method for 8, 16, 32, and 64 ONUs.The data collection from the users has been performed using a uniform distribution method.We used our outstanding OPNET-PON model presented, for example, in [23,24].The key challenge is bridging the gap between a tool that does not inherently support ML (OPNET) and a sophisticated ML-based prediction model.Therefore, we created a basic basic Python script for federated learning aggregation, and then simulate the communication process in OPNET.Thereby, to handle the communication stream and packet handling in OPNET, we utilized a ONPNET's process model to provide that the transmission and reception of data representing the model updates and aggregated models and simulated correctly.Figure 4 represents the client process model (Figure 4a), the ONU process model (Figure 4b), and the OLT process model (Figure 4c).An example of bandwidth allocation of 8 and 16 ONUs scenarios is presented in Figure 5.
We compared our proposed bandwidth allocation solution with the widely recognized SR-DBA, keeping the network conditions consistent.Both models were evaluated under the identical load of FL traffic emanating from 1000 IoT devices.The T c was set at 2 ms for both algorithms.As depicted in Figure 6, our proposed method consistently registered a delay ranging between 2 ms to 4 ms.In contrast, the SR-DBA algorithm showcased an escalating delay trend, particularly evident with a higher number of ONUs.
The reason behind this is that in our proposed solution, the upstream bandwidth stays the same regardless of how many FL IoT devices there are, given the use of the aggregation at the ONU.While the incoming traffic increases when the number of devices increases, the upstream traffic stays constant, similar to the bandwidth requirement of a single aggregated model.Our study shows the high efficiency of FL aggregation at the ONU level regardless of FL device count, as reported in [11].On the other hand, stable bandwidth utilization provides network predictability and can reduce the upstream traffic delay.By giving an upstream grant for FL aggregated traffic on a T c , the proposed solution shows that the proposed solution is faster than the existing solution.The results indicate that our proposed DBA solution outperforms the existing SR-DBA method in terms of delay reduction, which is crucial for supporting real-time applications in 6G environments.Unlike the traditional SR-DBA, which struggles to maintain low delay with increasing ONUs, our model effectively manages the bandwidth by differentiating between FL and normal traffic and using multi-grants for improved efficiency. Figure 7 shows the fairness calculation of the proposed algorithm under different PON sizes (i.e., 8, 16, 32, and 64 ONUs).We utilize the results of Jain's Fairness Index to measure how the bandwidth is fairly distributed for all ONUs.The result indicates that, when under 8 ONUs, 16 ONUs, and 32 ONUs, the proposed DBA algorithm is effectively fair in distributing the bandwidth for all the ONUs, whereas under the 64 ONUs scenario, the fairness is slightly decreased to 0.99 in the fairness index, which is not a significant reduction.This shows that, regardless of the number of ONUs in the PON, the proposed solution can maintain the bandwidth fairly for all ONUs.The logarithmic utility functions used in the proposed solution present inherent fairness into the allocation process and confirm that the bandwidth is allocated in a proportional-fair manner.This comparative analysis underscores the efficacy of our proposed model, particularly in terms of achieving lower delay and bandwidth fairness among the ONUs.
Comparison with Existing DBA Algorithms
Table 2 presents a comprehensive comparison of our proposed solution with some of the existing DBA solutions (i.e., [16][17][18][19]).The proposed DBA solution significantly reduces the upstream delay by optimizing the bandwidth allocation ONUs.This is achieved through the effective management of multiple queues at the ONU and prioritization of FL traffic.This method maximizes overall bandwidth utilization while maintaining fairness among ONUs.The proposed DBA algorithm solves a concave maximization problem with convex constraints, using a Lagrangian function to find the optimal bandwidth allocation for ONUs and ONUs' queues, making sure that each queue is managed effectively.As the number of ONUs increases, the time taken for bandwidth allocation or the data aggregation process might increase, potentially affecting the algorithm's efficiency.Therefore, providing data privacy might introduce additional computational steps in an FL environment, adding to the complexity.Furthermore, for applications that require real-time communication, algorithms should not only be accurate, but also quick.DBA, especially in real-time scenarios, might pose challenges if the allocation algorithm does not keep up with the dynamic needs of the network.Managing multiple queues at the ONU and releasing multi-gate control messages can be complex, requiring sophisticated algorithms to handle them efficiently.Moreover, providing fair bandwidth distribution and maintaining QoS standards might introduce additional constraints to the algorithm, making it more complex.On the other hand, the parameters affecting the outcomes of the DBA algorithm include the arrival rates of traffic (α ij ), the weights assigned to different queues (w ij ), and the total available bandwidth (BW).Further analysis can be employed to understand how these parameters influence the final bandwidth allocation decisions.For instance, we can analyze the impact of varying arrival rates on the allocation efficiency and delay to provide the robustness and fairness of the DBA solution.
The complexity of the proposed DBA algorithm is influenced by the number of ONUs and the number of queues managed by each ONU.The gradient descent method used for updating the bandwidth allocation in each iteration adds to the computational complexity.Additionally, solving the KKT conditions involves iterative updates of the Lagrange multipliers and dual variables, which increases the complexity further.Providing a balance between accuracy and computational efficiency is critical, especially as the number of ONUs increases.Future work should focus on optimizing the algorithm's convergence time and computational overhead.
Coordination and Synchronization
In the FL-based PON system, the environment deals with a lot of CPE, which generates upstream traffic in PON.The control message technique builds the synchronization between CPE and ONU.The control message indicates which CPE is activated for the communication when CPE is connected to ONU.In other words, the control message builds the connection between CPE and ONU for better resource allocation.Strictly synchronizing time may not be necessary because each device performs the local training and global aggregation stages separately and independently.Each device has the ability to update its local model at its own pace and, when ready, interact with the aggregate server.Moreover, the aggregation windows method can determine which CPEs can send their local data to the ONU.Since precise strict synchronization is not required, this offers loose coordination and aids in managing the aggregation process.
Heterogeneity
In a TDM-PON, the downstream bandwidth is usually greater than the upstream bandwidth (the upstream is timely shared with all connected ONUs).However, the end devices must send their updated model parameters in FL, which the upstream bandwidth limitation can constrain.Moreover, these end devices might have different capabilities (e.g., computing power, memory, and bandwidth), which can delay the FL's aggregation process.
Energy Efficiency in PON
Energy efficiency is an important part of PON standards.The energy saving is dedicated to the ONUs, and the energy-efficient techniques are defined for the fiber link interface.Three energy-saving modes are concluded in the latest ITU-T G-988 recommendation [20]: cyclic, Doze, and Watchful sleep modes.
•
Cyclic sleep mode: The ONU transits its fiber links transmitter and receiver power mode between sleep and active cycles.In that case, the duration of each cycle is predefined (managed by the OLT), and could be configured based on the network requirements and the energy-saving algorithm.• Doze mode: the transmitter components of the ONU are turned "OFF", while the receiver is always "ON".• Watchful sleep mode: combines both the cyclic and doze modes by taking advantage of both modes and mitigating their drawbacks.
The energy-efficient mechanism in PON contributes to sustainable and environmentally friendly network operations, and on the other hand, it can provide efficient data communication.Selecting a reasonable energy-saving mode depends on the traffic pattern.For instance, the cyclic sleep mode can work perfectly in residential and business areas during the off-peak hours when there is not much traffic.Applying an energy-saving technique in PON under the FL scenario is an interesting topic.There are many challenges to address, such as developing a predictive energy-efficient mechanism, sleep and wake-up optimization, Coordination and Synchronization, and FL and energy efficiency, which are gaining researchers' traction; therefore, research in this area can produce an outstanding modernization, optimizing resources and introducing PON collaborative learning.
Conclusions
In this paper, we have addressed an advanced intersection of FL from the 6G-powered PON framework.FL can be implemented in a large applicability domain given the high capabilities of 6Gs, such as ubiquitous connectivity.With regard to our work, it is apparent that a comprehensive DBA solution becomes important.Accordingly, this article focuses on two optimizations to embed FL traffic into a PON architecture: (1) the FL traffic aggregation at the ONU can reduce a large portion of upstream traffic, which possibly eliminates up-stream PON traffic delay significantly; and (2) the DBA optimization must secure sufficient upstream bandwidth for the network and FL traffic, maintaining fairness between them.Results show that our proposed solution can provide significant improvement in delay and fairness compared to existing solutions.Moreover, we have discussed several challenges in the integration of the FL-based PON system.The availability of a large number of IoT devices and the ONUs need coordination and synchronization to provide better operations.The significance of Energy Efficiency in PON interestingly overlaps with multiple FL scenarios, revealing additional energy efficiency solutions creation prospects for PON.These challenges, together, shape the future research trajectory in the PON domain with the integration of FL in 6G.
Figure 3 .
Figure 3. Bandwidth allocation on a DBA cycle T c .
1 Figure 5 .
Figure 5. Bandwidth allocation example on a T c for 8 and 16 ONUs in PON.
Figure 7 .
Figure 7. Fairness under different numbers on ONUs.
Table 2 .
Comparison with different DBA algorithms. | 6,953.8 | 2024-08-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Engineering
Optomechanical crystal cavities (OMCCs) are fundamental nanostructures for a wide range of phenomena and applications. Usually, optomechanical interaction in such OMCCs is limited to a single optical mode and a unique mechanical mode. In this sense, eliminating the single mode constraint - for instance, by adding more mechanical modes - should enable more complex physical phenomena, giving rise to a context of multimode optomechanical interaction. However, a general method to produce in a controlled way multiple mechanical modes with large coupling rates in OMCCs is still missing. In this work, we present a route to confine multiple GHz mechanical modes coupled to the same optical field with similar optomechanical coupling rates - up to 600 kHz - by OMCC engineering. In essence, we increase the number of unit cells (consisting of a silicon nanobrick perforated by a circular holes with corrugations at its both sides) in the adiabatic transition between the cavity center and the mirror region. Remarkably, the mechanical modes in our cavities are located within a full phononic bandgap, which is a key requirement to achieve ultra high mechanical Q factors at cryogenic temperatures. The multimode bevavior in a full phononic bandgap and the easiness of realization using standard silicon nanotechnology make our OMCCs highly appealing for applications in the classical and quantum realms.
I. INTRODUCTION
Cavity optomechanics, the scientific field that studies the interaction between light and mechanics in solid cavities 1,2 , usually considers the coupling of a single mechanical mode with a single optical field [3][4][5] . But more complex physics and novel phenomena may arise when considering multiple mechanical modes coupled to a single optical mode [6][7][8] . Among others, the interest in multimode self-oscillating systems has led to emergent phenomena including dynamical topological phases 9 , analog simulators 10 , synchronization [11][12][13][14][15][16][17] and stability enhancement 18 , with the last two applications employing multiple confined mechanical modes coupled through an optical field.
In many of these multimode systems, the involved mechanical modes are not confined into the same physical structure. This is the case, for example, of confined mechanical modes in different optomechanical crystal cavities (OM-CCs) that are coupled via mechanical interaction 15 or coupled micromechanical oscillators that interact through an optical radiation field but are also physically localized at different resonators 12,18,19 . Conversely, multiple mechanical modes confined in the same structure and interacting with one common intracavity optical field have also been studied in the literature for a given number of mechanical oscillators [20][21][22] . Remarkably, most of these systems involve oscillators up to MHz frequencies, so a general route towards multiple GHz mechanical resonances in a single cavity is still missing. Recently, a bullseye optomechanical (OM) resonator enabling multiple mechanical and optical modes has been fabricated and tested 23 , but the vacuum OM coupling rates are about one of magnitude smaller than in silicon OMCCs 3,24 .
For some applications, it would also be interesting to have all these mechanical modes placed in a full phononic bandgap that prevents phonons to escape from the cavity. This could be particularly helpful in quantum applications at cryogenic temperatures, because of the enhancement of the mechanical Q factor in these conditions 25 . A reliable route to multiple phonon modes with large coupling rates would also open the door to versatile all-optical OM-based microwave signal synthesis 24 and processing 26 , which is especially valuable for application in wireless systems, in particular those requiring extreme compactness and lightweight (satellite communications) 27 .
Here, we propose a general method to get multiple mechanical modes with GHz frequencies and placed in a full phononic bandgap of a silicon OMCC. Starting with the design in 28 and demonstrated experimentally in 24 , we go a step further and demonstrate that engineering the central region of the cavity enables to systematically increase the number of mechanical modes whilst ensuring relatively large values of the OM coupling rate (g 0 /2π ≈ 400kHz). Although we use silicon as underlying material, this approach should also be of application when using other high refractive index materials to build the cavities. Such large values of g 0 allows us to easily transduce all the confined modes into the driving optical field. We demonstrate experimentally the existence of up to six modes in a single cavity, though our method could be used to get even more mechanical modes. All these features, together with the fabrication on a silicon chip using standard nanofabrication tools, suggest that our cavities could play a role in the development of multimode cavity optomechanics for different classical and quantum applications.
II. PHOTONIC AND PHONONIC BAND DIAGRAMS OF THE OMCC
Prior to the design of the defect cells and the cavity modes, we perform a broad analysis of the mirror region of the OMCC to optimize both the photonic and phononic bandgaps. Figure 1(a) sketches the unit cell used to build up our cavity. It consists of a 220 nm thick silicon nanobrick drilled with a circular hole and surrounded by lateral stabs or corrugations. The resulting photonic band diagram of this unit cell is depicted in Fig. 1(b), showing the TE-like (in blue) and the TM-like (in orange) bands, respectively. The gray shadowed area corresponds to the light cone and the blue shadowed area, between the third and fourth TE-like bands, denotes the quasi-TE bandgap, which will be used to confine the driving optical mode. It must be noted that this unit cell provides a large tunability in wavelength, as can be seen in Fig. 1(c), which depicts the evolution of the first TE bandgap as a function of the aspect ratio of the unit cell for three different period lattices. A broad tunability for wavelengths between 1200 nm and 2200 nm can be obtained, which provides a great versatility in the design. As our objective is to confine an optical mode around 1550 nm (depicted as a dashed line in Fig. 1(b,c)), from now on we will focus our studies in a period lattice for the mirror part of the cavity with a =500 nm.
Once we have the appropriate lattice period to confine the optical mode at the target wavelength, we study the mechanical properties of the mirror cell. One of the main advantages of this unit cell profile is that it holds a complete phononic bandgap 24,28 , as can be seen in Fig. 2(a). Additionally, an analysis in the phononic transmission spectra for an array of multiple mirror cells is shown in Fig. 2(b). Here, a higher number of mirror cells provides deeper transmission gaps, as expected from the existence of a phononic bandgap. Finally, the evolution of the full phononic bandgap for different aspect ratios is presented in Fig. 2(c). Again, the main advantage can be seen in the tunability of the system ranging almost all the microwave S-band (between 2 and 4 GHz). More details about the mechanical calculations are provided in the Supporting Information (Fig. S1).
III. DESIGN OF THE MULTIMODE OMCC
After obtaining the mirror unit cell through the calculations of the photonic and phononic band diagrams, we have tailored the defect and transition unit cells to be able to confine multiple mechanical modes inside the phononic bandgap. Previous designs just focused on a cavity where the mechanical mode was mostly confined into the lateral corrugations of the defect cell 24,28 . However, other mechanical modes, located in other lateral corrugations, could also lie in the bandgap. Figure 3(a) shows a top view of the silicon OMCC that we consider in this work, including the labeled number of transition cells that will be under study. As shown in Fig. 2, this cavity presents a full phononic bandgap for mechanical modes in at frequencies around 3-4 GHz (chosen by design). In order to understand the existence of multiple GHz mechanical modes in a single OMCC and with the final aim of setting a route towards a general method, we obtain the phononic band diagrams of the different unit cells in the transition region between the cavity center and the mirrors (blue region in Fig. 3(a)). Depending on the total number of transition cells (N T ), there will be a certain number of mechanical modes confined in the OMCC with frequencies in the phononic bandgap. Figure 3(b) shows the evolution of the frequency of the mechanical mode inside the bangap at the Γ point for different unit cells that represent the transition from the central defect (U D ) to the side mirror (U M ), at each constitutive cavity cell. The panels correspond to OMCCs with 6, 9 and 12 transition cells, respectively. This suggests that, when forming the OMCC, mechanical modes will appear at the calculated frequencies (depicted with dots) inside the total bandgap (drawn with a shaded area) and localized in the unit cells that form the transition between the center of the cavity and the mirrors.
Remarkably, as the total number of transition cells (N T ) increases, more mechanical modes appear inside the phononic bandgap, though the increase is not linear. Consequently, an OMCC created with more transition cells should result in more confined mechanical modes. This can be appreciated in Fig. 3(c), which shows the evolution of the Γ point frequen- cies of the mechanical modes inside the full phononic bandgap as a function of N T . Furthermore, the mechanical frequencies get closer as more transition cells are added, since the physical dimensions between successive cells change more smoothly. The next question to address is if all mechanical modes are well coupled with the optical field. To this end we calculate the OM coupling rates g 0 of the different mechanical modes of an OMCC with N T = 12. Figure 4(a) shows the optical mode of this OMCC. The electric field pattern shows a strong localization in the center of the cavity whilst the intensity exponentially decays as we move toward the mirror regions (see Fig. 4(b)). Once the optical mode is obtained, we separately simulate the fundamental mechanical modes of the OMCC. As it was explained before, not all the mechanical modes display large values of g 0 . Hence, for all of them we calculate the OM coupling rate g 0 with the optical field obtained in 4(a) to find the ones that have the largest g 0 . Figure 4(c) depicts the six mechanical modes that have the highest OM coupling rates. We consider both the photo-elastic (PE) as well as the moving interface (MI) effects 2 that contribute to the total g 0 . It is worth noting that the six mechanical modes correspond to the ones predicted in Fig. 3(c). Finally, Fig. 4(d) gathers the mechanical profiles of those modes, which are localized at the lateral corrugation of the defect and transition cells. Indeed, the first mode shows localization in the center of the cavity whilst the mechanical localization is displaced toward the mirror regions for higher-order modes. Noticeably, we get relatively large g 0 values even when the mechanical modes are localized close to the mirror regions and far from the cavity center.
IV. EXPERIMENTAL RESULTS
The optical and mechanical experimental characterization of a set of fabricated OMCCs with different transition cells has been performed with the setup sketched in Fig. 5. Here, a tunable laser is fed into the system and a variable optical attenuator (VOA) and a controller polarizer are used to set the required input power and polarization in the experimental measurements. Then, the optical signal is sent through an optical circulator. From port 2, the optical signal arrives to a tapered fiber loop which couples to the OMCC via evanescent field coupling. The transmitted signal arrives to a low frequency photodetector connected to the oscilloscope to measure the optical response of the system. On the other side, the reflected signal returns to the optical circulator (port 3) and its optical power is regulated by means of a VOA and an erbium doper fiber amplifier (EDFA). Finally, the output signal (with the mechanical modes transduced on the optical drive) is photodetected with a 12 GHz band photoreceiver and processed with a radiofrequency spectrum analyzer (RSA).
Regarding the optical response, the measurements were performed under a low laser input power in order to prevent the appearance of thermo-optic effect, which may result in a bistability asymmetric "saw-tooth" shaped transmission that gives rise to a shift in the optical resonance 29 . For a given design, a set of 12 OMCCs for each number of transition cells was fabricated and characterized. The fabrication process is described elsewhere 24 . Figure 6(a) shows two scanning electron microscope (SEM) images of fabricated OMCCs having 6 and 11 transition cells OMCCs. In comparison with the designed cavity, a fabrication-induced rounding of the lateral corrugations is clearly appreciated. The optical response was studied for each fabricated cavity as shown in Fig. 6. The signal obtained from characterizing each optical mode was a symmetric resonance where a Lorentzian fit was performed to retrieve the optical frequency and quality factor. As shown in Fig. 6(b), the cavity supported two optical modes, whose wavelength decreases as more transition cells are added to the cavity. On the other hand, the quality factor increases with N T for both modes as seen in Fig. 6(c), with some exceptions that can be attributed to fabrication irregularities. Figure 6(d) represents the electric field profile of both optical modes, which are localized at the defect and transition cells. These profiles were obtained by retrieving the real fabricated pattern from Concerning the mechanical response, a typical evolution of the measured normalized radiofrequency (RF) spectra showing the mechanical response as a function of the number of transition cells is depicted in Fig. 10(a). The increase in the number of mechanical modes with N T predicted in the simulations is also observed in the experiments. Although we did not find all predicted modes, probably because fabrication imperfections (as shown in Fig. 6(a)), the measured frequencies are around 4 GHz, which is close to the simulated values and ensures us that the modes come from the lateral corrugations of the defect and transition cells, since those are the only ones that can vibrate at those frequencies in the OMCC. The experimental mechanical modes obtained for other cavities are included in the Supporting Information.
A detailed analysis of the characterization of an OMCC with 10 transition cells is presented in Fig. 8. First, an analysis of the phononic band diagram for the fabricated structure was performed. As one of the main objectives here is to confine the mechanical modes into a total bandgap, we retrieved from the SEM images the real profile for the different unit cells of the mirror cavity and calculated the expected band diagram. The result can be seen in Fig. 8(a), where an inset of a mirror unit cell of the measured OMCC is presented. As expected, the cavity shows a full phononic bandgap around 4 GHz and the measured mechanical modes lies into it, as shown in Fig. 8(b). Here, it can be seen that, even after fabrication imperfections, it is quite in accordance with the mechanical modes predicted with the band diagram shown in Fig. 3(c). However, it must be noted that double peaks as the one corresponding to peak 2 can appear as a result in difference between the fabricated corrugations in cells of the same nominal dimensions.
To ascertain the mechanical mode profiles of each measured peak, we retrieved and simulated the actual fabricated OMCC profile from its SEM image. Figure 8(c) shows the mechanical mode profiles for the retrieved profile showing that, despite the mechanical motion is not totally confined in a single corrugation, as in the nominal cavity in Fig. 4, the position of mechanical displacement moves toward the extremes of the cavity as the frequency decreased, as expected from the numerical modelling.
V. CONCLUSIONS
In this work, we have proposed and demonstrated a method to engineer multiple mechanical modes with GHz frequencies within a full phononic bandgap in a silicon OMCC. The OMCC is formed by drilling circular holes and adding lateral corrugations to a released silicon nanobeam. By increasing the number of cells in the adiabatic transition between the cavity center and the lateral mirrors, more and more mechanical modes appear in the cavity. All the mechanical modes show reasonably large values of g 0 /2π up to 600 kHz which enable efficient transduction into a driving optical signal. Multiple applications can be envisaged, including multimode phonon lasers 8 , frequency up-and down-conversion of multiple wireless signals 26 or building chiral nano-optomechanical networks 7 .
Appendix A: Phononic band diagram and transmission simulations
Simulations of phononic bands were performed with COM-SOL Multiphysics 30 , as in previous works 24 . The evolution of the frequency at at the Γ symmetry point for the studied unit cell changing the parameters from the central defect to the mirror region is presented in Fig. S9(a). Figure S9(b) shows in more detail all the involved bands − including different symmetries − for the mirror unit cell. The most important feature to emphasize is the existence of a complete phononic bandgap where, by proper design, the frequencies of the engineered confined mechanical modes have to be placed. As noted in the main text, this should reduce the phonon leakage of the final structure as it prevents that the confined mechanical mode couples with modes of different symmetries existing in the mirror regions. In these simulations, we set Floquet Periodic Conditions (FPC) at the lateral boundaries of the structure, as shown in Fig. S9(c), and the remaining boundaries were kept as free. The dimension values of the mirror and the unit cell are the same as the ones used in the main text. Regarding the estimation of the number of expected mechanical modes to be confined in cavities with different transition cells, the analysis was similar to the one performed in S9(a). Here, the point is that, as the number of transition cells used to build up the cavity increases, there will be more unit cells with dimensions close to those of the center region with frequencies within the bandgap. Because of that, as presented in Fig. 3(b) in the main text, we can estimate the total number of mechanical modes just by analyzing the eigenfrequencies and eigenvalues of each unit cell. It must be noted, however, that the cavity may also support less localized mechanical modes, as presented in Fig. 3(d), but, as shown in the same figure, the estimation of the number of confined mechanical modes is still correct. Regarding the phononic transmission simulations, an scheme of the boundary conditions of the system is presented in Fig. S9(e). Here, we simulated the transmission coefficient of a signal generated at the load source (set as a boundary source in this case) and received at the receiver area. The transmission coefficient was calculated as the ratio total displacement (u) integrated in the source and the receiver as In the experiments, we were able to test OMCCs with different values of N T , all with the same nominal parameters according to our simulations. Figure S10 shows the measured RF spectra for different values of N T of cavities with the same nominal parameters as those reported in the main text but fabricated with a different e-beam dose. The appearance of multiple mechanical modes within the phononic bandgap is also evident here.
We also measured the mechanical Q factor of the transduced mechanical modes. The results are shown in Fig. S11, which represents the mechanical Q factor of the different mechanical modes for each value of N T . The mechanical quality factor was evaluated from the ratio between the center frequency of the peak and mechanical linewidth. The presented values were obtained through a lorentzian fit of each peak, as can be seen in the green fits in Fig. S10 for different cavity lengths. In all these panels we can also presented in red the total fit envelope of the system. It can be seen that the average value is around 1000, as expected in this kind of cavity when operated at room temperature.
We also compared the response of OMCCs with the same N T and identical nominal values, fabricated with the same ebeam does. The results for different fabricated OMCCs having N T = 6 are depicted in Fig. S12(a) for a set of 6 measurements, for the sake of clarity. It can be observed that the spectral dispersion is low, as seen in Fig. S12(b) that shows the mean value and the standard deviation for a set of 12 cavities.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 4,842.6 | 1969-12-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
ON CONNECTING WEYL-ORBIT FUNCTIONS TO JACOBI POLYNOMIALS AND MULTIVARIATE ( ANTI ) SYMMETRIC TRIGONOMETRIC FUNCTIONS
The aim of this paper is to make an explicit link between the Weyl-orbit functions and the corresponding polynomials, on the one hand, and to several other families of special functions and orthogonal polynomials on the other. The cornerstone is the connection that is made between the one-variable orbit functions of A1 and the four kinds of Chebyshev polynomials. It is shown that there exists a similar connection for the two-variable orbit functions of A2 and a specific version of two variable Jacobi polynomials. The connection with recently studied G2-polynomials is established. Formulas for connection between the four types of orbit functions of Bn or Cn and the (anti)symmetric multivariate cosine and sine functions are explicitly derived.
Introduction
Special functions associated with the root systems of simple Lie algebras, e.g.Weyl-orbit functions, play an important role in several domains of mathematics and theoretical physics, in particular in representation theory, harmonic analysis, numerical integration and conformal field theory.The purpose of this paper is to link Weyl-orbit functions with various types of orthogonal polynomials, namely Chebyshev and Jacobi polynomials and their multivariate generalizations, and thus to motivate further development of the remarkable properties of these polynomials in connection with orbit functions.
The collection of Weyl-orbit functions includes four different families of functions called C-, S-, S s -and S l -functions [6,15,16,25].They are induced from the sign homomorphisms of the Weyl groups of geometric symmetries related to the underlying Lie algebras.The symmetric C-functions and antisymmetric Sfunctions also appear in the representation theory of simple Lie algebras [32,34]; the S-functions appear in the Weyl character formula and every character of irreducible representations of simple Lie algebra can be written as a linear combination of C-functions.Unlike C-and S-functions, S s -and S l -functions exist only in the case of simple Lie algebras with two different lengths of roots.
A review of several pertinent properties of the Weylorbit functions is contained in [8,15,16,25].These functions possess symmetries with respect to the affine Weyl group -an infinite extension of the Weyl group by translations in dual root lattice.Therefore, we consider C-, S-, S s -and S l -functions only on specific subsets of the fundamental domain F of the affine Weyl group.Within each family, the functions are continuously orthogonal when integrated over F and form a Hilbert basis of squared integrable functions on F [25,27].They also satisfy discrete orthogonality relations which is of major importance for the processing of multidimensional digital data [8,10,27].Using discrete Fourier-like transforms arising from discrete orthogonality, digital data are interpolated in any dimension and for any lattice symmetry afforded by the underlying simple Lie algebra.Several special cases of simple Lie algebras of rank two are studied in [28][29][30].
The properties of orbit functions also lead to numerical integration formulas for functions of several variables.They approximate a weighted integral of any function of several variables by a linear combination of function values at points called nodes.In general, such formulas are required to be exact for all polynomial functions up to a certain degree [3].Furthermore, the C-functions and S-functions of simple Lie algebra A 1 coincide, up to a constant, with the common cosine and sine functions respectively.They, are therefore, related to the extensively studied Chebyshev polynomials and, consequently, to the integration formulas, quadratures, for the functions of one variable [5,31].In [24], it is shown that there are analogous formulas for numerical integration, for multivariate functions, that depend on the Weyl group of the simple Lie algebra A n and the corresponding C-and S-functions.The resulting rules for functions of several variables are known as cubature formulas.The idea of [24] is extended to any simple Lie algebra in [9,25,26].Optimal cubature formulas in the sense of the nodal points that are required are known only for S-and S s -functions.
Besides the Chebyshev polynomials, the Weyl-orbit functions are related to other orthogonal polynomials.For example, orbit functions of A 2 and C 2 coincide with two-variable analogues of Jacobi polynomials [21].It can also be shown that the C-, S-, S s -and S l -functions arising in connection with simple Lie algebras B n and C n become, up to a constant, (anti)symmetric multivariate cosine functions and (anti)symmetric multivariate sine functions [17].Note that these generalizations lead to multivariate analogues of Chebyshev polynomials and are used to derive optimal cubature formulas.Therefore, this fact indicates that it might be possible to obtain such formulas for all families of orbit functions.As for Chebyshev polynomials, it is of interest to study the accuracy of approximations and interpolations using cubature formulas.
This paper starts with a brief introduction to Weyl groups in Section 2. Then there is a review of the relations of the Weyl-orbit functions with other special functions which are associated with the Weyl groups.In Section 3.1, the connection of the C-and S-functions of one variable with Chebyshev polynomials is recalled.In Sections 3.2 and 3.4, we show that each family of Weyl-orbit functions corresponding to A 2 and C 2 can be viewed as a two-variable analogue of Jacobi polynomials [21].In Section 3.4, we also provide the exact connection with generalizations of trigonometric functions [34].
Weyl groups of simple Lie algebras
In this section, we summarize the properties of the Weyl groups that are neeeded for definition of orbit functions.There are four series of simple Lie algebras Note that we use the standard normalization for the lengths of roots, namely α i , α i = 2 if α i is a long simple root.
In addition to the basis of R n consisting of the simple roots α i , it is convenient for our purposes to introduce the basis of fundamental weights ω j given by ω j , α This allows us to express the weight lattice P defined by as Z-linear combinations of ω j .The subset of dominant weights P + is standardly given as We consider the usual partial ordering on P given by µ λ if and only if λ − µ is a sum of simple roots with non-negative integer coefficients.
To each simple root α i there is a corresponding reflection r i with respect to the hyperplane orthogonal to α i , The finite group W generated by such reflections r i , i = 1, . . ., n is called the Weyl group.For the properties of the Weyl groups see e.g.[12,14].
In the case of simple Lie algebras with two different lengths of the roots, we need to distinguish between short and long simple roots.Therefore, we denote by ∆ s the set of simple roots containing only short simple roots and we denote by ∆ l the set of long simple roots.We also define the following vectors:
Weyl-orbit functions
Each type of Weyl-orbit function arises from the sign homomorphism of Weyl groups σ : W → {±1}.There exist only two different sign homomorphisms on W connected to simple Lie algebras with one length of the roots: identity, denoted by 1, and the determinant [8,25].They are given by their values on the generators r i of W as In the case of simple Lie algebras with two different lengths of roots, i.e.B n , C n , F 4 and G 2 , there are two additional sign homomorphisms denoted by σ s and σ l and given as Labelled by the parameter a ∈ R n , the Weyl-orbit function of the variable b ∈ R n corresponding to sign homomorphism σ is introduced via the formula Each sign homomorphism 1, det, σ s , σ l determines one family of complex valued Weyl-orbit functions, called C-, S-,S s -and S l -functions respectively, and denoted by Several remarkable properties of Weyl-orbit functions, such as continuous and discrete orthogonality are usually achieved by restricting a to some subsets of the weight lattice P (see for example [8,10,15,16,27]).Note also that symmetric C-functions and antisymmetric S-functions appear in the theory of irreducible representations of simple Lie algebras [32,34].
It is also convenient to use an alternative definition of the Weyl-orbit functions via sums over the Weyl group orbits [15,16,25].These orbit sums C a , S a+ , S s a+ s , S l a+ l differ from Φ a , ϕ a+ , ϕ s a+ s , ϕ l a+ l only by a constant with |Stab W c| denoting the number of elements of W which leave c invariant.
Case A 1
The symmetric C-functions and antisymmetric Sfunctions of A 1 are, up to a constant, the common cosine and sine functions [15,16], It is well known that such functions appear in the definition of the extensively studied Chebyshev polynomials [5,31].Several types of Chebyshev polynomials are widely used in mathematical analysis, in particular, as efficient tools for numerical integration and approximations.The Chebyshev polynomials of the first, second, third and fourth kind are denoted by T m (x), U m (x), V m (x) and W m (x), respectively.If x = cos(θ), then for any m ∈ Z ≥0 it holds that Therefore, for specific choices of parameter a 1 and 2πb 1 = θ, we can view the Weyl-orbit functions of A 1 as these Chebyshev polynomials.
Case G 2
Since, for G 2 its two simple roots are of different lengths, all four families of C-, S-, S s and S lfunctions are obtained [15,16,25].The symmetric and antisymmetric orbit functions are given by the following formulas for a = a 1 ω 1 +a 2 ω 2 and b The hybrid cases can be expressed as These functions have been studied in [22], under the notation CC k , SS k , SC k and CS k with no. / Weyl Orbit Functions and (Anti)symmetric Trigonometric Functions where the variable H . Indeed, performing the following change of variables and parameters, we obtain the following relations.
In [22], the functions C a , S a+ /S , S s a+ s /S s s and S l a+ l /S l l are expressed as two-variable polynomials in variables and it is shown that they are orthogonal within each family with respect to a weighted integral on the region (see Fig. 2) containing points (x, y) satisfying with the weight function w α,β (x, y) equal to for S l -functions.
Cases B n and C n
It is shown in this section that the C-, S-, S sand S l -functions arising from B n and C n are related to the symmetric and antisymmetric multivariate generalizations of trigonometric functions [17].
The symmetric cosine functions cos + λ (x) and the antisymmetric cosine functions cos − λ (x) of the variable x = (x 1 , . . ., x n ) ∈ R n are labelled by the parameter λ = (λ 1 , . . ., λ n ) ∈ R n and are given by the following explicit formulas, where S n denotes the symmetric group consisting of all permutations of numbers 1, . . ., n, and sgn(σ) is the signature of σ.The symmetric sine functions sin + λ (x) and the antisymmetric sine functions sin − λ (x) are defined similarly, Firstly, consider the Lie algebra B n and an orthonormal basis {e 1 , . . ., e n } of R n such that α i = e i − e i+1 for i = 1, . . ., n − 1 and α n = e n .
If we determine any a ∈ R n by its coordinates with respect to the basis {e 1 , . . ., e n }, then it holds for the generators r i , i = 1, . . ., n − 1 and r n of the Weyl group W (B n ) of B n that r i (a 1 , . . ., a i , a i+1 , . . ., a n ) = (a 1 , . . ., a i+1 , a i , . . ., a n ), Therefore, W (B n ) consists of all permutations of the coordinates a i with possible sign alternations of some of them, and we actually have that W (B n ) is isomorphic to (Z/2Z) n S n [12].This implies that Since det is a homomorphism on W (B n ), we also obtain Similar connections are valid for S s -functions and S l -functions, Since Lie algebras B n and n are dual to each other, we can deduce that the symmetric and antisymmetric generalizations are also connected to the Weyl-orbit functions of C n .In order to obtain explicit relations, one can proceed by analogy with case B n and introduce an orthogonal basis {f 1 , . . ., f n } such that for i = 1, . . ., n − 1 We denote by ãi the coordinates of any point a ∈ R n with to the basis {f 1 , . . ., f n }, i.Thus, proceeding as before, we derive the following.Note that the S s -functions are related to cos − a and the S l -functions are related to sin + a in the case of C n , whereas the S s -functions correspond to sin + a and the S l -functions correspond to cos − a if we consider the simple Lie algebra B n .This follows from the fact that the short (long) roots of C n are dual to the long (short) roots of B n .
Concluding remarks
(1.) Symmetric and antisymmetric cosine functions can be used to construct multivariate orthogonal polynomials analogous to the Chebyshev polynomials of the first and third kind.The method of construction is based on decomposition of the products of these functions and is fully described in [7].
To build polynomials analogous to the Chebyshev polynomials of the second and fourth kind, it seems that the symmetric and antisymmetric generalizations of sine functions have to be analysed.This hypothesis is supported by the decomposition of the products of two-dimensional sine functions which can be found in [11].
( .)Another approach to generalization of the multivariate polynomials related to the Weyl-orbit functions stems from the shifted orthogonality of the orbit functions developed in [2].This generalization encompasses shifts of the points of the sets over which the functions are discretely orthogonal, and also shifts of the labeling weights.As a special case it contains for A 1 all four kinds of Chebyshev polynomials.The existence of analogous polynomials obtained through this approach and their relations to already known generalizations deserves further study. (3.)Besides the methods of polynomial interpolation and numerical integration, the Chebyshev polynomials are connected to other efficient methods in numerical analysis such as numerical solutions of differential equations, solutions of difference equations, fast transforms and spectral methods.The existence and the form of these methods, connected in a multivariate setting to Weyl-orbit functions, are open problems.
A 2 Since
, for A 2 the two simple roots are of the same length, there are only two corresponding families of Weyl-orbit functions, C-and S-functions.For a = a 1 ω 1 +a 2 ω 2 and b = b 1 α ∨ 1 +b 2 α ∨ 2 the explicit formulas of C-functions and S-functions are given by C a (b) = 1 |Stab W a| e 2πi(a1b1+a2b2)
Figure 1 .
Figure 1.The region of orthogonality bounded by the three-cusped deltoid.
Figure 2 .
Figure 2. The region of orthogonality for the case G2.
Figure 3 .
Figure 3.The region of orthogonality bounded by two lines and parabola. | 3,648.4 | 2016-08-31T00:00:00.000 | [
"Mathematics"
] |
OPTIMIZATION OF BIODIESEL PRODUCTION FROM USED OIL OF FISH PROCESOR USING SODIUM METHOXIDE AND SULPHONATED EGG-SHELLS AS CATALYST
Finding a long-term solution to the world's growing dependence on conventional energy sources and the depletion of fossil fuels is what worries scientists the most right now. An alternative that appears promise is biodiesel. It was investigated how to increase the generation of biodiesel from leftover fish processors' oil using sodium methoxide and sulphonated eggshells as a catalyst, based on Box-Behnken design. The greatest yield of 96.81% was achieved at 1:12 oil to methanol ratio, 65 °C reaction temperature, 90 minutes, and 0.5 w/w% catalysts loading. The variables constant, methanol to oil ratio, reaction temperature, reaction time, reaction temperature* reaction temperature, and methanol to oil ratio*catalyst load all had a significant effect on the biodiesel production, according to the response surface regression. The model does a good job of accounting for the link between biodiesel and process variables. Thus, residual fish processor oil might be effectively converted into more advantageous and environmentally benign biodiesel fuel.
INTRODUCTION
Due to the expanding population and rapid industrialization, the use of oils derived from fossil fuels has increased recently.The sustainability of the world's energy supply is continuously threatened by the need for fossil fuels in industries like heating and electricity generation.Additionally, the development of internal combustion engines and the transportation industry are causing a quicker pace of exploitation of petroleum reserves.In addition, using fossil fuels causes environmental pollution.Because of the decline in the usage of fossil fuels and the harm they do to the environment, finding a significant alternative energy source is necessary (Behçet, 2011).Therefore, the necessity to address the issue of fossil fuel shortages and lower the cost of energy must be raised.The only renewable energy source that effectively addresses the issue of the market's escalating energy fuel prices is biomass.Because the Organization of the Petroleum Exporting Countries' (OPEC) production of petroleum was insufficient, a thorough investigation was made to identify other sources of biodiesel (Kahn et al., 2002).According to Edlund et al. (2002), it is viewed as an alternative fuel that tackles the problems of environmental deterioration and a global energy deficit.Additionally, it will take the place of petro-diesel, reducing the amount of pollutants produced by combustion equipment (Lin and Lin, 2006).Biodiesel production sources that could be used include vegetable oil, animal oil, waste oil, waste from plants and animals, agricultural wastes, and municipal waste (Balat, 2008).Recent studies have concentrated on the production of biodiesel from vegetable oil, spent cooking oil, and industrial oil.The resulting biodiesel may be recycled more than once, has a higher flash point, degrades more quickly, and produces less pollution.Since biodiesel has similar physical and chemical qualities to diesel, it can be combined with diesel oil and utilised in engines.(Zhao et al., 2012).For the first generation of biofuel, canola oil, palm oil, jatropha, soya bean, and other plants have been the primary sources of feedstock (Ong et al., 2011) or fats oil (Ma et al., 1998).
However, the first-generation feedstock's sustainability and economic viability come under attack.Environmentalists assert that large-scale biodiesel production's development of oil crop plantations has led to deforestation in several nations, including Indonesia, Malaysia, Argentina, and Brazil (Gao et al., 2011).Competition for arable land for food and fiber plantations, excessive water and fertilizer demand, and subpar agricultural techniques all worsen the situation.Consequently, it was argued that the first-generation feedstock was ineffective since it had an impact on global food markets and food security (Noraini et al., 2014).Second-generation biodiesel derived from waste and inedible crops is being studied as a solution to this issue.The availability of farmland to create the by-products for commercial-scale biodiesel production, however, is the greatest obstacle to the development of second-generation biodiesel.Therefore, alternative raw materials for biodiesel production that has less impact on the food industry need to be explored (Cheng and Timilsina, 2011).In the last ten years, a promising raw source for the manufacturing of biodiesel is algae.. Numerous studies have further demonstrated that due to its excellent fuel economy and environmental index, algae is a better feedstock for biodiesel production than first and second-generation biodiesel (Brennan and Owendi, 2010).Microalgae can be grown on non-agricultural land, which reduces the need for more arable area for oil crops (Gumba et al., 2016).One of the raw components for making biodiesel is thought to be fish waste.According to Eslick et al. (2009), 60-70% of the total amount of fish produced is used for human consumption as well as the creation of fish meal and oil.Given that fish oil is thought to offer significant medical advantages (Sharma et al., 2014).In general, not all fish portions are consumed; some are thrown away.The components, which are not edible and are regarded as trash, include the spine, skin, heads, tails, and stomachs.There are more than 60 different fatty acids in fish oil, according to the literature.Of these, almost 80-85% are classified into four categories of fatty acids, such as (a) C14:0 and C16:0, (b) C16:1 and C18:1, (c) C20:1 and C22:1, and (i.e ) C20:5, C22:5 and C22:6.Fish oil is fortified with eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which makes up more than 90% of all polyunsaturated fatty acids (PUFA).Currently, fish oil is a potential feedstock for the production of biodiesel, but research on the biodiesel produced is limited (Nelson, 2006).According to the study, fish waste has limited uses.Waste is around 50% of all fish processing.The overall oil content ranges from 40% to 65% (Barros et al., 2010).Therefore, there is an opportunity to use this fish waste to convert it into biodiesel.Discarded fish waste has reportedly been found to be able to be turned into useful goods like biodiesel, slurry, and biogas for energy production (Yahyaee et al., 2010)
MATERIALS AND METHODS
The method involved in this study entails sample collection and pretreatment, reagents preparations, procedures for physicochemical analyses and analytical instrumentations.
Pre-treatment of Used Oil
The used oil from the fish processor underwent the following pre-treatment.By using filtration, solid particles, salt, pepper, and spices were eliminated.The oil was treated with sodium thiosulfate (Na2SO4), which forms clumps when it absorbs water.Decantation was used to remove the crystal.To get rid of any leftover contaminants, used oil was combined with nhexane (1:3 oil/n-hexane v/v) (Hossain et al., 2008).
Preparation of Catalysts Preparation of Sulphonated Egg Shell Catalyst
Egg shell sample was gathered in Sokoto's Mana Area.According to the approach described by Babatope and Racheal (2020), sulphonated CaO catalyst was made from used egg shells by calcination.The egg shell was sun-dried after being cleaned with distilled water.The cleaned egg shell was crushed and heated to 900 o C in the furnace 4 hours.The produced calcined sample was repeatedly rinsed in hot distilled water until the pH was 7, and then it was dried for 24 hours at 70 o C in the oven..A sulphonated acid catalyst was prepared from a dried calcined sample by combining it with 98 percent sulphuric acid in a closed autoclave and heating it at 160 o C for 6 hours.Washing with hot distilled water at (70 o C) removed the sulphuric acid that hadn't yet reacted.The catalyst was thoroughly dried in an oven at 70 o C after being washed (Rashid et al., 2019;Muhammad et al., 2017).
Plate 1: Egg Shell Plate 2: Sulphonated Egg Shell Transesterification of Fish Waste Oil 500 cm 3 conical flask was filled with a 17 g oil sample, which was then gradually added to with (100 cm 3 ) sodium methoxide.Heat was applied to the reaction mixture until it reached a temperature of 65 o C for 60 mins with continuous stirring.After 60 minutes The mixture was put into a (500 cm 3 ) separating funnel and given 24 h to settle.Two distinctive layers was observed, the upper layer is the biodiesel; the bottom layer is the glycerol and is drawn out.The same procedure was reported using 100 cm 3 of methanol and 0.5 w/w sulphonated egg shell in replacement of sodium methoxide.Finally, the same procedure was repeated , but with the use of a combined catalyst (sodium methoxide and sulphonated eggshell).
% yield = /Initial volume of oil fed into the processX 100
Experimental Design
On MINITAP 17 statistical software, the experiment was designed using the Box-Behnken response surface method.
The effect of four quantitative variables; reaction temperature Methanol to oil ratio, reaction time, catalyst amount and one categorical factor; catalyst were investigated.The design generated a total of 54 runs (randomized).The optimum parameters were determined for the highest yield of diesel.The maximum yield 102.11% that may be attained when the target is set to 100 is predicted by the optimization plot in Fig. 3.1.The methanol to oil ratio needed to be 1:12, the temperature needed to be 65 o C, the reaction duration needed to be 120 minutes, and the catalyst load needed to be 0.55w/w in order to get the maximum yield as predicted by the statistical model (optimization Plot).The experimental yield was 96.81%, however the optimization plot projected a yield of 102.11%, which is practically identical to it.It demonstrates how regression analysis utilising known catalyst and other reaction factors could optimise biodiesel prediction.
Effect of Operating Variables on Biodiesel
The effects of Methanol to oil ratio, reaction time, reaction temperature, and catalyst load were studied using an optimization plot while the interactions of variables were studied using Contour plot.
Effect of Methanol to oil ratio on Biodiesel Yield.
The esterification and transesterification steps in the manufacture of biodiesel both require the right amount of methanol to alter the equilibrium and increase the creation of fatty acid methyl ester.Figure 1 demonstrates an increase in the methanol to oil ratio from 1:6 to 1:12.The amount of main biodiesel rises, which can be a result of more methanols being available to react with the oil and make biodiesel.The methanol-to-oil ratio of 1:12 produced the highest yield.
Effect of Reaction Temperature on Biodiesel Yield
A picture of how reaction temperature affects biodiesel is shown in Figure 1.The biodiesel yield increases dramatically as the temperature rises from 55 to 65 o C. Transesterification required some initial thermal energy because it was an endothermic reaction (Samart et al., 2009).High temperatures are not recommended, though, because when the temperature raises to the point at which methanol reaches its boiling point, the substance vapourizes and produces many bubbles that stifle the reaction and reduce biodiesel yield (Long et al., 2010).
Effect of Reaction Time on Yield
According to Samart et al., (2009), Figure 4.3 shows that the biodiesel output rapidly increases from 60 to 120 minutes of reaction time until biodiesel equilibrium is attained.After equilibrium is reached, the reaction is no longer reversible.Figure 3 shows that at a constant methanol to oil ratio 9, the yield increases with reaction time such that the highest yield > 90 % can be attained when the reaction time is between 102-120 min provided the reaction catalyst is between 0.50-1.15(w/v).Similarly, the lowest yield of the reaction <80 is attainable only when the reaction time is > 62 min and the catalyst >1.30, in essence, regardless of the amount of catalyst utilized, a higher yield is attainable as the reaction time increases.Figure 5 above shows that the yield of the reaction increases with the volume of methanol to oil ratio and the reaction time.
Effect of Catalyst Load on Biodiesel Yield
Yields > 90 % are obtainable only when the reaction time is > 110 min with a methanol to oil ratio of >10 w/v.Similarly, at a time of 72 min the lowest yield of the reaction is attainable at a methanol to oil ratio > 7 w/v.Hence, as both the time and the volume of methanol to oil ratio increases, the yield of the reaction also increases.
Figure 6: Contour plot of interaction of catalyst load and methanol to oil ratio on biodiesel yield.
Figure above shows that the yield of the reaction increases with the volume of methanol to oil ratio.Yields > 92.5 % are obtainable only when the catalyst is >0.50 w/v with a methanol to oil ratio of >11.Similarly, at a catalyst of >0.60 w/v lowest yield of the reaction is attainable at a methanol to oil ratio > 6.Hence, regardless of catalyst as the volume of methanol to oil ratio increases, higher yield of the reaction also increases.
Figure 7: Contour plot of interaction of reaction temperature and methanol to oil ratio on biodiesel yield.
Figure 7 above shows that a higher yield >90 % is attainable when the methanol to oil ratio is >10 such that the volume of methanol to oil ratio is >11 at a temperature >61 o C. Also, the lowest yield <70 % is obtainable when the temperature is >56 o C and methanol to oil ratio <12 w/v.Hence, as both temperature and methanol to oil ratio increases, biodiesel yield also increases.
Response Optimization and Validation of Used Oil
Based on the model obtained to confirm the optimization results, the Box Behnken design in the minitab statistical tool was able to work as an optimal design for the intended response.With the ideal process conditions of Methanol to oil ratio 12 and 12, temperature 65 and 64.5250, reaction time (min) 120 and 120, and catalyst (w/v) 0.5 and 0.901872 in the validation process, the expected outcomes for optimal solutions were achieved through optimization as shown in Table 3.3However, it predicted a maximum yield of 102.113 and 96.810 with desirability of 1 and 1 respectively.From a validation experiment performed at the values of the process variables predicted in dissolutions 1 and 2, dissolution yields of 102.02% and 97.40% were obtained.The optimized predicted and experimentally validated results for the two solutions, 102.113% and 96.810%, were found to be 102.02% and 97.40%, respectively.Since the predicted yields of 102.113% and 96.810% are similar to the experimental yields of 102.02% and 97.40%, it indicates a relative deviation of 0.093% and 0.59%, respectively, which makes the optimisation model more desirable.
CONCLUSION
The Box-Behnken method proved successful in optimizing the transesterification reaction of used fish processors' oil to biodiesel, yielding an optimal yield of 96.81% at the optimum transesterification conditions of 65 o C, 90 min, 1:12 methanol to oil ratio and 0.5 w/w%.According to the response surface regression, the biodiesel yield was statistically significantly affected by the variables constant, methanol to oil ratio, reaction temperature, reaction time, reaction temperature* reaction temperature, and methanol to oil ratio*catalyst load.
-0.1925 Temperature (0 o C)*Temperature (0 o C) -0.00294Time (mins)*Time (mins) -2.49Catalyst (w/v)*Catalyst (w/v) + 0.031 Methanol to Oil Ratio*Temperature (0 o C) -0.0195 Methanol to Oil Ratio*Time (mins) -3.18 Methanol to Oil Ratio*Catalyst (w/v) -0.0195 Temperature (0 o C)*Time (mins) + 0.023 Temperature (0 o C)*Catalyst (w/v) -0.054Time (mins)*Catalyst (wexamine the impact of transesterification factors, response surface regression was utilized.To determine if a variable or its interaction are statistically significant or not, the p-value was utilized.The variable is considered statistically insignificant if the P-value is greater than 0.05, and vice versa.The models correlation coefficient R2 is 70.30%, which shows that the variable fit the scenario.The variables constant, Methanol to oil ratio, Reaction Temperature, Reaction time, Reaction temperature* Reaction temperature and Methanol to oil ratio*Catalyst Load were statistically significant on Biodiesel yield.The variables catalyst load, Methanol to Oil Ratio*Methanol to Oil Ratio, Methanol to Oil Ratio*Reaction Temperature, Reaction Time(mins)*Catalyst Load and reaction temperature*Catalyst Load have P-Value greater than α-value which shows that they are statistically insignificant on biodiesel yield.
Figure 4 .FJSFigure 2 :
Figure 2: Contour plot of the interaction of catalyst load and reaction temperature on biodiesel yieldFigure2above shows that at a catalyst between 0.5-0.75w/v, the dissolution yield increases with temperature such that yields greater than 90 % can only be attained when the temperature is between 62.7-65 o C. Similarly, the lowest yield was attained when the dissolution temperature is between 55-55.2o C and a catalyst between 0.5-1.50w/v, hence, regardless of the amount of catalyst utilized, a higher yield is obtainable provided the temperature increases.
Figure 3 :
Figure 3: Contour plot of interaction of catalyst load and reaction time on biodiesel yield
Figure 4 :Figure 4
Figure 4: Contour plot of interaction reaction temperature and reaction time on biodiesel yield.
Figure 5 :
Figure 5: Contour plot of interaction of reaction time and methanol to oil ratio on biodiesel yield.
PRODUC… Ayatullahi et al., FJS RESULT AND DISCUSSION Optimization Process of Biodiesel Yield of Used Oil using Sodium Methoxide and Sulphonated Egg Shell as Catalyst
The results of process parameters optimization namely: methanol to oil ratio, temperature, catalyst load and reaction time for biodiesel yield is shown in Table4.6.The highest biodiesel yield of 96.81% was obtained at 1:12 methanol to oil ratio, temperature at 65 o C, time at 90minutes and catalyst load of 1.0 %.While the lowest yield of 60.89 % was obtained at 1:12 methanol/oil ratio, temperature at 55 o C, time at 60 minutes and catalyst load 1 %. | 3,927 | 2023-08-30T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Engineering"
] |
Two-Photon Laser Lithography of Active Microcavity Structures
Fabrication of active fluorescent microstructures with given parameters is an important task of integrated optics. One of the most efficient methods of fabrication of such microstructures is two-photon laser lithography. However, most polymers used in this technology have a relatively low quantum yield of fluorescence. In this work, the properties of microcavity structures obtained by the indicated method from hybrid polymers with addition of various dyes have been studied. The possibility of formation of high-quality microstructures from activated polymers, conservation of their luminescent properties after polymerization under intense laser irradiation, and reduction of the exposure of two-photon laser lithography by two orders of magnitude in the presence of Coumarin-1 dye has been demonstrated. The nonlinear optical microscopy study has shown that the spatial distribution of scattered fluorescence in microcavity structures based on the polymer with the dye corresponds to the excitation of cavity modes or whispering gallery modes.
Fabrication of microstructures with exactly specified geometrical parameters is important for rapidly developing research fields such as integrated optics and biophotonics [1][2][3][4]. Two-photon laser lithography (TPLL) is one of the methods of formation of such structures and performs well in fabrication of microcavities, optical microelements (microprisms and microlenses), waveguides, etc. [5][6][7][8][9][10]. The main advantage of this method is the pronounced locality of the action area compared to other kinds of optical lithography, which ensures the resolution better than 50 nm when using near infrared laser radiation as pump radiation [11]. Active development of TPLL led to the appearance of methods that make it possible to more accurately control the quality of the surface and the shape of the resulting microstructure, which is particularly important for microcavities of whispering gallery modes [12].
One of the main restrictions of the TPLL technology for the formation of active photonic microstructures, i.e., structures where effects of interest occur at frequencies different from the pump radiation frequency, is a low quantum yield of fluorescence of the initial polymer. This problem can obviously be solved by adding dyes or fluorescent nanoparticles (e.g., quantum dots) to the polymer that serve as active elements in the microstructure. A hollow cylinder from the acrylate polymer with Rhodamine B dye [13] and other structures [5,14] have already been fabricated with this approach.
A similar method was used to fabricate active microstructures based on the OrmoComp polymer [15], which belongs to hybrid (organic-inorganic) polymers and is one of the most promising candidates for TPLL [16]. Lithography with this polymer activated by the Pyrromethene 597 dye has already provided disk microcavities with a diameter of about 50 μm and a Q-factor of more than 10 6 [17], which is a high value for cavities of such size. In this work, using two-photon laser lithography with OrmoComp polymer, we fabricate a number of microstructures of various shapes and characteristic sizes up to 25 μm with Coumarin-1 dye, mixture of Rhodamine-640 and Rhodamine-590 dyes (below, microstructures with Rhodamine) in equal mass concentrations, reveal features of two-photon lithography for polymer activated by various dyes, and demonstrate nonlinear luminescent properties of these structures.
Radiation of an Avesta Tif-DP femtosecond Ti:sapphire laser with direct diode pump, a wavelength of 780 nm, a pulse repetition frequency of 80 MHz, and a pulse duration of 60 fs was used as pump radiation for TPLL. Pump radiation was guided by mirrors through a telescope with a magnification factor of 0.5 to an acousto-optical modulator. Then, the diffracted beam passed through the telescope with a magnification factor of 5 joined with a spacial filter and reached
OPTICS AND LASER PHYSICS
an X-Y galvanoscanner located at the focus of the 4F system with a magnification factor of 2. The input lens of a Nikon Plan APO 60x immersion objective with a numerical aperture of 1.4, which was fixed on a piezoelectric translator with a displacement range of 40 μm, was placed at the second focus of this system. The 4F system was placed vertically, and the optical system ensured the total magnification factor that allowed one to match the beam diameter and the size of the input aperture of the objective. A three-coordinate table, which was displaced by means of stepper motors and on which Thorlabs CG15CH cover glass with liquid polymer drop or film was located for printing of microstructures, was placed above the objective. The region of sharp printing by means of the galvanoscanner had dimensions of 100 × 100 × 40 μm; the diameter and height of the voxel were 0.4 and 1 μm, respectively.
To mix dye with OrmoComp polymer, we used OrmoDev developer, which is a mixture of two solvents, isopropanol and methyl isobutyl ketone, in which dyes used in this work were well dissolved. The solution of OrmoDev with dye was mixed with Ormo-Comp in a mass ratio of 1 : 2. The cover glass was preliminarily kept for 30 min in piranha solution for cleaning, was then placed on the table of a centrifuge with a water film, and was dried by centrifuging in an inert atmosphere. Then, OrmoPrime08 adhesion promoter (primer) was deposited on the surface of the dry substrate rotating at a speed of 3000 rpm. The primer was centrifuged for 30 s; then, the structure was dried at a temperature of 150°C for 15 min. The resulting primer film had a submicron thickness. After that, the solution of polymer with dye was deposited on the cover glass with primer, was centrifuged also at a speed of 3000 rpm for 30 s, and was then dried at a temperature of 80°C for 15 min. As a result, we obtained a film with a thickness of 10-15 μm appropriate for TPLL.
Films with mass concentrations of 0.04 and 0.083 of Rhodamine and Coumarin-1 dyes, respectively, in OrmoComp polymer were prepared. The radiation fluences for the high-quality polymerization of microstructures in the process of two-photon laser lithography were determined to be 4 × 10 -5 , 4 × 10 -5 , and 5 × 10 -7 J/voxel for pure OrmoComp, with Rhodamine dye, and with Coumarin-1 dye, respectively. Printing was performed at a pump power of 0.7-10 mW in the waist, the printing rate was 50-1000 μm/s depending on the type of dye, and the print step in the lateral plane was 0.2-0.4 μm and between layers was 0.2-0.5 μm.
It is noteworthy that the addition of Coumarin-1 dye to OrmoComp polymer reduces approximately by two orders of magnitude the exposure time required for two-photon polymerization under TPLL compared to the polymer without dye, which can be used to increase the rate of printing structures. This effect was not observed in the presence of other studied dyes (Rhodamine and Coumarin-30). This is assumingly due to an increase in the absorption of OrmoComp polymer activated with Coumarin-1 dye, whose absorption band almost coincides with the absorption band of the photoinitiator of the main polymer [18,19]. In this case, optical excitation can be efficiently transferred from dye to the photoinitiator, which reduces the radiation fluence necessary for TPLL. This property is promising for the development of high-speed two-photon laser lithography.
To study the linear and nonlinear optical properties of the prepared microstructures, we used a setup similar to that described in [20]. The tunable signal radiation of an Avesta TOPOL-1050-С optical parametric oscillator at wavelengths of 800 and 700 nm with a pulse duration of 150 fs and a pulse repetition frequency of 70 MHz or the 405-nm radiation of the diode laser was used as pump radiation, which ensured the possibility of two-photon and single-photon excitation of photoluminescence, respectively. Probe radiation was focused by a Mitutoyo Plan Apo 100× objective with a numerical aperture of 0.7 into a region with a diameter of about 1 μm on the microstructure. The same objective was used to collect fluorescence from the structure under study. Radiation was detected either integrally by a photomultiplier tube or spectrally resolved by a spectrometer. When the diode laser was used for pumping and single-photon fluorescence was excited in the structure, a focusing lens and an aperture were additionally placed in front of the detector in order to collect the signal from the region on the sample with a diameter of about 1 μm. Studies were performed in the transmission scheme at the focusing of probe radiation on the upper surface (far from the substrate and close to the pump beam) of the structure. Rhodamine under pumping by 800-nm laser radiation. The spectral maximum corresponds to a wavelength of 600 nm. Similar results are observed for single-photon fluorescence. Figure 2 shows maps of the two-photon fluorescence intensity in the (а) microdisk and (b) 5-μmthick micropentagon, which indicate that the distribution of dye in the structure is uniform. It is seen that objects are geometrically regular and correspond to the initial model. The shape of the fabricated structures implies that various cavity modes such as whispering gallery modes, the so-called bow-tie modes, or analogs can be excited in them.
One of the methods of detecting such modes is the analysis of the distribution of scattered fluorescence [21]. Such a distribution for these structures was obtained with a CCD camera in the transmission scheme. It is seen in Figs. 3а and 3b that the fluorescence signal is enhanced near the edge of the microcylinder, which is expected and typical for whispering gallery modes. Both an increase in the fluorescence intensity near the edges of the micropentagon and the existence of a more complex internal signal distribution closer to its center are remarkable (Fig. 3b). We note that fluorescence maps shown in Fig. 3b were obtained under the excitation of the center of microstructures; a change in the geometry of optical excitation results in insignificant differences between fluorescence intensity maps in the micropentagon. Figures 3c and 3d show the calculated distribution of the magnitude of the electric field at the pump frequency in structures with parameters corresponding to experimental ones. Two most typical calculated distributions are presented because the experimental photograph of the pentagon exhibits the superposition of excited cavity modes because of the absence of the spectral selectivity of the CCD camera. It is noteworthy that two main types of cavity modes are observed for the micropentagon: (i) whispering gallery modes propagating over the perimeter of the structure and (ii) bow-tie modes associated with the circulation of radiation inside the structure under reflection from some lateral faces of the pentagon. The simultaneous excitation of these modes determines the form of scattered fluorescence.
Similar studies were performed for microstructures made from OrmoComp polymer with Coumarin-1 dye; in this case, two-photon processes were studied under pumping by 700-nm radiation, for which the absorption coefficient at the double frequency is large. The corresponding spectrum of two-photon fluorescence is shown in Fig. 4. It is seen that the intensity is maximal near a wavelength of 450 nm; the results for single-photon fluorescence are similar.
Microstructures of various shapes-hollow hexagons and cylinders, pentagons, and disks-were fabricated from this mixture of polymer and dye. The maps of the two-photon fluorescence intensity (Fig. 5) demonstrate that the distribution of dye in the polymerized structure is uniform and the resulting geometric parameters correspond to those specified in the 3D model. The spatial distributions of scattered singlephoton fluorescence in microdisks and micropentagon also indirectly confirm the excitation of cavity modes (Fig. 6).
One of the features of structures based on dyes is their photobleaching, i.e., a decrease in the efficiency of fluorescence under intense irradiation, which restricts the possibility of application of corresponding materials in photonics [22]. To study single-and twophoton photobleaching, we examined the kinetics of fluorescence of fabricated microstructures based on polymers with various dyes. Measurements were performed for three wavelengths: 405 nm for single-pho- ton photobleaching, 700 or 800 nm for two-photon photobleaching for achieving more efficient absorption at the double frequency of each of the dyes. Typical curves are presented in Fig. 7. Experimental data were approximated by the two-exponential function (1) where γ i are the photobleaching rates.
The coefficients obtained from the approximations for Rhodamine and Coumarin-1 are summarized in Tables 1 and 2, respectively. These approximations demonstrate a lower photodegradation rate for Rhodamine compared to Coumarin-1 and a much higher quantum yield after the same exposure time (at similar mass concentrations of dyes).
Thus, it has been demonstrated experimentally that active microstructures based on OrmoComp polymer with Coumarin-1 dye and the mixture of Rhodamine-640 and Rhodamine-590 dyes can be formed by two- A e photon laser lithography. It has been shown experimentally that the addition of Coumarin-1 dye to the main OrmoComp polymer reduces the exposure time necessary for using OrmoComp polymer without additions in TPLL by almost two orders of magnitude. The implementation of the method that makes it pos- sible to fabricate microcavity structures in which cavity modes can be excited owing to single-or two-photon fluorescence of dye introduced in polymer has been described.
FUNDING
This work was supported jointly by the Russian Foundation for Basic Research and Consiglio Nazionale delle Ricerche of Italy (project no. 20-52-7819) and by the Interdisciplinary Scientific Educational School "Photonic and Quantum Technologies. Digital Medicine," Moscow State University. | 3,054.8 | 2022-03-01T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |
Expression of small RNAs of Bordetella pertussis colonizing murine tracheas
Abstract We performed RNA sequencing on Bordetella pertussis, the causative agent of whooping cough, and identified nine novel small RNAs (sRNAs) that were transcribed during the bacterial colonization of murine tracheas. Among them, four sRNAs were more strongly expressed in vivo than in vitro. Moreover, the expression of eight sRNAs was not regulated by the BvgAS two‐component system, which is the master regulator for the expression of genes contributing to the bacterial infection. The present results suggest a BvgAS‐independent gene regulatory system involving the sRNAs that is active during B. pertussis infection.
Bordetella pertussis causes whooping cough, a contagious respiratory disease that has been resurging recently despite high vaccination coverage. 1,2 This organism produces multiple virulence factors, including toxins and adhesins, the expression of which is largely regulated by the BvgAS two-component system, consisting of the sensor kinase BvgS and response regulator BvgA. 3 At 37°C in standard Bordetella media, the BvgAS system activates the transcription of a set of genes (Bvg-activated genes) including various virulence genes. Conversely, this system is inactivated at temperatures lower than 26°C or in the presence of MgSO 4 (40-50 mM) or nicotinic acid (10-20 mM), and B. pertussis eventually does not express the Bvg-activated genes. The former bacterial state is called the Bvg + phase, and the latter is the Bvg − phase. The BvgAS system is considered to play a major role in the expression of genes involved in the pathogenesis of B. pertussis; however, recent in vivo studies found that several Bvg-activated genes were repressed in B. pertussis colonizing the respiratory tracts of mice. 4,5 van Beek et al. also reported that approximately 30% of all genes were differentially expressed between in vitro and in vivo conditions. 4 Furthermore, a B. pertussis clinical strain, the BvgAS system of which was dysfunctional due to a spontaneous mutation in the bvgS gene, was isolated from a pertussis patient. 6 These findings suggest that a complex mechanism, besides the BvgAS system, is involved in the regulation of the bacterial gene expression during the course of infection.
Bacterial small RNAs (sRNAs) are functional noncoding RNA molecules that range between 50 and 500 nucleotides in length. 7 Previous studies identified numerous sRNAs in various pathogenic and commensal bacteria using a computational analysis and laboratorybased techniques, such as microarrays, Northern blotting, and RNA sequencing (RNA-seq). [8][9][10][11] Most sRNAs posttranscriptionally upregulate or downregulate downstream gene expression by affecting the stability and translational efficiency of target messenger RNAs (mRNAs) through base pairing with them. 12,13 A wide variety of physiological processes, including metabolism, stress responses, and the expression of virulence genes, are regulated by sRNAs. [14][15][16][17][18] In B. pertussis, many types of sRNAs have been identified or predicted by an in silico analysis and RNA-seq on the bacteria grown in vitro. 9,11,14 However, it currently remains unclear whether B. pertussis sRNAs are involved in the regulation of in vivo gene expression, which is associated with the establishment of bacterial infection. In the present study, we performed in vivo RNA-seq on B. pertussis colonizing the murine tracheas and identified novel sRNAs that were strongly expressed during colonization.
In vivo expression of sRNAs were analyzed by RNAseq using tracheas of three mice independently infected with B. pertussis-type strain 18323. This organism was grown at 37°C on Bordet-Gengou agar (Becton Dickinson, Franklin Lakes, NJ) containing 1% HIPOLY-PEPTON (Nihon Pharmaceutical, Tokyo, Japan), 1% glycerol, 15% defibrinated horse blood, and 10 µg/mL ceftibuten (BG plate). The bacteria recovered from the colonies on BG plates were suspended in Stainer-Scholte (SS) broth 19 to obtain an OD 650 of 0.2, and cultured at 37°C for 14 hr with shaking. Bacterial CFUs were estimated from OD 650 values according to the following equation: 1 OD 650 = 3.3 × 10 9 CFU/mL. Seven-week-old male C57BL/6J mice (CLEA Japan, Osaka, Japan) were anesthetized with a mixture of medetomidine (Kyoritsu Seiyaku, Tokyo, Japan), midazolam (Teva Takeda Pharma, Nagoya, Japan), and butorphanol (Meiji Seika Pharma, Yokohama, Japan) at final doses of 0.3, 2, and 5 mg/kg body weight, respectively, and intranasally inoculated with B. pertussis 18323 (1 × 10 7 CFU) in 50 μL of SS medium using a micropipette with a needle-like tip. On Day 4 after inoculation, mice were killed with pentobarbital, and the tracheas were excised and frozen in liquid nitrogen. Total RNA was extracted from the tracheas with TRIzol Reagent (Thermo Fisher Scientific, Waltham, MA), treated with RNase-Free DNase (Takara Bio, Shiga, Japan), and then purified with the PureLink RNA Mini Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. Bacterial and murine ribosomal RNAs (rRNAs) were simultaneously depleted from the total RNA using the Ribo-Zero rRNA Removal Kit for Human/Mouse/Rat and Gram-Negative Bacteria (Illumina, San Diego, CA). The quality and quantity of RNA samples were assessed using the Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA). Reverse transcription was performed with the rRNA-depleted RNA, SuperScript III Reverse Transcriptase (Thermo Fisher Scientific), and Random Primer N9 (Takara Bio), and double-stranded DNA was then synthesized using DNA polymerase I (Klenow fragment [3′-5′ exo-]; New England Biolabs, Ipswich, MA). The resultant complementary DNA (cDNA) was sheared to approximately 600 bp fragments using Covaris S220 (Covaris, Woburn, MA) and purified with Agencourt AMPure XP beads (Beckman Coulter, Miami, FL). Libraries of the cDNA fragments were then prepared with the KAPA Library Preparation Kit (Kapa Biosystems, Wilmington, MA) and TruSeq adapters (Illumina), and sequenced with a HiSeq 2500 (Illumina) to obtain 101 bp single-end reads. The sequenced reads were mapped to the genomic DNA of B. pertussis 18323 (GenBank: NC_018518.1) using CLC Genomics Workbench, version 8.0.3 (CLC bio, Waltham, MA). All animal experiments were approved by the Animal Care and Use Committee of the Research Institute for Microbial Disease, Osaka University, and conducted according to the Regulations on Animal Experiments at Osaka University. The numbers of total sequenced reads were 54, 143, and 137 million, and 0.06%, 0.72%, and 0.04% of the reads in each sample were aligned to the genome sequence of B. pertussis 18323. A large portion of the reads aligned to the bacterial genome corresponded to protein-, rRNA-, and transfer RNA-coding sequences (95.5%, 99.9%, and 97.3%), whereas the residual reads were aligned to the intergenic regions: the numbers of reads were 1180, 1316, and 836, respectively. We predicted that these noncoding sequences located in the intergenic regions are potential sRNA sequences. Among these sRNA candidates, we selected nine novel sRNAs, for which the number of sequenced reads was more than 20 counts, and designated them B. pertussis sRNA (Bpr) 1-9 according to a previous study 9 (Table 1). Homologous sRNAs to Bpr1-9 were not found in the public databases including BLAST (https://blast.ncbi.nlm.nih.gov/Blast. cgi) and sRNAMap (http://srnamap.mbc.nctu.edu.tw).
In vitro and in vivo expression of Bpr1-9 were compared by qRT-PCR analyses. Total RNA was extracted and purified from the tracheas of mice independently infected with B. pertussis Tohama, a vaccine strain, and two clinical strains (BP139 and BP143 gifted from K. Kamachi, National Institute for Infectious Diseases) 20 in the same manner as B. pertussis 18323. Total RNA was also prepared from the four strains of B. pertussis and Bvg +and Bvg − -locked mutants derived from B. pertussis 18323 grown in vitro using the PureLink RNA Mini Kit and RNase-Free DNase according to the manufacturer's instructions. The Bvg +and Bvg − -locked mutants, which constitutively express the Bvg + and Bvg − phenotypes, respectively, were constructed by the sitedirected mutagenesis of BvgS to replace Arg with His at position 570 and to delete the region of amino acid positions from 542 to 1020, respectively, 21 using double-crossover homologous recombination as described previously. 22 In brief, the plasmids bvgS-C3-pABB-CRS2-Gm and ΔbvgS-pABB-CRS2-Gm 22 were introduced into Escherichia coli DH5α λpir, and then transconjugated into B. pertussis 18323 by triparental conjugation with the helper strain E. coli HB101 harboring pRK2013, 23 Table 2 under the following conditions: initial denaturation at 95°C for 10 min, and 40 cycles at 95°C for 15 s and 60°C for 1 min. qRT-PCR analyses revealed that the expression levels of Bpr4, 5, 8, and 9 in B. pertussis 18323 colonizing murine tracheas were significantly higher (118-, 64-, 9-, and 6-fold, respectively) than those in in vitro-cultured bacteria (Figure 1a). By contrast, no significant differences in the expression of Bpr1-3, 6, or 7 were observed between in vitro and in vivo conditions. Similar results were obtained with B. pertussis Tohama and two clinical strains (Figure 1b). The in vitro expression levels of Bpr1-7 and 9 in B. pertussis 18323 were largely unaffected by the absence or presence of 40 mM MgSO 4 . In addition, the Bvg +and Bvg − -locked mutants equally expressed these sRNAs (Figure 1c). By contrast, the expression of Bpr8 was negligible in B. pertussis 18323 wild-type grown in the presence of 40 mM MgSO 4 (i.e. Bvg − phase condition) and the Bvg − -locked mutant. These results indicate that the expression of Bpr1-7 and 9 is independent of the BvgAS regulatory system, whereas that of Bpr8 is BvgAS dependent.
The presence of Bpr4, 5, 8, and 9 in B. pertussis 18323 was confirmed by rapid amplification of cDNA end (RACE) and Northern blotting. For the determination of the transcription start and termination sites of the Bpr, 5′-and 3′-RACE were performed using a SMARTer RACE 5′/3′ Kit (Takara Bio) according to the manufacturer's instructions. In brief, total RNA was extracted from in vitro-cultured B. pertussis 18323 and polyadenylated by poly(A) polymerase (New England Biolabs). After reverse transcription by SMARTScribe Reverse Transcriptase, the resultant cDNA was used as a template for PCR with Universal primer and each bpr-specific primers ( Table 2). The PCR products were then cloned into linearized pRACE and five individual clones were sequenced. The precise transcription start and termination sites of Bpr4 For production of digoxigenin (DIG)-labeled RNA probes, partial antisense strands of bpr and recA genes were amplified from B. pertussis 18323 using appropriate primers (Table 2), and cloned into the downstream of T7 promoter on pSPT18 (Sigma-Aldrich). The resulting plasmids were linearized with SalI, and DIG-labeled RNA probes were pertussis 18323 was subjected to electrophoresis in a 1.5% denaturing formaldehyde agarose gel, transferred to a positively charged membrane (Hybond-N+; GE Healthcare, Piscataway, NJ), and UV cross-linked to the membrane. The membrane was then independently incubated with DIG-labeled RNA probes for each Bpr and recA, respectively, followed by alkaline phosphate-conjugated sheep anti-DIG immunoglobulin G, and visualized with CDP-Star. Northern blotting using the RNA probes for Bpr4, 8, and 9 detected a single band, whereas Bpr5 migrated as two bands ( Figure 2). The mobility of each Bpr corresponded to that estimated from its length determined by RACE. sRNAs regulate the expression of genes involved in a wide variety of physiological processes in bacteria, including the adaptation to host environments and virulence. [14][15][16][17][18] In B. pertussis, 14 types of sRNAs designated as BprA-N were identified by an in silico analysis and Northern blotting 9 ; however, these sRNAs have not yet been characterized. Recent studies performed an RNA-seq analysis using B. pertussis grown in vitro and identified an sRNA designated as RgtA (repressor of glutamate transport) that was found to reduce the translation of BP3831, a periplasmic amino acidbinding protein of an ABC transporter, by base pairing with the 5′ untranslated region of BP3831 mRNA. 11,14 Although this protein is related to the transport of glutamate, it currently remains unclear whether RgtA is involved in the pathogenesis of B. pertussis. In the present study, we identified nine types of novel sRNAs that were strongly expressed during the bacterial colonization, and demonstrated that the expression of four types of sRNAs (Bpr4, 5, 8, and 9) was stronger in vivo than in vitro. To the best of our knowledge, this is the first study to identify the in vivo strongly expressed sRNAs of B. pertussis. Bpr4, 5, 8, and 9, which were strongly expressed in vivo, may be involved in regulating the expression of genes necessary for the bacterial colonization or infection. In Salmonella enterica serovar Typhimurium, PinT, a PhoP-induced sRNA, was shown to be upregulated by up to 100-fold during the infection, and regulated the expression of the invasion-associated effectors and virulence genes required for intracellular survival. 17 Li et al. also reported that Ysr170, a strongly expressed sRNA in Yersinia pestis invading host cells, contributed to the bacterial intracellular survival. 18 These findings support our hypothesis that the sRNAs strongly induced during infection are involved in the adaptation and/or pathogenesis of B. pertussis. In addition, we found that the expression of Bpr1-7 and 9 was not regulated by the BvgAS system. Although the BvgAS system was previously considered to be the master virulence regulator in B. pertussis, 3 recent studies demonstrated that the expression profiles of Bvg-regulated genes were largely different between in vitro and in vivo conditions, 4,5 suggesting a complex mechanism that regulates in vivo gene expression. Several groups reported the PlrSR two-component system and BspR/BtrA, an anti-σ factor, as accessory regulatory systems downstream of BvgAS, which may play a part in this complex gene regulatory system in vivo. [24][25][26][27] Besides these regulators, the sRNAs identified in the present study may function as another regulator for gene expression during B. pertussis infection. Further research is currently in progress in our laboratory to identify genes whose expression is regulated by the sRNAs and elucidate the mechanisms by which the sRNAs regulate the gene expression. | 3,129 | 2020-03-30T00:00:00.000 | [
"Biology",
"Medicine"
] |
Microstructural Evolution and Mechanical Properties of As-Cast Mg-12Zn Alloys with Different Al Additions
In this study, Mg-12Zn magnesium alloys alloyed with Al additions (0, 2, 4, 6, 8 and 10, wt.%) were fabricated by permanent mould casting. The Al content on their microstructure and mechanical properties were systematically examined with an optical microscope (OM), a scanning electron microscope (SEM), an X-ray diffractometer (XRD) and mechanical tests at room temperature. The experimental results indicate that the microstructure of the alloys is mainly composed of α-Mg and semi-continuous or continuous eutectic phases. A higher addition of Al (≥6%) causes the generation of the Mg17Al12 phases. Notably, the grain sizes of the alloys gradually decrease, whilst the partial morphology of some eutectic phases is modified into lamellar structure with increasing of Al addition. Mechanical properties characterization manifested that, the alloys with different Al additions reveal distinguishing tensile properties. Among them, the alloy with 4% Al provides an excellent mechanical properties, i.e., a UTS of 206 MPa and an EL of 7.92%, which is respectively higher 28 MPa and 1.08% than that of ZA120 alloy. The deterioration in the tensile properties for the higher Al-bearing alloys is possibly related to the lamellar structure, coarse and continuous net-work morphology and β-Mg17Al12 phases, respectively.
Introduction
Magnesium and its alloys are considered to be the lightest metallic structural materials at present. They have the advantages of low density, high specific strength and stiffness, good damping, shock absorption and mechanical processing performance, and have been widely used in rail transit, aerospace, electronic communication industry and other fields [1][2][3] . In recent years, the nations of the world attaches great importance to the research of magnesium and magnesium alloys, and has made plans for the research, development and application of magnesium alloys. Among them, there are many studies on Mg-Al-Zn (AZ) series and its multi-component magnesium alloys, such as AZ31, AM50A, AM60B and AZ91, were developed for industry applications. However, the existence of many β-Mg 17 Al 12 phases in these alloys seriously affects the properties of the alloys mentioned. As a result, their widespread applications in engineering are severely restricted. The previously published findings manifest that, the brittle β-Mg 17 Al 12 phase is prone to crack and the crack usually propagates along the interface between β-Mg 17 Al 12 phase and Mg matrix, resulting in large cracks on the fracture surface 4 . Similar investigation has also been mentioned 5 , α-Mg matrix has a HCP structure, while β-Mg 17 Al 12 phase possesses a BCC structure, which causes the interface between α-Mg matrix and β-Mg 17 Al 12 phase to become fragile, leading to the formation of micro-cracks at Mg/β-Mg 17 Al 12 phase interface. It is worth mention that β-Mg 17 Al 12 phase in Mg-Al system alloys will undergo softening and cannot play a role in hindering grain boundaries slide when the service temperature exceeds 393 K 6 . From that it can be concluded that, Mg 17 Al 12 phase is not only sensitive to crack generation, but also a weakened phase in these alloys. Therefore, aiming at improving of microstructure and properties of the Mg-Al-Zn system alloys has become a critical issue at present. Fortunately, it is reported that Mg-Al alloy containing high content of Zn exhibits fine castability and mechanical properties at ambient 7 .
As is known to all, Mg-Zn-Al (ZA) magnesium alloys with high Zn and low Al have been proposed as a low-cost, creep-resistant and diecastable alloy 8,9 , which has a remarkable heat treatment strengthening characteristics. Meanwhile, the main precipitated phases of the alloys include Mg 32 (Al, Zn) 49 and/or MgZn, which have a good strengthening effect. It is further reported 10 that the ZA series alloys show mechanical properties superior to that of the AZ series alloy in both room temperature and high temperature, and have a broad commercial application prospect comparable to that of the AZ system alloys. Currently, plentiful researches have been carried out on the design, microstructure and properties of the ZA series alloys at home and abroad. Abundant investigations on ZA series alloy seem to suggest that it is a promising candidate for developing high performance magnesium alloys. According to our previous study 11 , the Zn/Al ratio has a significant influence on the formation of the phase in ZA series alloys, which affects the properties of the alloys. Wan 12 et al. investigated the microstructure, mechanical properties and creep resistance *e-mail<EMAIL_ADDRESS>of Mg-(8%-12%)Zn-(2%-6%)Al alloys and pointed out that, the ZA82, ZA102 and ZA122 alloys are mainly composed of α-Mg, ε-Mg 51 Zn 20 and τ-Mg 32 (Al,Zn) 49 phase, the alloys ZA84, ZA104 and ZA124 contain α-Mg and τ phases, the ZA86, ZA106 and ZA126 alloys consist of α-Mg, τ precipitates, ϕ-Al 2 Mg 5 Zn 2 eutectics and β-Mg 17 Al 12 compounds. Although the previously mentioned studies about ZA series alloys have been reported abundantly. But up to now, very little information pertaining to the microstructural evolution and mechanical properties of Mg-12Zn based alloys with the addition of Al has been reported so far. Furthermore, it is of great interest to explore the Zn/Al ratio possible cumulative effects on phase composition of ZA series alloys. Hence, based on the reports of ZA series alloys and previous studies, Mg-12Zn-xAl alloys (0, 2, 4, 6, 8 and 10, wt.%) alloyed with different additions of Al were designed and investigated systematically, so as to provide a reference for the research and development of new type ZA alloys.
Experimental procedure
The investigated alloys were prepared from commercial high purity Mg (>99.99%), Zn (>99.999%), Al (>99.99%) ingots. Six sample alloys with the nominal compositions were shown in Table 1 and noted ZA120, ZA122, ZA124, ZA126, ZA128 and ZA1210 (all compositions are in wt.% hereinafter), respectively. The smelting plays a significant role in the alloy fabrication, to ensure the purity of the studied alloys, the Mg, Zn and Al ingots were polished off by a steel brush before preheating. The weight of 2.0 kg of each alloy was melted in a mild steel crucible placed in an electric resistance furnace under a high-purity argon atmosphere and a covering agent RJ-2 protection. After the Mg ingot was completely melted, the Zn and Al ingots were added into the melt at 680 °C. When the liquid melt temperature was slowly rised to approximately 750 °C, and then was held at 750 °C for 20 min to guarantee the homogeneity of alloying elements. Subsequently, the melt was manual stirred by using 2% (ratio to the whole raw metal) C 2 Cl 6 at 730 °C. After thorough refining, the liquid melt was isothermally maintained for 20 min at 710 °C for the settlement of inclusions. Then, the liquid melt was poured into the metallic mold which was coated and preheated to a temperature of about 200 °C with dimensions of length (210 mm), height (130 mm) and breadth (85 mm).
For the best results, the specimens for OM and SEM observation were mechanically polished with grinding and polishing papers, and then etched by a 4 vol % HNO 3 solution at room temperature. The microstructures of the specimens were subsequently observed by an optical microscope (OM, MeF-3) and a scanning electron microscopy (SEM, 450) equipped with an energy dispersive spectroscope (EDS). The phase compositions in the experimental alloys were analyzed by a D/max-2400 X-ray diffraction (XRD) at 40 kV and 40 mA using Cu Kα radiation with a scanning velocity of 5 °/min and a step scanning 2θ from 10° to 90°. The tensile specimens with a gauge section of 15 mm × 2 mm × 3 mm were machined from the bottom of obtained ingots by using a computer numerical-controlled wire-cutting machine, as shown in Figure 1. The mechanical tests were carried out by WDW-100D type electromechanical universal testing machine with a loading rate of 1 mm/s at room temperature. Three specimens were tested for each alloy and the average values were presented as the results of ultimate tensile strength (UTS) and elongation to failure (EL). The transverse fracture surfaces of the tensile specimens were also examined by SEM.
As-cast microstructure
The XRD profiles of the Mg-12Zn alloys containing different additions of Al are presented in Figure 2. It can be seen from Figure 2a that both α-Mg phase and MgZn 2 phase exist in ZA120 alloy. In contrast, the new diffraction peaks of Mg 2 Zn 3 and Mg 32 (Al,Zn) 49 phase are observed for the Al-containing alloys. The ZA122 and ZA124 alloys consist of four phases, i.e., α-Mg, Mg 2 Zn 3 , Mg 32 (Al,Zn) 49 and Mg 7 Zn 3 phases. According to the previous investigation 13 , the Mg 7 Zn 3 binary phase will generate in Mg-Zn-Al alloy when the mass ratio of Zn/Al is more than 2. As for mentioned MgZnAl intermetallic compound, it has a body-centered cubic structure (space group Im‾3, a=1.416 nm 14 ), meanwhile possesses a high melting point and good thermal stability. With Al addition further increasing, as observed from Figure 2 that, additional diffraction peaks of Mg 17 Al 12 phase emerge Table 1. Chemical compositions of the investigated alloys
Alloy code
Nominal compositions (wt.%) ZA120 Mg Mg-12Zn-10Al in ZA126, ZA128 and ZA1210 alloys,and peaks emerge corresponding to the Mg 17 Al 12 phase tends to increase with increasing of Al addition. It is confirmed that Mg 17 Al 12 phase has a body-centered cubic structure with a lattice parameter a=1.06 nm 15 . Despite the above analysis, the reason for the formation of the intermetallic compounds in the Al-bearing alloys is not completely clear and further study is needed. Through the above analysis, it can be summarized that the Zn/Al mass ratio has a significant influence on the phase composition of the investigated alloys. Namely, if the mass ration of Zn/Al is less than 2, the Mg 2 Zn 3 , Mg 32 (Al,Zn) 49 Figure 3, where it can be found that the alloys have a typical dendrite configuration with interphases at interdendritic regions, the volume fraction of which increases with increasing of Al addition in the Al-containing alloys. The resultant microstructures of the alloy are mainly composed of primary α-Mg, non-equilibrium eutectic phase with continuous or semi-continuous precipitated along grain boundaries and tiny irregular granular phases in the interior of the grain. The morphology of eutectic phases of the alloys is gradually evolved from an isolated island into a coarse and continuous netlike, the size and the number of the secondary phases gradually increases, whereas the size of the grains gradually decreases with an increment of Al addition.
It is well known that the size and morphology of dendrite grains are mainly determined by heterogeneous nucleation and solute segregation 16 . As shown in Figure 3a and b, the microstructures of the ZA120 alloy are mainly comprised of coarse α-Mg grains and intermetallic compounds with an isolated island and granular form. When the 2% Al is added to Mg-12Zn alloy, the corresponding microstructural analysis shown in Figure 3c and d. It indicates that the microstructures of the alloy reveal still developed dendritic morphology. Moreover, the eutectic phases in ZA122 alloy are aggregated in some regions, the isolated particles and grain sizes tend to decrease, as compared to the ZA120 alloy. With 4% Al addition, it is observed from Figure 3e and f that the grain size of the ZA124 alloy is dramatically smaller than that of the ZA120 alloy, the morphology of eutectic phase distributed along grain boundary tends to be fine, and the number of eutectic phase is more than that of ZA120 alloy. According to the XRD patterns, the increased phases should be Mg 2 Zn 3 , Mg 32 (Al,Zn) 49 and Mg 7 Zn 3 . In contrast, with 6% Al addition, as illustrated in Figure 3g and h, it is distinct that most of the eutectic phases exist as networks along the grain boundaries, while some particles are dispersed inside the grains. Interestingly, it is necessary to note that the distribution of the secondary phases has prominent dendrite segregation (marked by blue elliptic region), and the number of secondary phases increases obviously. Additionally, one can see that the grain size of the alloy further decreases. When the 8% Al is added to Mg-12Zn alloy, it can be found from Figure 3i and j that the morphology of eutectic phases appears to be similar to that of the ZA126 alloy. Apparently, the grains size decreases considerably while the number of eutectic phases of the ZA128 alloy increases significantly (comparing Figure 3a, c, e, g). Meanwhile, the presentation of dendrite segreation tends to be more obvious, and grain boundaries seem to be demonstration conspicuous trend of broadening. When the addition of Al is further increased to 10%, as shown in Figure 3k and l, where further grain refinement effect and the highest value of the volume fraction of the secondary phase can be obtained with the Al addition higher than 8% in the alloy, meanwhile the grain size of the alloy is the smallest among those of the studied alloys. This is mainly due to a higher addition of Al results in the Al enrichment during the solidification, then induces constitutional undercooling in a diffusion layer ahead of the solid/liquid interface and suppresses grain growth thus leading to the grain refinement. Similarly, the primary grains with different morphologies can still be found. Furthermore, the distribution of the secondary phases penetrates the interdendritic regions and demonstrates coarsely and continuously network. Simultaneously, the evident dendrite segregation (marked by blue elliptic region) still can be observed in the ZA1210 alloy.
It should be emphasized that the phase structure and its component depend mainly on the composition of the alloy. Figure 4 displays the high magnification SEM micorgraph images of the investigated alloys, where the elementary composition of the typical ZA120, ZA124 and ZA1210 alloys was point-analyzed by EDS and the EDS results are presented in Table 2. By comparison, it can be seen from Figure 4, after being modified by adding diffenent Al, the morphology of the secondary phases is altered predominantly. Evidently, the local regions of eutectis compounds of the Al-bearing alloys reveal obvious lamellar structure, whose volum fraction tends to be increasement with increasing of Al addition. Furthermore, the partial gray or black precipitates along the grain boundary were replaced by bright precipitates unevenly dispersed in the edge of the whole eutectic phases.
For the ZA120 alloy without Al, as shown in Figure 4a, the eutectic compound presents a smooth island morphology, which is composed of bright phase and gray one. On the basis of XRD results (Figure 2a) and EDS analysis, the gray precipiate (marked by A) can be confirmed as α-Mg and MgZn 2 phase. The bright precipitate (marked B) is rich in Zn. Combined with the XRD results, the bright one is identified as MgZn 2 phase. With 2% Al addition, it is seen from Figure 4b that the partial eutectic phases of the alloy show lamellar morphology (marked by elliptical circle). Obviously, there are existing three main phases in the eutectic compound: the black, grey and bright phases, respectively. After adding 4% Al, it also observed that from Figure 4c, similar eutectic morphology that of the ZA122 is also exhibited in ZA124 alloy. As seen in Table 2, the black precipitate (marked A) and gray one (marked C) includes Mg, Zn and Al, combining with the XRD results (Figure 2c), they are regarded as α-Mg and Mg 32 (Al,Zn) 49 . The composition of the bright precipitate (marked B) includes Mg, Zn and Al, and the content of Zn is distinctly higher than that in the other positions (marked A and C). It is found, from XRD result (Figure 2c) and EDS point analysis, that they are most likely comprised of the Mg 2 Zn 3 and Mg 7 Zn 3 phases. After 6% and 8% Al is added, as shown in Figure 4d and e, almost half of the eutectic phases show lamellar morphology. Furthermore, the eutectic morphology has a tendency to become coarser.
When the Al addition increased to 10%, as shown in Figure 4f, for the ZA1210 alloy, the eutectic compounds appear mainly in the lamellar morphology. Meanwhile, the distribution of the partial secondary phases has modality characteristics of own, namely, some granular phases are embedded on the surface of the whole eutectic phase. The EDS results reveal that, the gray granular precipitate (marked A) contains Mg, Zn and Al, and the content of Al is higher than that the other ones (marked B and C). Therefore, according to the results of XRD (Figure 2f), it can be validated as Mg 32 (Al,Zn) 49 phase.
With the composition of black precipitate (marked B) corresponds to α-Mg and Mg 17 Al 12 phase. Besides, the bright one (marked C) mainly includes Mg, Zn and Al, the content of Zn is higher than that the other ones (marked A and B). Hence, according to the results of XRD ( Figure 2f) and EDS, it is inferred as Mg 2 Zn 3 phase. Figure 5 displays the variations trend of the room temperature tensile properties when the Al additions varies from 0 to 10%, including the ultimate tensile strength (UTS) and elongation to failure (EL), respectively. As shown in Figure 5, the UTS and EL values firstly achieve an evident improvement with increasing Al addition and obtain maximum value when Al addition increase to 4%. And then they manifest the gradual reductions with more Al addition. It can be seen from Figure 5 that the UTS and EL of the Al-free alloy is 178 MPa and 6.84%, respectively. Distinctly, the alloys with the additions of 2% and 4% reveal relatively higher tensile properties than the binary alloy. This indicating that the addition of 2 ~ 4% Al to the binary alloy is conducive to the tensile properties of the alloys. It is worth mentioning that, the maximum values of UTS (206 MPa) and EL (7.92%) are simultaneously obtained from the 4% Al-containing alloy. Compared with the 2% Al alloy, the UTS and EL of the alloy are increased by 13 MPa and 6.59%, respectively. It is well known that the presence of fine and uniform phases distributed along grain boundaries is easier to act as an effective straddle to the dislocation motion thus improving the properties 17 . It can, thereby, be inferred that the increasement of tensile properties is mainly attributed to the grain refinement strengthening and secondary phase strengthening effects. Also, the fine and homogeneous distributed eutectic phases in the alloy act as obstacles to the movement of dislocations during the deformation process. Nevertheless, it is clearly found that from Figure 5, the addition of Al which is got up to 6% does not maintains continuous increase in UTS and EL except a apparent decrease (from 206 MPa to 171 MPa and from 7.92% to 5.78%), respectively. Indicating that fine grain strengthening does not play a leading role in improving mechanical properties, especially in ZA1210 alloy. It is well accepted that the mechanical properties of the alloy have an important relationship with the distribution, morphology, size and category of eutectic phase 18 . It is believed that the decrease of tensile properties of the Al-containing alloys can be explained by the following three aspects: (1) When the addition of Al exceeds 4%, with increasing of Al addition, the grains of the investigated alloys are gradually refined, but the primary α-Mg grains are almost surrounded by the coarsely and continuously interdendritic eutectic compounds, which can induce generation of the cracks and stress concentration at the interface between intermediate phase and matrix during deformation, thus leading to the deterioration of the tensile properties. (2) A higher addition of Al (≥6%) causes the generation of the Mg 17 Al 12 intermetallics. Duley 19 pointed out that brittle particles often act as stress raisers and thereby, as crack initiation sites during deformation, resulting in inferior mechanical properties. Similar literature has been reported 20 that coarse β-Mg 17 Al 12 phase is commonly considered to be detrimental to the plastic deformation of magnesium alloys. Furthermore, Nie 4 found out that brittle Mg 17 Al 12 phase cracks easily and the cracks usually propagate along the interface of Mg 17 Al 12 phase and Mg matrix. The above outcome means that the Mg 17 Al 12 compounds play an adverse role in the incrseing of tensile properties of the studied alloys. It is reported that the lamellar eutectic microstructures possibly act as crack initiation sites during tensile test, thus leading to the relatively poor tensile properties 21 . (3) When the Al addition exceeds 4%, the local morphology of the eutectic phase is evolved into lamellar shape, which is possibly related to the deterioration of the mechanical properties. However, this needs to be further investigated.
As-cast mechanical properties
By combining above the analysis, it can be summarized that the moderate addition of Al (2 ~ 4%) has an advantageous effect on the tensile properties of the investigated alloys at room temperature. More important, a higher addition of Al (≥6%) causes the grain refinement does not play a dominant role in improving mechanical properties.
The SEM images of tensile fracture surfaces of the studied alloys with different Al additions are shown in Figure 6. It is well accepted that the cleavage fracture, quasi-cleavage fracture and inter-granular fracture are the main fracture modes of magnesium alloys 22 . As shown in Figure 6, a number of cleavage facets with cleavage steps of various sizes, tearing ridges, a few porosities and micro-cracks can be clearly seen, respectively. Furthermore, some river patterns at some places are observed as well, indicating that all the tensile fracture surfaces have mixed fracture characteristics of cleavage and quasi-cleavage fractures. As shown in Figure 6a, some cleavage facets, tearing ridges, visible cracks and a few porosities are observed in the Al-free alloy. The generation of the porosity is ascribed to the developed primary dendrites. According to the report 23 , the porosities induce the origination of cracks, and then the cracks grow and propagate along the dendrite boundaries to the final failure. As shown in Figure 6b, c, the fracture surfaces of the ZA122 and ZA124 alloys too reveale cleavage facets, tearing ridges and a few porosities. This manifesting that the additions of 2 ~ 4% to the binary alloy do not dramatically modify the fracture regime of the alloys. However, the area of the cleavage facets on the fracture surface of the ZA124 alloy tend to increases compared to that of the ZA122 alloy (seening Fig, 6b and c). In short, the ZA120, ZA122 and ZA124 alloys show typical quasi-cleavage fracture and cleavage fracture mode. A higher content Al (≥6%) addition the Mg-12Zn alloy does significantly change the fracture regime of the alloys. As observed in Figure 6d, e and f, the number of cleavage facets and tearing ridges decreases considerably, which are replaced by micro-cracks and numerous fractured secondary phases. This indicated that the fracture regime transformed to brittle fracture. On the basis of existing literature 24 , the Mg 17 Al 12 phases may facilitate the generation of micro-cracks and severe cracking of the particles at the grain boundary due to the their incompatibility with the α-Mg matrix.
Conclusion
(1) The ZA120 alloy mainly consists of α-Mg and MgZn 2 phase. Al addition results in not only the formation of Mg 7 Zn 3 and Mg 2 Zn 3 phase, but also the production of Mg 32 (Al,Zn) 49 phase within the range of 2 ~ 4% Al. In addition, a higher addition of Al (≥6%) causes the generation of the Mg 17 Al 12 compound.
(2) The addition of Al has a prominent effect on the morphology of eutectic phases of the investigated alloys. Namely, after adding Al to the ZA120 alloy, leading to the partial morphology of some eutectic phases is evolved into lamellar structure, and this evolution trend becomes more and more eident with incrseing of Al addition. Furthermore, the lamellar structure play a detrimental role in the tensile properties. (3) The Zn/Al mass ratio has a significant influence on the mechanical properties of the investigated alloys. When the Zn/Al mass ratio more than 2,the addition of Al can enhances the mechanical properties, while the Zn/Al mass ratio less than 2, it deteriorates the mechanical properties. The alloy with Zn/Al ratio 2 exhibits excellent mechanical properties, i.e., a UTS of 206 MPa and an EL of 7.92%, which is respectively higher 28 MPa and 1.08% than that of the free-Al alloy. Therefore, the amount of Al addition to the binary alloy must be limited within a rational range. (4) The microstructural parameters of these alloys, such as grains size, eutectic morphology, secondary phase distribution, and intermetallic compounds, demonstrate conspicuously different from each other. It is considered that the distribution, morphology, composition of eutectic phases, are a omnibus factor that affects the tensile properties of the investigated alloys. | 5,831 | 2020-01-01T00:00:00.000 | [
"Materials Science"
] |
Automated Integration of Geosensors with the Sensor Web to Facilitate Flood Management
Approaches to Managing Disaster - Assessing Hazards, Emergencies and Disaster Impacts demonstrates the array of information that is critical for improving disaster management. The book reflects major management components of the disaster continuum (the nature of risk, hazard, vulnerability, planning, response and adaptation) in the context of threats that derive from both nature and technology. The chapters include a selection of original research reports by an array of international scholars focused either on specific locations or on specific events. The chapters are ordered according to the phases of emergencies and disasters. The text reflects the disciplinary diversity found within disaster management and the challenges presented by the co-mingling of science and social science in their collective efforts to promote improvements in the techniques, approaches, and decision-making by emergency-response practitioners and the public. This text demonstrates the growing complexity of disasters and their management, as well as the tests societies face every day.
Introduction
It is predicted that severe flooding disasters will occur more and more often in the future, due to expanding land use and an increasing number of extreme meteorological events.For monitoring and managing large-scale floods efficiently, up-to-date information is crucial.Geosensors ranging from water gauges and weather stations to stress monitors attached to dams or bridges are used to gather such information.The various kinds of sensors need to be integrated into an interoperable infrastructure so that the measured data can be easily utilized by different disaster relief organizations.The standardized web service framework of the Sensor Web Enablement initiative (section 2.1) can be used as such an infrastructure, since it has shown its suitability in several projects and applications in past years.However, the integration of newly deployed sensors into the Sensor Web has not been straight-forward.That challenge is addressed by this work.A standards-based architecture enabling the on-the-fly integration of environmental sensors into the Sensor Web is presented.This new approach for an on-the-fly integration of geosensors is generic and can be applied to different types of sensors.We demonstrate and evaluate the developed methods by applying them to the real world use case of the German watershed management organization Wupperverband, where the mobile water gauge, G-WaLe, is used as an example for a new geosensor deployed in an ad-hoc manner to densify the measurement network.
The following section 1.1 introduces the topic of flood management and the need for the incorporation of geosensors into a coherent infrastructure to enable interoperable access to up-to-date information.Next, the problem of an interoperability gap between Sensor Web infrastructures and the used geosensors is outlined as the challenge addressed by this work (section 1.2).Section 2 describes the Sensor Web Enablement initiative as well as the mobile water gauge, G-WaLe, that is used to realize the case study in this work.Section 3.2 introduces the Wupperverband, its network of water gauge sensors, and emerging use cases in context of flood management.Next, section 4 presents the standards-based architecture which is applied to realize the case study, as described in section 5.This chapter closes with conclusions and an outlook to future work.
The need for geosensor infrastructures in disaster management
Two days of heavy rain in the mountains of the Erzgebirge in August 2002 caused a dramatic flooding along the German river Elbe.The disaster caused the death of twelve people and a financial damage of over one billion Euros (Elze et al., 2004).This is just one instance for a dramatic flooding disaster in the past years.The Annual Disaster Statistical Review 2007 (Scheuren et al., 2007) states that during the past twenty years seven million people were affected and almost two thousands were killed by flood disasters in Europe.During the last decade, Europe witnessed eight out of twenty of the largest floods ever recorded.Parry et al. (2007) forecast that such kind of flooding disasters will occur more often and more intensely in future.A reason for this is the still expanding land use within critical areas as well as the number of extreme meteorological events which has constantly increased over the past years as a consequence of climate change.
Another region endangered by floods is the drainage area of the river Wupper in the Northwest of Germany.The last floods in this region occurred in August 2007 and caused severe damages (Boch & Schreiber, 2007).Responsible for the watershed management of the Wupper region is the Wupperverband1 organization.In this article, a flooding scenario in the Wupper region is used to illustrate and apply the developed approach (section 5).
To manage disasters such as large-scale floods, but also hurricanes, earthquakes, storms or wild fires, the supply with up-to-date information is crucial to provide decision support for the responsible organisations.In case of floods, information such as water level and precipitation measurements, weather forecasts as well as the state of dams, bridges and other structures along affected watercourses is required.Geosensors and geosensor networks (e.g., networks of stream gauges or weather stations) are valuable means for gathering precise and high resolution data to derive such information.As the National Science and Technology Council Committee on Environment and Natural Resources published in its report on grand challenges for disaster reduction (Subcommittee on Disaster Reduction, 2008), a key to minimize the damage of natural disasters is the provision of sensor data.This data has to be available in near real time, since disasters are time critical situations in which decisions have to be made ad-hoc.Heterogeneous information sources, e.g., different kinds of sensors, have to be integrated on-the-fly into a coherent infrastructure which can be easily utilized by different disaster relief organizations.This infrastructure has to serve as a basis for decision support systems to control and manage emergency situations.It has to enable discovery, browsing, querying and usage of geospatial information as well as processing capabilities.
Today, sensors are becoming smaller, cheaper, more reliable, and more power efficient.Sensors are increasingly used in early warning systems and disaster management (Shepherd & Kumar, 2005).The kinds of sensors utilized in these applications may be stationary or mobile, either on land, water or in the air and could gather data in an in-situ or remote manner.Due to this variety, a coherent infrastructure is necessary to integrate heterogeneous sensors and to enable interoperable access to their functionality.The idea of the Sensor Web describes such an infrastructure for sharing, finding and accessing sensors and their data across different applications (Nittel, 2009).The Sensor Web is to geosensors what the World Wide Web (WWW) is to general information sources -an infrastructure allowing users to easily share their sensor resources.It encapsulates the underlying layers, the network communication details, and heterogeneous sensor hardware, from the applications built on top of it.
The Sensor Web Enablement (SWE) initiative of the Open Geospatial Consortium (OGC)2 develops standards for Web service interfaces and data encodings which can be used as building blocks for a Sensor Web (section 2.1).SWE incorporates different models for describing sensors and representing sensor observations.Further, it defines Web service interfaces leveraging the models and encodings to allow accessing of sensor data, tasking of sensors and alerting based on gathered sensor observations.The SWE standards serve the functionality to integrate sensors into Spatial Data Infrastructures (SDI).The integration of sensor assets into SDIs makes it possible to couple available sensor data with other spatio-temporal resources (e.g., maps, raster as well as vector data) on the application level which maximizes the information effectiveness for decision support.Due to this integration, Sensor Webs and their comprised geosensors represent a real-time link of Geoinformation Systems (GIS) into the real world.
In recent years, these SWE standards have demonstrated their practicability and suitability in various projects (e.g., (Chung et al., 2009;Jirka et al., 2009;Schimak & Havlik, 2009)) and applications (e.g., (Aasa et al., 2008;Bröring, Jürrens, Jirka & Stasch, 2009;Foerster et al., 2009;Fruijtier et al., 2008)).However, there is still a fundamental challenge currently unresolved, the ability to dynamically integrate sensors in an on-the-fly manner.The integration of sensors into the Sensor Web with a minimum of human intervention is not possible with the given methods and designs.Especially in the above described disaster situations it is required to enable a live deployment of sensor networks and an ad-hoc integration of those sensors into the Sensor Web to allow multiple parties an easy access and usage of the geosensors.Those emergency scenarios require the incorporation of various sensor types.This article addresses the challenge of integrating geosensors on-the-fly towards a true plug & play of sensors with the Sensor Web.
The interoperability gap between sensors and the Sensor Web
Dynamically integrating geosensors on the Sensor Web requires advanced concepts.Generally, the SWE standards focus on interacting with the upper application level, since they are designed from an application-oriented perspective.The interaction between the Sensor Web and the underlying sensor layer has not yet been sufficiently addressed.In consequence, a gap of interoperability between these two layers arises.This interoperability gap results from the fact that both layers are designed with different objectives and approaches.The Sensor Web is based on the WWW and its related protocols.Geosensors, on the other hand, communicate based on lower-level protocols.These protocols follow rarely standards for instrument communication, such as IEEE 1451 (Lee, 2000), but instead are usually manufacturer dependent; see for example protocol specifications for typical oceanographic sensors from Sea-Bird-Electronics (2010), WETLabs (2010), or HOBILabs (2008).
From an application perspective, the SWE services encapsulate associated geosensors and hide their lower-level communication protocols.So far, the integration of a geosensor with the Sensor Web involves two major steps: first, driver software needs to be implemented which converts measured data from the native sensor protocol to higher level Sensor Web protocols, and second, the geosensor description has to be manually registered at a Sensor Web service.I.e., proprietary bridges have to be manually built between each pair of SWE service and sensor type (figure 1).This approach is cumbersome and leads to an extensive adaption effort for linking the two layers.Since the price of sensor devices is decreasing rapidly, these adaption efforts become the key cost factor in developing large-scale sensor network systems (Aberer et al., 2006).Besides those infrastructural deficits in the current Fig. 1.The three layers of the geosensor infrastructure stack and the interoperability gap.
Sensor Web design, there is a mismatch between low-level data structures used in sensor protocols and the high-level information models of the Sensor Web.This issue relates to semantic challenges which have to be tackled to enable an automatic integration of sensors as discussed in (Bröring, Janowicz, Stasch & Kuhn, 2009).The main difficulty lies in mapping the relationship between the different Sensor Web concepts used for modeling sensors, observations, and features to the constructs of the lower sensor layer.An example challenge is to guarantee that the output of a geosensor, e.g., a value symbol gathered by an anemometer for the observable wind direction, complies not only syntactically but also semantically with a certain characteristic of a real world entity's representation residing on the Sensor Web level.Currently, these matchings have to be established and maintained manually by an administrator.The Sensor Web is missing mechanisms which ensure a correct semantic matching without user interaction to enable an automatic registration of sensors.
In future, Sensor Webs could be set up for certain geographic regions.SWE services, which build this Sensor Web, are only interested in sensors within that particular region, provide access to their data or enable their tasking.Various sensors of different type could register and upload their observations.Taking into account mobile sensors moving in and out of these regions, the above described problems become even more pressing.Methods enabling an automatic integration of sensors are needed to tackle those kinds of use cases.
Overall, the described obstacles are currently hindering an on-the-fly integration of sensors with minimal human intervention.In case of the Wupperverband's sensor network, serving as an application scenario in this article, these issues lead to a huge effort when integrating new geosensors or adjusting the existing infrastructure to optimize the monitoring of geo phenomena.To enable a timely reaction in disaster situations and to supply decision makers with necessary information, the demand for solutions coping with these problems is immense.
Hence, this work combines results of our previous work to design an architecture that facilitates the connection of the sensor layer and the Sensor Web layer.We apply this architecture here to enable the on-the-fly integration of a mobile water gauge, a sensor system that can be used to support flood management.The architecture incorporates first the Sensor Bus (Bröring, Foerster, Jirka & Priess, 2010), an intermediary layer that introduces a publish/subscribe mechanism between the Sensor Web and underlying geosensor networks.This is required to make services aware of new sensors appearing on the Sensor Web.Second, a driver mechanism for sensors is incorporated in our approach -the Sensor Interface Descriptor (SID) concept (Bröring, Below & Foerster, 2010).The SID model extends OGC's SensorML standard to describe the protocol of a particular sensor type in a declarative way.By means of a generic SID interpreter, the native sensor protocol can be translated to the SWE protocols.
Background
This section provides information on the Sensor Web Enablement initiative and its specifications (section 2.1).Further, the G-WaLe sensor system is described.These football-sized mobile buoys are capable of observing water level and are used in this work to demonstrate the developed architecture for an on-the-fly integration of geosensors (section 2.2).
Sensor Web enablement initiative
The goal of the Sensor Web is to allow Web-based sharing, discovery, exchange and processing of sensor observations, as well as task planning of sensor systems (Nittel et al., 2008).The Sensor Web Enablement (SWE) initiative of the Open Geospatial Consortium (OGC) defines standards which can be utilized to build such a Sensor Web (Botts et al., 2008).SWE standards make sensors available over the Web through standardized formats and Web service interfaces by hiding the sensor communication details and the heterogeneous sensor protocols from the application layer (Bröring, Echterhoff, Jirka, Simonis, Everding, Stasch, Liang & Lemmens, 2011).
The main Web services of the SWE framework are the Sensor Observation Service (SOS) and the Sensor Planning Service (SPS).The SOS (Bröring, Stasch & Echterhoff, 2010;Na & Priest, 2007) provides interoperable access to sensor data as well as sensor metadata.To control and task sensors, the SPS (Simonis, 2007) can be used.A common application of SPS is to define simple sensor parameters such as the sampling rate but also more complex tasks such as mission planning of satellite systems.Apart from these Web service specifications, SWE incorporates information models for observed sensor data, the Observations & Measurements (O&M) (Cox, 2007) standard, as well as for the description of sensors, the Sensor Model Language (SensorML) (Botts, 2007).
SensorML specifies a model and encoding for sensor related processes such as measuring or post processing procedures.Physical as well as logical sensors are modeled as processes.The functional model of a process can be described in detail, including its identification, classification, inputs, outputs, parameters, and characteristics such as a spatial or temporal description.Processes can be composed by process chains.
O&M defines a model and encoding for observations.An observation has a result (e.g., 3.52 m) which is an estimated value of an observed property (e.g., water level), a particular characteristic of a feature of interest (e.g., the Wupper river at section 42).The result value is generated by a procedure, e.g., a sensor such as a water gauge described in SensorML.These four central components are linked within SWE.
G-WaLe -A mobile water gage
The G-WaLe sensor system consists of mobile buoys capable of observing the water level by measuring the position in three-dimensional space via satellite positioning systems.The key parts of the G-WaLe system (Beltrami, 2007) are the sensing devices, so-called floaters.These geosensors can be stationary anchored within a river or can be placed on demand within a flooded area.A floater is equipped with a satellite navigation receiver, a battery, memory, as well as a communication unit.The positioning data received from the satellites is internally stored and regularly transmitted via GSM or radio to a central receiver station (see figure 3).To increase the accuracy of the measurements, local reference stations can be incorporated in the positioning process, so that a positioning accuracy of around 10 cm can be achieved.In Germany, the SAPOS system3 can be utilized for that.The water level is derived from the vertical component of the position measurement.Once the position data is transmitted from the G-WaLe floater to the receiver station, the data are accessible on an FTP server.
Flood management in the Wupper region
In the following, the Wupperverband and its role in flood management is outlined.Subsequently, an overview of the administered drainage area and potential flooding scenarios within that geographic region are outlined.Based on these considerations, the section finalizes with a presentation of different hypothetic applications for an on-the-fly integration of geosensors with regards to flood management.These example use cases serve as basis throughout the further work.
Existing water gage network
The Wupperverband is a statutory corporation whose members are for instance municipalities, water distribution companies, or industrial firms.The main responsibilities of the Wupperverband are the provision of drinking water, the operation of sewage treatment plants, the water level management including the backfilling in case of low waters, as well as the maintenance and ecological development of the river systems.Additionally, an important activity of the Wupperverband is the flood protection as well as the monitoring and warning in case of floodings.To accomplish these functions large amounts of measurement data (Beltrami, 2007).
have to be gathered, processed, analyzed and maintained.In the hydrology domain, the Wupperverband collects measurement parameters such as the stream flow amount, water level, groundwater level, or the amount of seeping water.Meteorological parameters which are of interest include precipitation data, air temperature, barometric pressure, humidity or wind direction and speed (Sat et al., 2005).
The Wupperverband is responsible for the management of over 3.000 bodies of water with an overall length of about 2.000 kilometers in the catchment area of the Wupper, which has a size of about 815 square kilometers (see Figure 4 4 ).The terrain of the region rises from West to East which results in a heterogeneous precipitation distribution.With maxima of about 1.400 mm per year, the average annual precipitation in the Wupper drainage area can be considered as very high.Another important characteristic of the region is the population density, with 1.169 inhabitants per square kilometer, is around five times higher compared to the rest of Germany.The overall population of the region is 950.000(Wupperverband, 2001).To monitor the parameters of interest within the catchment area of the Wupper, the Fig. 4. The Wupper drainage area Wupperverband operates over 50 weather stations and about 60 water gauge stations.Those water gauges are fixed installations and of various kind (e.g., staff gauges, ultrasonic gauges or radar gauges).At some gauges the water level needs to be looked up manually or a data logger needs to be read out on-site.Others support a remote transmission of the gathered data for example via telephone or GPRS .
Since the region is densely populated, large parts of the watercourses are channeled, the river banks within settlement areas are consolidated and partly outside of settlement areas, too.Thus, infrastructure objects, such as bridges or dams, reduce the discharge capacity considerably in places.The experiences of the disastrous flood at the watercourses of the Erzgebirge mountains in August 2002 have shown that such kinds of bottlenecks are often incapable of coping with the water volume of hundred-year floods (Elze et al., 2004).Thus, these objects are potentially endangered to get damaged which may cause further destructions.
In case of floodings, a potential risk goes out from industry facilities, such as plants or pipelines.Such facilities can particularly be found in the area of the lower Wupper and within the area where the Wupper empties into the Rhine.Other examples of endangered facilities are sewage treatment plants which are only capable of coping with a certain water level.These kinds of facilities demand a specific protection by technical emergency services.Also, gas storage tanks on the property of private households are at risk in flooding situations.These kinds of risks may result in subsequent water pollution, chemical spills, or industrial fires.The map in Figure 5 shows the region around the mouth of the Wupper into the Rhine.The turquoise areas show the floodplains of the river system officially determined by the public authorities.The plains endangered by regularly occurring floodings are drawn in light blue.To achieve an interoperable access to geosensors, e.g., water gauges and weather (Spies & Heier, 2008).Thus, the gathered hydrology and weather data are provided via Internet in a standardized way.The interoperable interfaces allow applications of the Wupperverband to work with internal as well as externally provided services.Hence, sensor data served by cooperating organizations (e.g., neighboring watershed management organizations) can be included into the decision making process.On the other hand, the information systems of third party organization are also enabled to work with the sensor data offered by the Wupperverband (Bröring & Meyer, 2008).
The central task of the Wupperverband in the context of flood protection is to create precipitation discharge models for the regional watercourse system.Based on those models, statistically possible flood scenarios (e.g., twenty-year or hundred-year floodings) and their consequences are simulated.The results of these simulations are the foundation for catalogs of countermeasures which aim at minimizing the damage in case of floodings.These are usually long-term measures.Real-time reactions on flood situations are currently not the focus of the Wupperverband.In fact, systems for the real-time management of disasters such as floods are still topic of research.An example for such a system has been developed within the SoKNOS project (Stasch et al., 2008).The objective of that project has been the research and development of concepts which effectively support governmental and industrial organizations in the area of public security.A service oriented approach for an emergency management system has been elaborated which helps technical services in handling disaster situations.The requirements for such emergency management systems are exceptional high, especially concerning scalability and reliability in disaster situations.The methods developed within this research support the development of such systems.However, a fully functional system fulfilling those extremely high requirements of emergency management is not the direct outcome.
Flooding use case
The following hypothetical scenario description is based on the real world flooding of the river Eschbach within the catchment area of the Wupper which happened in August 2007 and caused severe damages in the region (Boch & Schreiber, 2007).
Heavy rainfall is dominating the weather conditions over major parts of western Germany for the past days.This has lead to serious high waters especially along the Lower Rhine and its tributary rivers.During this tense situation, a massive local thunderstorm and heavy rainfalls occur over the greater region of the Wupper drainage area.With over 70 liters per square meter and hour, the measured precipitation amount is at certain weather stations statistically less frequent than every 100 years.Because the soil in the region has already contained much moisture, it is quickly saturated and incapable of absorbing further water.This leads to very high discharge rates along the watercourses of the Wupper region.
Of high importance is that emergency measures are conducted at the right places to assure the protection of critical objects such as dams, bridges or industry facilities.Up-to-date sensor data are the foundation for a sensible decision making.Therefore, the organization for disaster relief cooperates with the Wupperverband and requests water level measurements for all parts of the river basin.These data are necessary to compute the degree of danger of individual river reaches and to create exact situation awareness.
Certain parts of the river courses are not densely enough covered with pre-installed water level gauges.So, the emergency management decides to deploy new mobile geosensors on-the-fly at those reaches of the river.These geosensors increase the temporal and spatial density of precipitation, water level or stream flow measurements.The gathered data serve as input for exact stream flow models in order to compute risk estimations and forecasts.For this scenario, we assume that the Wupperverband is already capable of performing those computations in real-time.
As mentioned above, a local Sensor Web infrastructure is already in place and in productive use for the affected region.It is managed and maintained by the Wupperverband.This infrastructure enables the interoperable access to available water gauges and weather stations and their collected data.The information systems used by the emergency management rely on data provided by this Sensor Web.The newly deployed geosensors have to be made available within this infrastructure in an ad-hoc manner, so that operational applications of the emergency management can directly utilize their collected observations.Immediately after the deployment of the new geosensors in the field, they have to be accessible via the Sensor Web.The existing, persistent water gauges and precipitation sensors of the Wupperverband are partly not equipped with remote data transmission and require a manual readout of the last measured values.Other sensors report their data automatically but with a slow rate.Thunderstorms with heavy rain are short-term events which require a quick reaction time.So, an ad-hoc integration of suitable sensors deployed at endangered locations and transmitting data frequently to a base station can significantly enhance the monitoring and management of the flooding situation.An example for a mobile water gauge sensor, that can be used in the outlined scenario and serves as a test object within this work, is the G-WaLe sensor system as described in section 2.2.
Another example scenario, where an on-the-fly integration of new geosensors facilitates their usage, is construction site monitoring.Constructions built next to or in a water body have to be equipped with multiple kinds of sensors, e.g., to gain information about the quality of the water fed into the river.This use case is not as time critical as disaster management.The construction process is a priori known, at least several weeks before it starts.However, an easy and quick integration of new geosensors into a coherent infrastructure would also facilitate this scenario.
An architecture for the on-the-fly integration of geosensors
In this section, we present an architecture that enables the on-the-fly integration of sensors with the Sensor Web.We apply this architecture to the flood management use case and the G-WaLe sensor system in the following section 5.
Sensor Bus -A publish/subscribe mechanism for the Sensor Web
An automated on-the-fly integration of sensors and Sensor Web services requires a mechanism that enables a sensor to publish its availability as well as its measured data and enables a service to subscribe for sensors to subsequently receive their data.We realize such a publish / subscribe mechanism here by introducing an intermediary sensor integration layer between the sensor layer and the Sensor Web layer; see figure 6.This intermediary layer is externally designed as a logical bus -the Sensor Bus, as developed by Bröring, Foerster, Jirka & Priess (2010).In this article, we make use of the Sensor Bus and combine it with a generic driver mechanism (section 4.2) to apply it to the flood management use case and the G-WaLe sensor.In the following the concept of the Sensor Bus is outlined.Aligned with the message bus pattern (Hohpe & Woolf, 2003), the Sensor Bus incorporates (1) a common communication infrastructure, (2) a shared set of adapter interfaces, and (3) a well-defined message protocol.The common communication infrastructure is realized upon an underlying messaging technology.The Sensor Bus is independent of the underlying messaging technology which can therefore be exchanged.It can, for example, be realized with instant messaging systems such as XMPP5 or IRC6 , but also using Twitter as shown by Bröring, Foerster, Jirka & Priess (2010).Services as well as sensors can publish messages to the bus and are able to subscribe to the bus for receiving messages in a push-based communication style.The different components (i.e., sensors and Sensor Web services) can subscribe and publish to the Sensor Bus through adapters.Those adapters convert the service or sensor specific communication protocol to the internal bus protocol; see figure 7).A Fig. 7. Components of the Sensor Bus.detailed analysis of interactions between the sensor layer and the Sensor Web layer, which emerge when introducing the Sensor Bus as an intermediary layer, is conducted by Bröring, Foerster & Jirka (2010).Those interactions are realized through particular bus messages.The bus contains two different kinds of channels to exchange such messages (figure 7).First, the management channel is used to register a component at the bus and publish its metadata, i.e. sensor characteristics or service requirements.Second, there are communication channels, where sensors publish their measurements.Each communication is dedicated to a particular observed property (e.g., temperature or water level).The most important bus messages realize the following functionalities: • connecting a sensor and passing a sensor description (encoded in SensorML) • subscribing a service for specific sensors by defining required sensor characteristics • publishing data measured by a sensor • directing sensors / services to bus communication channels A detailed description of the message protocol of the Sensor Bus and a proof-of-concept implementation can be found at Bröring, Foerster, Jirka & Priess (2010).
Sensor interface descriptors -A generic driver mechanism for geosensors
Before a geosensor can be integrated with the Sensor Web, a driver is required which understands the native sensor protocol and offers a well-defined interface that makes the functionality of the sensing device available to the outside.Since there are numerous kinds of environmental sensors with various interfaces available, we propose the usage of a generic driver mechanism for sensors.The Sensor Interface Descriptor (SID) model described in our previous work (Bröring & Below, 2010;Bröring, Below & Foerster, 2010) can be used to provide this functionality.The SID model supports the declarative description of sensor interfaces.It is designed as a profile and extension of OGC's SensorML standard (section 2.1).An instance of the SID model, designed for a particular type of sensor, defines the precise communication protocol, accepted sensor commands, or processing steps for transforming incoming raw sensor data.Based on that information, a so-called SID interpreter is able to establish the connection to the sensor and translates between the sensor protocol and a target protocol.For this work, we have developed a generic SID interpreter which acts as a sensor adapter and converts data received from a sensor in order to transfer it to the Sensor Bus; see figure 8. SID interpreters can be built independently of particular sensor technology since they are based on the generic SID model.Figure 9 depicts an excerpt of this model.The blue colored, SID specific classes extend the beige colored classes defined in SensorML.The SID is strictly encapsulated within the InterfaceDefinition element of a SensorML document.Since the SID is designed for a certain type of sensor and not for a particular sensor instance, this encapsulation makes the interface description independent of the rest of the SensorML document.Consequently, it is easily exchangeable and can also be reused in SensorML documents of other sensors of the same type.
The SID model extends the elements of the Open Systems Interconnection (OSI) reference model (ISO/IEC, 1996) which are already contained in SensorML and associated with the interface definition.The OSI model is the basis for designing network protocols and therefore consists of a number of layers.On the lowest layer, the physical layer, the structure of the raw incoming and outgoing sensor data stream is described.This includes the definition of block identifiers and separator signs within the data stream.Next, encoding and decoding steps can be applied to the raw sensor data.Therefore, according processes can be specified and attached to the data link, network, transport, and session layer.Such processing steps Fig. 8. SID interpreter as a sensor adapter for the Sensor Bus are for example character escaping or checksum validation which are necessary for reliable communication with sensors.Finally, the application layer can be used to define commands accepted by a sensor, including their parameters, pre-and postconditions, as well as response behavior.Those command definitions can for example be used by a Sensor Planning Service (section 2.1) to provide an interoperable interface for tasking.A detailed description of the model can be found at Bröring & Below (2010); Bröring, Below & Foerster (2010).Sensor interfaces and communication protocols are often complex.Consequently, the design and manual creation of SID instances is not straightforward.Hence, a visual SID creator has been developed (Bröring, Bache, Bartoschek & van Elzakker, 2011).This graphical tool (see figure 10) supports users in describing the sensor interface and generate SID instances for their geosensors.The creator can be used by sensor manufacturers to create SIDs for their Approaches to Managing Disaster -Assessing Hazards, Emergencies and Disaster Impacts products and provide them to clients for an easy integration of their geosensors with the Sensor Web.Alternatively, this tool can help owners of sensors to create SIDs if they are not already available from the sensor manufacturer.
Application
This section applies the above presented architecture to the use case of a flood in the Wupper region where G-WaLe sensors need to be dynamically deployed to increase the density of the measurement network (section 3.2).A local Sensor Web infrastructure is already in place and built upon the Sensor Bus.New geosensors need to be integrated in an on-the-fly manner.As a proof-of-concept, we demonstrate in the following the integration of the G-WaLe sensor with a Sensor Observation Service.
To connect the G-WaLe sensor to the Sensor Bus we utilize the generic sensor adapter which incorporates an SID interpreter (section 4.2).The SID for the G-WaLe sensor has been designed using the SID creator tool that follows the wizard user interface pattern (Tidwell, 2006).Figure 10 shows the page of the wizard that allows to define how to physically connect to the sensor, e.g., via USB, serial connection, or with an indirect file system connection, as chosen here.The G-WaLe floater devices store measurements in a data file on an FTP folder.These measurement files are read out by the SID interpreter.To enable the SID interpreter to understand the structure of the data file, i.e. the sensor protocol, the page of the SID creator shown in figure 10 further allows to define the structure of the data file.A G-WaLe data file Fig. 10.Sensor protocol defined in SID Creator.contains a timestamp in GPS weeks and milliseconds, the measured elevation in meters, as well as the accuracy of the measurement in meters.Each row of the file represents a data block and ends with the carriage return.The fields of the block are divided by the tab character.An example is shown in listing 1.This structure is defined in the SID creator as shown in figure 10.The signs for block and field separation are specified in ASCII code, i.e., <CR> for a carriage return and <HT> for a tab character.
Listing 1. Example of a G-WaLe data file; each line contains tokens for measured GPS week, GPS milliseconds, elevation (m), and accuracy (m).
1570 547200000 9 3 .8 3 1 0 .0 4 1570 548100000 9 7 .1 6 0 0 .0 4 1570 549000000 9 3 .8 0 4 0 .0 4 1570 549900000 9 1 .5 2 9 0 .0 4 While other sensor protocols contain multiple kinds of data block structures, that can also be defined in the SID creator, the G-WaLe data file uses only one kind of data block structure that contains the four measurement fields.Those fields are named in the SID creator (e.g., the third field is called Elevation), so that they can be referenced during further processing of the data.Listing 2 shows the according SID code generated by the SID creator.
Listing 2. Excerpt of the generated SID file.Once the sensor adapter is started, it sends the message for connecting the sensor to the Sensor Bus and advertises its characteristics.Depending on the phenomenon observed by the sensor, internal management components of the Sensor Bus direct the sensor to the appropriate communication channel.There, the sensor adapter publishes the data measured by the G-WaLe sensor.
Similar to the registration of a sensor, a service adapter subscribes a service, here a Sensor Observation Service (SOS), at the Sensor Bus and defines the sensor characteristics in which the service is interested.For example, a service can declare interest in geosensors which observe the property Elevation.Then, an internal management component of the Sensor Bus points the service to each sensor that matches the requested characteristics and directs the service to the communication channel used by the sensor.Subsequently, the service adapter registers the sensor at the SOS by calling the RegisterSensor operation.
As soon as the sensor adapter publishes measurements in the communication channel, the service adapter inserts that data as observations into the SOS.Therefore, the service adapter transforms the received data to InsertObservation requests and sends it to the SOS.An example of such a request is shown in Listing 3. x l i n k : h r e f =" h t t p :// sweet .j p l .nasa .gov /1.1/ property .owl# E l e v a t i o n "/> < f e a t u r e O f I n t e r e s t > <sa : SamplingPoint gml : id =" p1" > <sa : sampledFeature x l i n k : h r e f =""/ > <sa : p o s i t i o n > <gml : Point > <gml : pos srsName =" urn : ogc : def : c r s : EPSG: 4 3 2 6 " > 5 2 .6 4 7.12 </gml : pos> </gml : Point > </sa : p o s i t i o n > </sa : SamplingPoint > </ f e a t u r e O f I n t e r e s t > < r e s u l t x s i : type ="gml : MeasureType " uom="m" > 9 3 .8 3 1 </ r e s u l t > </Observation > </s o s : InsertObservation > Henceforth, the data is stored by the SOS and available to clients via its standardized interface.It can be accessed and retrieved in a pull-based manner.An example of such a client application which allows accessing and displaying sensor data from an SOS is shown in figure 11 and available as open source at 52 • North7 .
Similarly to registering an SOS, a Sensor Alert Service or Sensor Event Service (section 2.1) can be subscribed at the Sensor Bus to provide data in a push-based manner.Those push-based services receive the incoming data, filter it by certain predefined criteria and forward it to interested clients.
Once available on the Sensor Web, the elevation measurements coming from the G-WaLe sensor can be related to the measurements of water gauges within the same river, and can be used for determination of the moment the wave of the flood passes the mobile water gauge.
Conclusions & outlook
In this chapter, we stress the need for methods that enable an easy and flexible integration of geosensors with the Sensor Web, as a coherent information infrastructure.Thereby, an on-the-fly integration needs to be automated to minimize administration and configuration efforts.Such a facilitated integration of geosensors can support various use cases, in particular disaster management.Here, a flood management use case in the region of the German river Wupper is described, where heavy rain events have caused a local flood.Additional water gauges are needed to increase the measurement density and improve flood management.We demonstrate how the G-WaLe sensor, a mobile water gauge, can be used and dynamically integrated with the Sensor Web by means of the developed architecture.
The architecture consists of (1) the Sensor Bus, a message bus that realizes a publish / subscribe mechanism between sensors and web services and (2) the Sensor Interface Descriptor (SID) concept, a generic driver mechanism for geosensors.Both components are available as open source software at 52 • North8 .The presented approach is generic and in related articles we have shown that it can also facilitate the integration of radiation sensors (Bröring, Below & Foerster, 2010), a basis for managing nuclear disasters, or the integration of oceanographic sensors to fight oil spills or harmful algae plumes (Bröring, Maué, Janowicz, Nüst & Malewski, 2011).
The SID concept enables the operation of geosensors without the necessity of manually implementing a driver for the instrument.Instead, an SID file is created which describes the structure of the sensor's protocol.This SID creation is supported by the SID creator tool.Any SID interpreter implementation that follows the SID specification (Bröring & Below, 2010) can execute the SID file and can communicate with the geosensor.Of course, the current design of the SID model does not accommodate every possible sensor protocol, but a broad variety of manufacturer specific protocols is already covered.Possible extensions of the SID specifications will broaden the range of supported protocols.
For the future, we are particularly interested in applying the developed approach in countries, such as Pakistan where floods have caused enormous damage in the recent past.In under-developed regions, where no static water gauge network is maintained by the state, a system that enables the on demand deployment and on-the-fly integration of water gauge sensors would significantly support flood management.
Fig. 2 .
Fig. 2. Deployment of a G-WaLe floater in a river.
Fig. 5 .
Fig. 5. Potential flooding zones of the lower Wupper station, the Wupperverband has built up a local Sensor Web.The services of the SWE initiative have been used to encapsulate those sensor systems for seamlessly integrating them into the existing Spatial Data Infrastructure of the Wupperverband(Spies & Heier, 2008).Thus, the gathered hydrology and weather data are provided via Internet in a standardized way.The interoperable interfaces allow applications of the Wupperverband to work with internal as well as externally provided services.Hence, sensor data served by cooperating organizations (e.g., neighboring watershed management organizations) can be included into the decision making process.On the other hand, the information systems of third party organization are
Listing 3 .
Example of a simplified SOS InsertObservation request.<sos : InsertObservation s e r v i c e = 'SOS ' v e r s i o n = ' 1 .0 : t i m e P o s i t i o n > </samplingTime> <procedure x l i n k : h r e f =" h t t p :// myserver .org/ sensor/G−WaLe−1"/> <observedProperty | 9,730 | 2012-03-14T00:00:00.000 | [
"Computer Science"
] |
David Armstrong on the Metaphysics of Mathematics
This paper has two components. The first, longer component (sec. 1-6) is a critical exposition of Armstrong’s views about the metaphysics of mathematics, as they are presented in Truth and Truthmakers and Sketch for a Systematic Metaphysics . In particular, I discuss Armstrong’s views about the nature of the cardinal numbers, and his account of how modal truths are made true. In the second component of the paper (sec. 7), which is shorter and more tentative, I sketch an alternative account of the metaphysics of mathematics. I suggest we insist that mathematical truths have physical truthmakers, without insisting that mathematical objects themselves are part of the physical world.
David Armstrong on the Metaphysics of Mathematics
Thomas M. E. Donaldson This paper has two components. The first, longer component (sec. 1-6) is a critical exposition of Armstrong's views about the metaphysics of mathematics, as they are presented in Truth and Truthmakers and Sketch for a Systematic Metaphysics. In particular, I discuss Armstrong's views about the nature of the cardinal numbers, and his account of how modal truths are made true. In the second component of the paper (sec. 7), which is shorter and more tentative, I sketch an alternative account of the metaphysics of mathematics. I suggest we insist that mathematical truths have physical truthmakers, without insisting that mathematical objects themselves are part of the physical world.
A prime number is a "Sophie Germain prime" if (2 + 1) is also prime. It is conjectured that there exist infinitely many Sophie Germain primes. I don't know whether this conjecture is true, but what I do know is that there exist some Sophie Germain primes: 2 is an example; 3 is another; (2, 618, 163, 402, 417 × 2 1290000 − 1) is a third, or so I am told. Now it is obvious that every Sophie Germain prime is a number; and it follows that there exist some numbers-or so it seems. But what kind of a thing is a number? This is a difficult question, but one point at least seems clear: numbers and other mathematical entities are "abstract," in the sense that they have no causal powers and no location in spacetime. We are told that the number zero was discovered in India, but it would be a mistake to go to India now to look for it-and not because it has subsequently been moved. You can't trip over the number three. The polynomial ( 2 -3 + 2) can be split into two factors, ( -2) and ( -1), but not by firing integers at it in a particle accelerator. The empty set has no gravitational field. And so on.
And so, to labour the point, it seems that some abstract entities exist.
And yet David Armstrong began his last book by endorsing what he called "naturalism": I begin with the assumption that all that exists is the space-time world, the physical world as we say. […] [This] means the rejection of what many contemporary philosophers call "abstract objects," meaning such things as numbers or Platonic Forms or classes, where these are supposed to exist "outside of" "or"extra to" spacetime. (2010,1) Despite this naturalism, Armstrong did not reject any part of mainstream mathematics. Indeed, he insisted that the truths of orthodox pure mathematics are necessary and a priori (2010, Ch. 12).
In this paper, I explore Armstrong's attempt to reconcile his denial of abstract entities with his commitment to orthodox mathematics. To be more specific, the paper has three goals.
1. Armstrong wrote a vast amount on mathematics, and this writing is spread among many papers and books. Some of this work is complex, and Armstrong changed his mind on certain important questions. My first goal in this paper is to describe-clearly, briefly, and in one place -Armstrong's mature views on the metaphysics of mathematics, including relevant aspects of his work on the metaphysics of modality.
To prevent the discussion from sprawling, I focus particularly on Armstrong's account of cardinal number, as it is presented in his last two books: Truth and Truthmakers (2004, "T&T") and Sketch for a Systematic Metaphysics (2010, "SSM"). Sec. 1 and section 2 describe Armstrong's views about the cardinal numbers; section 3 -section 6 focus on modality. 2. My second goal is to present some novel-and, I believe, definitiveobjections to Armstrong's views on the metaphysics of mathematics. These objections are presented in section 5 and section 6. 3. My third goal is to recommend a different way of thinking about the metaphysics of mathematics-an approach which will, I hope, appeal to people who admire Armstrong's work. Briefly, I will suggest that we insist that every mathematical truth has a truthmaker in the physical world, without also insisting that mathematical objects themselves are physical things. The proposal is presented in more detail in section 7.
Cardinal Numbers as Concrete Entities
The claim that numbers have no spatial location is familiar to metaphysicians, but it sometimes comes as a surprise to students. "But there are three pens on my desk right now!" they exclaim, implying that the number three itself is within arm's reach. According to Armstrong, the surprised students are on to something. While he rejected "Platonic forms" which exist outside spacetime, Armstrong did believe that there are properties which exist within the particulars that instantiate them (SSM, ch. 2). For Armstrong, there exists a property is red which exists within London buses, ripe tomatoes, and male cardinals. As he sometimes put it, properties are "immanent" rather than "transcendent". He inferred that all properties are instantiated. Uninstantiated properties have no place in spacetime, and so no place in Armstrong's philosophical system (SSM,(15)(16). He made the same claim about relations (SSM, 23).
For Armstrong, a cardinal number is a relation between a particular and a property. Specifically, the cardinal number κ is a relation that a particular bears to a property P just in case has, as mereological parts, exactly κ particulars which instantiate P. A normal octopus bears the one relation to the property is an octopus, and the eight relation to the property is a limb. The mereological sum of two normal octopuses bears the two relation to the property is an octopus and the sixteen relation to the property is a limb. And so on. 1, 2 On Armstrong's view, then, the surprised student is correct to think that cardinal numbers exist within the "spacetime world". It turns out, then, that one can coherently maintain that numbers exist and that there are no abstract entities.
This is a striking result-and yet, a problem looms. As I said, Armstrong insisted that properties and relations exist only when they are instantiated. On this view, the number 10 10 10 exists only if the spacetime world happens to contain 10 10 10 particulars. And it is far from clear that the spacetime world is that large. I call this "the problem of size".
It is tempting to reply to this objection by insisting that space is infinitely divisible. Discussing the problem of size as it arises for Aristotle, Jonathan Barnes writes: Physical objects are, in Aristotle's view, infinitely divisible. That fact ensures that, even within the actual finite universe, we shall always be able to find a group of objects, for any […] If the universe consisted simply of a single sphere, it would also contain two objects (two hemispheres), three objects (three third-spheres) and so on. We shall never run short of numbers of things […]. (1985,122) This cannot be considered a satisfactory solution to Armstrong's problem, however. For one thing, it is far from clear that Aristotle was correct in thinking that space is infinitely divisible: those who study quantum gravity have been known to speculate that space is in fact discrete. And so the proposed solution is somewhat "hostage to fortune." More importantly, even if the proposed account does secure the existence of large finite numbers such as 10 10 10 , it still leaves the existence of transfinite cardinals open to doubt. Standard mathematical descriptions of spacetime (which do imply infinite divisibility) entail that the set of spacetime points has cardinality ℶ 1 . Such accounts leave it unclear whether larger cardinal numbers (e.g. ℶ 2 , ℶ 3 , or even ℶ ) are instantiated in the physical world-and yet these larger cardinals are very much a part of orthodox mathematics. 3 In passing, I note that Armstrong faced a problem of size in his account of set theory too. While Armstrong identified cardinal numbers with relations, he identified sets with individuals. The central claim in Armstrong's account was David Lewis's "brilliant insight" (T&T, 120): the mereological parts of a set are precisely its non-empty subsets. For example, Lewis's claim implies that the mereological proper parts of {a, b, c} are {a}, {b}, {c}, {a, b}, {b, c}, and {c, a}. This implies that every singleton set is mereologically simple. Now the set theorists tell us that for each cardinal number κ, there are at least κ singletons. Thus, Armstrong is stuck with the claim that, within the spacetime world, there are least κ mereologically simple individuals, for each κ. And this seems highly doubtful. Perhaps one could plausibly argue that there ℶ 1 mereological simples by saying that each spacetime point is mereologically simple and there are ℶ 1 of those. But it is hard to see any justification for the claim that there are ℶ 2 , or ℶ 3 , or even ℶ mereological simples in the spacetime world. Once again, the physical universe seems to be too small to accommodate the ontology of mathematics. 4
Armstrong's Possibilism
Armstrong was aware of the problem of size. He responded by claiming that larger cardinal numbers exist in posse though not in esse: Armstrong called this possibilism. The doctrine is not easy to interpret. In the above passage, Armstrong seems to suggest that a mathematical entity exists provided that it could be instantiated -even if it is in fact not instantiated. However, in this same passage, Armstrong contrasts his possibilism with the "Platonist" claim that there are uninstantiated mathematical entities.
The following quotation gives us a clue about what Armstrong meant: We say '7 + 5 = 12', but this can rendered more transparently, though more boringly, as ⟨Necessarily, if there are seven things, and five further things, then the sum of these things are twelve things⟩. (T&T, 101) What this passage suggests is that, for Armstrong, while the sentence "7 + 5 = 12" appears to describe a relation among three mathematical entities (viz. the numbers seven, five and twelve) it is in fact a generalization about pluralities of marbles, pebbles, sticks, or whatever. More generally, the possibilist maintains that while pure mathematics appears to describe a domain of special mathematical entities, it in fact consists of modal statements-statements about what is necessary or possible. Now Armstrong did not develop this proposal systematically; instead, he endorsed Geoffrey Hellman's modal structuralism: I recognize, of course, that asserting here this […] doctrine of mathematical existence is to a degree a matter of hand-waving. I have not the logico-mathematical grasp to defend it in any depth. That has been done, in particular by Geoffrey Hellman. (T&T,117) The reader who wants a thorough discussion of modal structuralism should consult Hellman (1989). For now, a back-of-the-envelope summary will be sufficient. The modal structuralist claims that the theory of the natural numbers is not a description of some particular sequence of entities; rather, the theory concerns all possible models of the Peano axioms. 5 For example, when a mathematician asserts that there are infinitely many prime numbers, what is really meant is something like this: 6 It is necessary that, in any model of the Peano axioms, the domain contains infinitely many prime elements.
Notice that this statement does not imply that there is a model of the Peano axioms somewhere in spacetime.
When a modal structuralist mathematician asserts that, necessarily, every model of the Peano axioms contains infinitely many prime elements, she will of course wish to rule out the suggestion that this is true "vacuously"-that is, simply because models of the Peano axioms are impossible. Thus, a modal structuralist mathematician will claim that models of the Peano axioms are possible.
Hellman discusses in some detail how to extend this approach beyond number theory, and into applied mathematics. We need not look into these details. For our purposes, the key point is that the appeal to modal structuralism allows Armstrong to say that "10 10 10 is even" is true (when properly interpreted) 5 The Peano axioms are the standard axioms in the theory of the natural numbers. Among them are such claims as "Zero is a number", and "If is any natural number, + 0 = 0". 6 The version of modal structuralism that I so quickly sketch here is hermeneutic rather than revolutionary (for this distinction, see J. P. Burgess and Rosen 1997, 6-7). This is, I think, the correct interpretation of Armstrong's position. For Hellman's position, see (1998). without committing himself to the questionable thesis that 10 10 10 physical objects exist. 7 The attractions of the approach are obvious, but Armstrong's possibilism brings with it a new problem. Armstrong was a truthmaker maximalist -he believed that every true proposition has a "truthmaker," that is, an entity in the spacetime world which is sufficient (and perhaps more than sufficient) to explain the truth of the proposition. 8 Armstrong was thus stuck with the formidable task of identifying truthmakers for complex modal truths like those described above. It is my contention that Armstrong did not succeed at this task, as I shall explain in the next three sections. 9
Armstrong's Entailment Principle
Before we look at Armstrong's discussion of truthmaking and modality, we must consider his Entailment Principle, which is crucial to his account. Some notation will be helpful: I will put a sentence between angled brackets to represent the corresponding proposition. For example, ⟨Sam is dancing⟩ is the proposition that Sam is dancing. Here is the Entailment Principle, as it is formulated in Sketch for a Systematic Metaphysics: The Entailment Principle (SSM Version). If entails , then any truthmaker for ⟨ ⟩ must be a truthmaker for ⟨ ⟩ too. (SSM,(65)(66) For example, since entails ¬¬ , it is a consequence of Armstrong's entailment principle that any truthmaker for ⟨ ⟩ must also be a truthmaker 7 Hellman's modal structuralism involves second-order quantification, and it is worth thinking about how such quantifiers should be interpreted within Armstrong's metaphysical system. One approach is to say that the second-order variables range over properties (including "second-rate" properties-see footnote 2). Some restriction of the usual comprehension axiom will be needed to accommodate Armstrong's contention that there are no uninstantiated properties. For a version of modal structuralism that does not require second order quantification, see Berry (2018). 8 The parenthetical "and perhaps more than sufficient" is there to indicate that Armstrong's was an inexact conception of truthmaking, to use Kit Fine's terminology-see (2017). 9 Fox (1987) endorses a purely modal conception of truthmaking. According to Fox, is a truthmaker for just in case it is necessary that if exists then is true. On this approach, it is easy to identify truthmakers for purely mathematical truths. Since the truths of pure mathematics are necessary, given Fox's purely modal conception of truthmaking, anything whatever is a truthmaker for any purely mathematical truth. Armstrong himself vigorously rejected this approach, insisting that truthmakers must be relevant to the propositions they make true (T&T, 11). For more on this theme, see Cameron (2018). for ⟨¬¬ ⟩. In this example, the Entailment Principle is plausible. However, there is an important objection to this formulation of the principle. As Restall (1996) has pointed out, this simple version of the Entailment Principle conflicts with a popular and appealingly simple (though not undisputed) account of truthmaking and disjunction: The Disjunction Principle. makes true the proposition ⟨ ∨ ⟩ if and only if makes true ⟨ ⟩, or makes true ⟨ ⟩, or both.
To see the conflict, consider the following argument: Let and be any two true sentences, and suppose that is a truthmaker for ⟨ ⟩. Then since entails ( ∨ ¬ ), must also be a truthmaker for ⟨ ∨ ¬ ⟩. By the Disjunction Principle, must be a truthmaker either for ⟨ ⟩ or for ⟨¬ ⟩. But by hypothesis, ⟨ ⟩ is true so ⟨¬ ⟩ is false and so ⟨¬ ⟩ has no truthmakers. So must be a truthmaker for ⟨ ⟩.
This little argument appears to show for any two true sentences and , any truthmaker for ⟨ ⟩ is also a truthmaker for ⟨ ⟩-a result which completely trivializes truthmaker theory.
In Truth and Truthmakers, Armstrong gives a more sophisticated version of the Entailment Principle which is not subject to the same objection: The Entailment Principle, (T&T Version). If entails* , then any truthmaker for ⟨ ⟩ must be a truthmaker for ⟨ ⟩ too. (T&T, 10) Here, entailment* is some non-classical entailment relation, to be specified. By insisting that need not entail* ( ∨ ¬ ), we can maintain a version of the Entailment Principle without having to conclude, absurdly, that all propositions expressed by true sentences have the same truthmakers. 10 We have seen that the "SSM version" of the Entailment Principle conflicts with the Disjunction Principle, and that one can maintain the Disjunction Principle by endorsing the "T&T version" instead. If a truthmaker theorist wishes instead to maintain the simpler, "SSM version" of the Entailment 10 It is not easy to say exactly what entailment* is. Restall (1996) has proposed that entailment* "is nearly, but not quite, the first degree entailment of relevant logic". Linnebo (2017) has suggested that entailment* includes first-order intuitionistic entailment without identity. Thankfully, we need not settle this question here.
Principle, she may choose to reject the Disjunction Principle-and indeed philosophers have presented independent reasons for rejecting this principle. 11 We need not settle this dispute here. Suffice it to say that appeals to the "SSM version" of the Entailment Principle are subject to dispute. 12
Armstrong on Truthmaking and Possibility
Having briefly looked at the Entailment Principle we are ready to consider Armstrong's account of truthmaking and modality. Let's start with possibility. Suppose that the sentence expresses a contingently true proposition; what then are the truthmakers for ⟨♦ ⟩ and ⟨♦¬ ⟩? ⟨♦ ⟩ is comparatively straightforward. Since ⟨ ⟩ is true, Armstrong argued, it must have a truthmaker, . Since entails ♦ , will be a truthmaker for ⟨♦ ⟩ as well, by the Entailment Principle. 13 ⟨♦¬ ⟩ is rather more difficult. Armstrong introduced his "possibility principle" (T&T, 84) to deal with the problem: Possibility Principle. If ⟨ ⟩ is a contingent truth and is a truthmaker for ⟨ ⟩, then is a truthmaker for ⟨♦¬ ⟩.
The principle is not attractive on its face. As Pawl (2010) has pointed out, Armstrong's being legged is a truthmaker for ⟨Someone has legs⟩, but it is hardly plausible that Armstrong's being legged is a truthmaker for ⟨Possibly, nobody has legs⟩. But Armstrong claimed that the Possibility Principle is a consequence of the Entailment Principle. He presented the following argument (T&T, 84, notation slightly modified): (3): in no standard modal logic is it true that ⟨ ⟩ entails ⟨♦¬ ⟩ (special cases aside). So the Possibility Principle is implausible on its face, and the argument Armstrong gave for it is unconvincing. I think we should conclude that the principle should be rejected. 14 Happily, Armstrong also offered another and more attractive account of truthmaking and possibility. The idea is this. Suppose that we have (separately) two slices of bread, fifteen slices of cheese, and two slices of tomato. These things could have constituted a cheese and tomato sandwich-although in fact they don't. Plausibly, they together form a truthmaker for the proposition that a cheese and tomato sandwich could exist. Armstrong wrote: Consider, in particular, the cases where the entities in question do not exist, where they are mere possibilities. It is, let us suppose, true that ⟨it is possible that a unicorn exists⟩. What then is a minimal truthmaker for this truth? The obvious solution is combinatorial. The non-existent entity is some non-existent (but possible) combination out of elements that do exist. The phrase "non-existent combination" may raise eyebrows. Am I committing myself to a Meinongian view? No, I say. The elements of the combination are, I assert, the only truthmakers that are needed for the truth that this combination is possible. (T&T, 91-92) I think that this is a very attractive account of what the truthmakers are for some truths about possibility (including truths about unicorns and tomato sandwiches). 15 However, it is doubtful that the combinatorial approach provides us with sufficient truthmakers for all the possibility claims made by the modal structuralist mathematician. The modal structuralist will assert that second-order ZFC could have had a model, but it seems unlikely that such a model could be created by recombining physical objects, because it seems unlikely that there are enough physical objects to go around. If, for example, there are only ℶ 3 physical objects, we will not by combining them 14 Armstrong (2007) recognized the error in his argument for the possibility principle and went on to offer a new argument for the Possibility Principle. For criticism of this later argument, see Pawl (2010). 15 For some criticisms of Armstrong's "combinatorialist" theory of modality, see Wang (2013). be able to produce a set with ℶ 4 elements, but every model of second-order ZFC contains sets with ℶ 4 elements-and indeed much larger sets to boot. The problem of size has reemerged in a new form.
Armstrong on Truthmaking and Necessity (Part 1: Truth and Truthmakers)
Let's turn to Armstrong's discussion of propositions about necessity. Since our concern is Armstrong's philosophy of mathematics, we need not discuss all of Armstrong's views about truthmaking and necessity. Instead, we'll focus on what he had to say about truthmakers for the theorems of mathematics. Armstrong's discussions of this topic in Truth and Truthmakers and Sketch for a Systematic Metaphysics are very different. In this section, I'll consider chapter eight of Truth and Truthmakers, leaving the later book until section 7. Armstrong (T&T,99,111) suggested that the numbers themselves constitute truthmakers for some arithmetical truths. For example, seven, five and twelve may together form a truthmaker for ⟨7 + 5 = 12⟩. 16 For Armstrong, so long as seven, five and twelve exist they must be related in this way, and so nothing beyond their existence is needed to explain their being so related. This relation between the three numbers is "internal" to them. This is an important idea, and I will return to it in section 6. But this is not on its own a complete solution to the problem at hand. Consider for example the proposition ⟨ℶ + ℶ = ℶ ⟩. This is a theorem of orthodox mathematics, and so Armstrong would surely accept that the proposition is true, when given its proper modal interpretation. But what is its truthmaker? Surely ℶ itself can be a truthmaker only if it exists. However, as we saw in section 1 and section 2, it is doubtful for Armstrong that ℶ exists. 17 Later in the chapter, Armstrong discussed analytic truths. He wrote: A traditional view, which has many supporters, is that [analytic] truths are true solely in virtue of the meanings of the terms in which they are expressed. (T&T, 109) Armstrong went on to say that "[t]he phrase 'in virtue of' inevitably suggests truthmakers." So Armstrong proposed that if a sentence S is analytic, the proposition it expresses is made true by the meanings of the words in S. For example, ⟨A father is a male parent⟩ is made true by the meanings of "a", "father", "is", "a", "male" and "parent". Now Armstrong suggested-somewhat tentatively-that statements in mathematics about what is necessary are analytic. 18 On this view, the meanings of mathematical terms make true all such statements.
I do not dismiss completely the claim that mathematical truths are analytic. 19 However, Armstrong's version of this thesis is insufficient to solve the problem at hand. Let be some true proposition from pure mathematics. Armstrong believed that the theorems of pure mathematics are necessarily true. So we can ask what the truthmaker for would have been, had there been no language-users. How might Armstrong reply? Surely it is not adequate to say that the meanings of English words would have been the truthmakers-for English words would not have existed in the absence of English speakers. 20 If Armstrong replies that would not have had a truthmaker, he would be stuck with the surely unwanted conclusion that it is possible for a proposition to be true without a truthmaker. And so the proper Armstrongian conclusion is that would have had a different set of truthmakers, had there been no language-users. But then we are left with the question of what these truthmakers would have been -and until this question is answered, Armstrong's account is incomplete. 18 Armstrong wrote: "There may be something mechanical, something purely conceptual, purely semantic, in the deductive following-out of proofs of the existence of the possible. (See the account of analytic truth to come in 8.9.)" (T&T, 102). Note that this quotation is from the chapter on necessary truths in T&T. So I take it that what Armstrong is (tentatively) suggesting here is that truths in mathematics about what is necessary are analytic. 19 When Armstrong says that "a traditional view" is that analytic truths are "true solely in virtue of the meanings of the terms in which they are expressed," his wording seems to derive from the introduction to Ayer (1946). I think that few philosophers of mathematics today would defend Ayer's view in all its details. However, there are still philosophers who endorse views which resemble Ayer's position in important respects. See for example Rayo (2013). 20 Perhaps some philosophers will insist that words (or their meanings) are necessarily existing abstract objects. However, I take it that Armstrong would not take this line. As we've seen, Armstrong rejected necessary abstracta.
Armstrong on Truthmaking and Necessity (Part 2: Sketch for a Systematic Metaphysics)
By the time he wrote Sketch for a Systematic Metaphysics, Armstrong had decided to reject his earlier suggestion that mathematical truths are analytic, saying that such a view implies that mathematics is "too arbitrary or conventional" (SSM, 91). But he suggested an alternative approach, which we will now consider. We should begin by looking at Armstrong's metaphysics of law. Armstrong claimed that a law is a relation between properties (SSM, 35). Here is a toy example. Suppose that it is a law that being dehydrated causes headaches. For Armstrong, this means that a certain relation (viz. , the nomic relation) obtains between two properties (viz. the property is dehydrated, and the property has a headache). Armstrong would symbolize this as follows: 21 (is dehydrated, has a headache) Now Armstrong claimed that laws have "instantiations". Our law, for example, is instantiated whenever someone is dehydrated and, consequently, has a headache. A law, on this view, is itself a property. And we have already seen that Armstrong was happy to posit "immanent" properties. This led Armstrong to the view that every law is instantiated. He wrote: If laws are a species of universal, then, according to me at least, they have to be instantiated at some place and time. Well, we talk of laws being instantiated, do we not? (The points where the laws are 'operative'.) So this instantiation of laws is the instantiation of a special sort of universal. (Note that this would require every law to be somewhere instantiated in space-time.) […] One consequence of this is that there cannot be laws that are never instantiated. (SSM, 41) Now Armstrong suggested that it is certain mathematical and logical laws which make true the necessities of mathematics.
He began his discussion of this proposal by appealing to his Entailment Principle, arguing that truthmakers for the axioms of a mathematical theory must also be truthmakers for the theorems (SSM,90). This manoeuvre is suspect. While it is known that the theorems of orthodox mathematics are entailed by the axioms (that's what makes them theorems, after all) it is far from clear that the axioms entail* the theorems. 22 And it gets worse. To complete his account, Armstrong still needed to specify truthmakers for the axioms of our mathematical theories. To do this, he appealed to his theory of laws: We do, of course, have to recognize that introducing the Entailment Principle drives us back to consider the axioms from which mathematical systems are developed.
[…] I suggest that we should postulate laws in logic and mathematics (non-contradiction, excluded middle in logic, Peano's axioms for number, or whatever laws logicians and mathematicians wish to postulate). In the light of the nature of proof just argued for we might suggest that such laws might be all we needed to postulate in the way of an ontology for logical and mathematical entities. (SSM,(90)(91) It is not credible, as Armstrong suggests here, that the Peano axioms are "laws" in Armstrong's sense. For example, one of the Peano axioms states that the natural numbers are unending in the sense that every natural number has a successor. 23 For Armstrong, this statement may not be true when taken at face value. For Armstrong, as we've seen, the existence of very large natural numbers is doubtful, and it is at least possible that there is a largest natural number, which has no successor. To circumvent this point, Armstrong will presumably insist on a modal reinterpretation of the axiom. On this view, the axiom, properly interpreted, states that, necessarily, every model of the Peano axioms is unending.
The corresponding Armstrongian law would then have to be: 22 Suppose, for example, that Linnebo (2017) is correct and entailment* coincides with intuitionistic entailment. Then consider some statement which is provable classically but not intuitionistically from the prevailing axioms. (For example, might be ( ∨ ¬ ), where is some statement independent of the prevailing axioms.) Assuming, with Armstrong, that the inferences of classical logic are all truth-preserving, and the axioms of orthodox mathematics are true, we can conclude that is true. However, because it is not entailed* by the prevailing axioms, we cannot identify a truthmaker for it using the proposed method. 23 The "successor" of a natural number is the number that comes immediately after it, when the natural numbers are arranged in the customary fashion. So for example the successor of nineteen is twenty.
(is a model of Peano arithmetic, is unending)
But this is problematic, because Armstrong believed that all laws are instantiated in the physical world-and it is far from clear that this law is instantiated. It may be that the physical world is finite. However, every model of Peano arithmetic is infinite. And so it may be that there are no physical models of Peano arithmetic, in which case the above-mentioned law is uninstantiated. We might be able to avoid this problem by arguing on empirical grounds that there are infinitely many physical objects. For example, we might appeal to the common (though admittedly contested) assumption in physics that space is infinitely divisible. However, the problem that I have just described will reassert itself when we turn our attention from Peano arithmetic to other branches of mathematics which posit a greater number of entities. The most extreme case is set theory. Any model of second-order ZFC would have to have a truly vast domain, containing ℶ elements and more. There is no empirical reason to think that there exist that many physical objects. So we are left with the conclusion that the Armstrongian laws corresponding to the axioms of set theory are uninstantiated.
To avoid these problems, an Armstrongian would have to list a number of basic principles for mathematics which express laws that are instantiated in the physical world, and argue that they entail* the truths of mathematics. I don't know that this impossible, but it is far from obvious that it can be done. And even if it could be done, the problem of identifying truthmakers for facts about what is possible would remain.
An Alternative Approach
Let's review. Armstrong believed that mathematical entities are located within the physical world. For example, wherever there is a pair of things, there is the number two. However, Armstrong realized that the physical world is not large enough to accommodate all the entities posited by modern pure mathematics. So he adopted a modal interpretation of mathematics. For Armstrong, pure mathematics tells us not about what is, but about could be and must be. However, Armstrong believed that every truth has a truthmaker within the physical world, and so he was left with the unenviable task of identifying truthmakers for modal truths within the physical world. I have argued that he did not succeed. 24 In this final section, I would like to put forward an alternative approach-an approach which will, I hope, appeal to those impressed by Armstrong's metaphysical system. 25 Armstrong accepted a version of the methodological principle known as Occam's razor. He rejected mathematical Platonism largely for this reason. A "Platonic realm of numbers," he wrote, is an "ontological extravagance" (T&T, 100). 26 However, Armstrong did not use his razor to excise supervenient entities. Supervenient entities, he thought, are an "ontological free lunch". For example, he did not think that universalism in mereology is objectionably unparsimonious: Whatever supervenes or, as we can also say, is entailed or necessitated, is not something ontologically additional to the subvenient, or necessitating, entity or entities.
[…] The terminology of "nothing over and above" seems appropriate to the supervenient.
[…] If the supervenient is not something ontologically additional, then this gives charter to, by exacting a low price for, an almost entirely permissive mereology. Do the number 42 and the Murrumbidgee River form a mereological whole? […] The whole, if it exists, is certainly a strange and also an uninteresting object. But if it supervenes on its parts, and if as a consequence of supervening it is not something more than its parts, then there seems no objection to recognizing the whole. So in this essay permissive mereology, unrestricted mereological composition, is embraced. (1997,(12)(13) On an uncharitable interpretation of this passage, Armstrong's view was that if the existence of necessitates the existence of , then is "nothing over and above" . But this is hardly plausible. Perhaps God exists necessarily, but it would be grossly immodest for me to claim that God is nothing over and above me. Perhaps I could not have had different parents, in which case my existence necessitates theirs. But they would quite properly take exception to the suggestion that they are nothing ontologically additional to me. 27 24 It is worth noting in passing that Armstrong's theory of propositions was problematic in rather similar ways. On this point, see McDaniel (2005). 25 For a very different approach, see Read (2010). 26 Armstrong also had epistemological reasons for rejecting Platonism (SSM, 2). For lack of space, I do not discuss epistemology in this paper. 27 For a more detailed discussion of these points, see Schulte (2014). Cameron (2008) has suggested a more promising way of developing Armstrong's idea that supervenient entities are "free". To put it briefly, Cameron's proposal is as follows. Compare the following two propositions: ⟨ exists.⟩ (where is a marriage, between Ashni and Ben) ⟨ exists.⟩ (where is an electron) The former proposition is made true by certain patterns of human activityinvolving perhaps Ashni, Ben, a registrar, some pieces of paper, and some metal rings. Ashni and Ben's marriage is a derivative entity: its existence is explained by facts about things other than itself. The electron is not derivative. The electron's existence is not explained by facts about other things; itself is the only truthmaker for the proposition ⟨ exists⟩. 28 More generally, Cameron's proposal is this. When is fundamental, the only truthmaker for ⟨ exists⟩ is itself. When is derivative, ⟨ exists⟩ has a truthmaker other than itself. 29 Cameron adds that it is derivative entities in this sense that are an "ontological free lunch," to use Armstrong's phrase. In effect, Cameron replaces the familiar slogan "Do not multiply entities beyond necessity" with a variant: "Do not multiply fundamental entities beyond necessity." Since mereological compounds are non-fundamental, Cameron infers, mereological universalism is not objectionable on grounds of parsimony. 30 Cameron briefly suggests an application of this idea to impure set theory. He proposes that that an impure set is "nothing over and above" its elements, so there is no objection on grounds of parsimony to positing all those impure sets that can be built up from basic elements whose existence can already be established. On this view, there is no need to re-interpret set theory in a "possibilist" manner. We maintain that all the sets posited by set theorists really 28 Sharp-eyed readers will note that in this section I assume an explanatory conception of truthmaking, according to which, when is a truthmaker for , explains the truth of . For discussion, see Cameron (2018). 29 I have actually modified Cameron's proposal in a small way. Cameron's view is that when is derivative, is not a truthmaker for ⟨ exists⟩. I find this claim puzzling (How could fail to make true ⟨ exists⟩?) and since it is inessential to my argument, I omit it. 30 Suppose that and are fundamental objects, and that ( + ) is their mereological sum.
According to Cameron and collectively make true ⟨( + ) exists⟩. For Cameron, this proposition has no single truthmaker; rather there are some things which together make the proposition true. This is a subtlety of Cameron's view which I ignore in the main text, for simplicity. do exist, although they are not fundamental. Let's develop this Cameronian proposal in more detail. Why does the set {Jill, Joe} exist? I suggest that it exists because Jill exists, and because Joe exists-and that is all. Nothing more is needed. And so, I suggest, any truthmaker for ⟨Jill exists⟩ and ⟨Joe exists⟩ will also be a truthmaker for ⟨A exists⟩, where A is {Jill, Joe}. More generally: 1. If is a truthmaker for ⟨ exists⟩, for each in a non-empty set , then is a truthmaker for ⟨X exists⟩ also. 31 So much for propositions about the existence of sets. But a complete truthmaker theoretic account of the sets will also include an account of what the truthmakers are for other propositions, including propositions about the identity and distinctness of sets, and propositions about what is an element of what.
Let's start with identity. Suppose that someone asks us why Joe is identical to Joe-that is, we are asked why Joe is identical with himself. This is a very peculiar question. The best answer to it that I can come up with goes like this. For Joe to bear the identity relation to himself, it suffices that he exists. Self-identicality is not some additional characteristic that requires further explanation. Joe exists, and so he is self-identical. And that is that. If this is right, I suggest, any truthmaker for the proposition ⟨Joe exists⟩ must also be a truthmaker for ⟨Joe = Joe⟩. In general, any truthmaker for ⟨ exists⟩ will also be a truthmaker for ⟨ = ⟩. 32 Something similar is plausible in the case of non-identity. If, bizarrely, we are asked why it is that Jill is not identical with Joe -if we are asked why they are two people and not one-all we can say in reply is that to be non-identical 31 What about the empty set? One might be tempted to avoid the problem by denying that the empty set exists. This proposal, as Hazen (1991) argues, is less radical than it might first seem, and Armstrong did in some places express scepticism about the empty set (T&T, 114). However, given Armstrong's usual hostility towards philosophically motivated reforms to standard mathematical practice, I think it desirable, from an Armstrongian point of view, to preserve the empty set. So here is an alternative, inspired by Kit Fine's well-known discussion of zero-grounding (2012). Armstrong generally supposed that a truthmaker will always be a single thing. But we might want to allow that a proposition can be made true by two things acting in concert, or three things, or four things, or more. For example, we might say that , and collectively make true ⟨{a, b, c} exists⟩. Taking this line of thought still further, we could argue that in some unusual cases a proposition is made true by zero things; as we might put it, such propositions are trivially made true. On this view, we may say that ⟨∅ exists⟩ is trivially made true. 32 For more detailed discussion of the question of how truths of identity are to be explained, see A. Burgess (2012) and Shumener (2017).
the two Bidens need only exist. Jill exists. She is one person. Joe exists too. He is another. And that is all. There is nothing extra that Jill and Joe need to do or to be in order to be distinct-existing is enough. And so, I suggest, any truthmaker for ⟨Jill exists⟩ and ⟨Joe exists⟩ is a truthmaker also for ⟨Jill ≠ Joe⟩. More generally, if and exist and are distinct, any truthmaker for ⟨ exists⟩ and ⟨ exists⟩ must also be a truthmaker for ⟨ ≠ ⟩. I want to recommend a similar treatment of the relations of membership and non-membership. If we are asked why Joe is an element of his singleton, there is nothing we can say except that, for this to be so, it suffices that Joe and his singleton exist. No more is needed. And if we are asked why Joe is not an element of {Jill}, we can say only that it is enough that Joe and {Jill} exist. More generally, I suggest, if is an element of Y, then any truthmaker for ⟨x exists⟩ and ⟨Y exists⟩ is a truthmaker too for ⟨x ∈ Y⟩. And if is not an element of Y, though they both exist, any truthmaker for ⟨x exists⟩ and ⟨Y exists⟩ is a truthmaker too for ⟨x ∉ Y⟩.
Let me put all of this in a rather different way. Let's say that a relation is "strongly internal" if and only if the following condition is met: Necessarily, for any and , if bears to then (1) bears to at any world at which and both exist, and (2) at every such world, any truthmaker for ⟨ exists⟩ and ⟨ exists⟩ is also a truthmaker for ⟨ bears to ⟩. 33 If is a strongly internal relation and bears to , then no explanation for this is required, beyond whatever is needed to account for the fact that the relata exist. My proposal is that the relations of identity, non-identity, membership and non-membership are strongly internal in this particular sense. 34 In summary: 1. If is a truthmaker for ⟨ exists⟩, for each in a non-empty set , then is a truthmaker for ⟨X exists⟩ also. 33 Armstrong said that a relation is internal if "given just the terms of the relation, the relation between them is necessitated" (T&T, 9). That is, given any relation R, R is internal (in Armstrong's sense) just in case the following is necessary: For any and , if bears to , then at every world at which and exist, bears to . Clearly, any strongly internal relation is also internal in Armstrong's sense. The converse, however, is open to dispute. Suppose arguendo that God exists necessarily. Then the relation and are such that God exists is internal, in Armstrong's sense. But it is doubtful that this relation is strongly internal, for it is hardly plausible that any truthmaker for ⟨Joe exists⟩ and ⟨Jill exists⟩ mustalso be a truthmaker for ⟨God exists⟩. 34 Note that strongly internal relations need not be universals -they may be "second-rate" properties.
In saying that non-membership is strongly internal, I do not assert that it is a genuine universal.
2. If a relation is "strongly internal", then whenever bears to , any truthmaker for ⟨ exists⟩ and ⟨ exists⟩ is also a truthmaker for ⟨ bears to ⟩. 3. The relations of identity, non-identity, membership and nonmembership are strongly internal.
We can go further still, into the uncountable. Given our account, will be a truthmaker for ⟨S* exists⟩, where S* is the set of non-empty subsets of S . S* is an uncountable set. And will be a truthmaker for ⟨S** exists⟩, where S** is the set of non-empty subsets of S*-a set even larger than S*. And proceeding in this way, we can locate in the physical world truthmakers for propositions concerning sets at all levels of the vertiginous set-theoretic hierarchy, including sets of arbitrarily high cardinality.
And what of Armstrong's claim that all entities exist "somewhere, somewhen" (SSM, 15)? Well, some readers may find it edifying to insist that a set is located wherever its elements are. 35 On this view, you are co-located with your singleton, and its singleton, and its singleton, and so on ad infinitum. I offer no objection to this proposal. But I find it hard to see how to justify the claim that sets have spatial locations, and more importantly it seems to me that we need not endorse this claim to earn the title "naturalist." We insist that all fundamental objects are physical, and that all truths have physical truthmakers-and this is naturalism enough.
Back to the cardinal numbers. According to the current proposal, even if the fundamental objects are rather few, nevertheless the sets are fantastically numerous. This allows us to maintain Armstrong's original account of cardinal number without having to worry about the problem of size, and without recourse to possibilism. Given the current proposal, for example, ℶ is instantiated in the hierarchy of sets, even if there are only finitely many fundamental entities. If we add that it is not possible for there to be nothing, 36 we are left with the conclusion that the cardinal numbers exist necessarily.
Of course, a thorough truthmaker-theoretic account of mathematics would also cover functions, complex numbers, matrices, ordinal numbers, graphs, and all the other mathematical creatures. You will probably be relieved to hear that I don't intend to deal with all these topics now. It's time for a cup of tea, after all. But I hope that my discussion of sets and cardinals is sufficient to motivate cautious optimism about Armstrongian naturalism-despite the errors of detail that we have identified in Armstrong's discussions of the metaphysics of mathematics.* | 10,922.8 | 2022-11-18T00:00:00.000 | [
"Philosophy"
] |
Biermann battery as a source of astrophysical magnetic fields
: A large number of galaxies have large-scale magnetic fields which are usually measured by the Faraday rotation of radio waves. Their origin is usually connected with the dynamo mechanism which is based on differential rotation of the interstellar medium and alpha-effect characterizing the helicity of the small-scale motions. However, it is necessary to have initial magnetic field which cannot be generated by the dynamo. One of the possible mechanisms is connected with the Biermann battery which acts because of different masses of protons and electrons passing from the central object. They produce circular currents which induce the vertical magnetic field. As for this field we can obtain the integral equation which can be solved by simulated annealing method which is widely used in different branches of mathematics
Introduction
A wide range of galaxies have large-scale magnetic fields which have the typical strength of several microgauss (Arshakian et al. 2009). First observational confirmations of their existence were obtained several decades ago while studying cosmic rays (Bochkarev 2011). After that the magnetic fields have been studied using the synchrotron emission (Ginzburg 1959), and nowadays most of the research is done by the Faraday rotation mesurements of the polarization plane of the radiowaves. When such waves pass through the medium with regular magnetic field structures, the polarization plane rotates proportionally to the induction of the field and squared wavelength (Zeldovich et al. 1983). As for the Milky Way, most of the sources of the polarized can be associated with the pulsars. First research works were based on quite small amounts of the sources, but they established the basic features of the magnetic field in our Galaxy (Manchester 1973;Andreasyan and Makarov 1989;Han and Qiao 1994). Nowadays there are more than thousand pulsars (Andreasyan et al. 2016) which can be used to study the field. It is necessary to emphasize that the sources with large rotation measure (RM > 200 rad m −2 ) play the most important role for the magnetic field study (Andreasyan et al. 2020). It has been shown that the magnetic field of our Galaxy has so-called reversals which are connected with different directions of the field. As for another galaxies, there are large databases of the sources of different nature which allow us to study the magnetic field of more than one hundred objects (Opperman et al. 2012).
From the theoretical point of view, the magnetic field of galaxies is usually described by the so-called dynamo mechanism (Arshakian et al. 2009). It describes transition of the energy of the turbulent motions to the energy of the magnetic field (Sokoloff 2015). The basic drivers of the dynamo are connected with the differential rotation and alpha-effect. Differential rotation is based on changing angular velocity of the objects (which is smaller in the outer parts), produces the azimuthal magnetic field and makes it larger. Alpha-effect shows the helicity of the turbulent motions of the interstellar medium and describes the transition of the azimuthal magnetic field to the radial one. Both of these mechanisms compete with the turbulent diffusion which tries to destroy the large-scale structures of the magnetic field. If the dynamo-number (describing this effect) is large enough, we can say that the magnetic field will grow according to the exponential law (Arshakian et al. 2009). The rate of this growth can be calculated using eigenvalue problem (Mikhailov 2020) or numerically (Moss 1995;Mikhailov 2014). However, one of the most important problem is connected with the initial conditions. The dynamo mechanism should have some seed field and it cannot describe the field growth from the zero one. The initial magnetic field can be explained using the small-scale dynamo which is connected with the properties of the turbulent motions. Unfortunately even this mechanism requires seed fields. So, we should take some principally different approach, which is not connected with the dynamo action.
One of the most perspective explanations of the magnetic field was supposed by Biermann in the middle of the previous century (Biermann and Schluter 1951;Harrison 1970). It is based on the flows from the center of the galaxy. It contains protons and electrons which interact with the rotating medium. They have principally different masses, so the electrons move with the velocity which is close to the surrounding medium, and protons "lag behind" according to their large mass (Andreasyan 1996). This effect produces non-zero circular currents which induce vertical magnetic fields. First estimates of the field produced by the Biermann mechanism were quite moderate, so this effect was undeservedly forgotten. Nowadays it is widely recognized as a basic source of the interstellar magnetic field. So it is quite necessary to give not only the typical value of the field, but to study its radial structure which can be quite useful for the next stages of the field evolution (Arshakian et al. 2009). The field generated by the Biermann mechanism can be the initial field for the small-scale dynamo, and after that the generated small-scale field can be the source for the initial field for the step connected with large-scale dynamo (Beck et al. 1996).
The description of the magnetic field generated by the Biermann mechanism leads us to the Fredholm integral equation of the second kind (Mikhailov and Andreasyan 2021). The solution of this problem is connected with illposed problem which is widely known in mathematical physics. It can be solved using the Tikhonov regularization (Goncharsky et al. 1985;Tikhonov et al. 1995). After that the inversion of the regularized functional is used. The problem of inversion of matrices which should be done here is a quite "expensive" operation. However, nowadays the methods of machine learning are used for a lot of similar problems in different branches of mathematics and physics and are much more simple to be described and to be realized on computer (Shamin 2019).
Here we used the simulated annealing method which is widely used, mostly in problems of control sciences (van Laarhoven 1987;Granville et al. 1994;Shamin 2019). Here we have the integral equation which can be reduced to a problem of minimization of some functional. It can be done by the iteration algorythm which is connected with random perturbations of the approximations of the solution. One of the basic points is connected with using "bad" changes at some stages of the process.
Firstly in this paper we present the basic equations which describe the motion of particles and show the generation of the field by singular circular current. After that, we obtain the integral equation which describes the structure of the magnetic field (Mikhailov and Andreasyan 2021). Finally, we describe the simulated annealing method and the solution of the problem. We present the basic figures which show the results for the magnetic field radial structure and for the evolution of the iterative approximations.
The particles motion and equation for the magnetic field
If we describe the motion of the particle from the central part of the object, its motion can be described by the equaition (Mikhailov and Andreasyan 2021): where r is the radius vector of the particle, , v is the velocity, m is its mass, f is the total force which is connected with the gravitation, interaction with the medium and the pressure of the radiation, B is the magnetic field and q is the charge. We shall assume that the radial velocity V is constant, and the typical processes in azimuthal direction are much faster than in the radial one. So the azimuthal part of the motion equation is the most important. As for the force we shall have the approximation (Mikhailov and Andreasyan 2021): where R is the distance from the center, ω is the angular velocityof the particle and Ω is the angular velocity of the medium. As for the typical time of the interaction τ ∼ m 2 we have different values for protons and electrons. The equation for the angular velocity will be (Mikhailov and Andreasyan 2021): It can be solved as: where ωq = Ω 1 − qτVB mcRΩ 1 + 2Vτ/R .
The angular velocity will become close to ωq with typical timescale τ 1+2Vτ/R which is quite small for our problem.
So we can say that the particles will move with angular velocities (Mikhailov and Andreasyan 2021): where q = e for protons and q = −e for electrons. Each pair of particles is connected with a circular current (Mikhailov and Andreasyan 2021): corresponds to protons and corresponds to electrons. It can be shown that taking into account the typical values of the parameters of the particles, Each circular current generates the axisymmetric magnetic field at the distance r from the center (Mikhailov and Andreasyan 2021 where Φ is the function which is defined as: If we have the density of these particles n(R), each differentially thin layer [R, R + dR] × [−h, h] × [0, 2π] (in cylindrical coordinates, h is the half-thickness of the disc) will produce tthe field: The magnetic field B will make the particles of the main part of the medium move with angular velocity ± qB 2mqc . Each proton -electron pair will produce the currrent These currents will produce extra magnetic field The magnetic field will be: dB(r) = dB 1 (r) + dB 2 (r).
Integrating both parts we will obtain the equation from the inner radius R min to the outer one Rmax , we shall obtain (Mikhailov and Andreasyan 2021): If we assume that n(r) = The field is measured in units of 4n0R min heVτp Ω cRmax . The field can be found by minimizing the functional:
Simulated annealing method
The integral equation can be solved using the simulated annealing method, which is one of the most simple methods of machine learning and widely used in control sciences (van Laarhoven 1987). The previous works showed (Mikhailov and Andreasyan 2021) that the typical magnetic field can be approximated as: The zero approximation can be used taking A 0 = D 0 = 0, so B 0 (r) = 0.
After that we will take small random perturbations ∆A and ∆D constructing the next approximation: , we shall pass to the next iteraction. If U[B n+1 ] > U[Bn], we will return to An and Dn with probability However, with probability we should take A n+1 and D n+1 . As for the "temperature" Tn we have the law: T n+1 = 0.9Tn . Figure 1 shows the values of the functional and its evolution for different iteractions. In the ideal case it could become zero, but it comes to some minimal value. It is connected with the inaccuracy of the algebraic model of the magnetic field. Figure 2 describes the magnetic field for different λ. We can see that the magnetic field will enlarge for smaller values of λ.
Conclusion
In this paper, we have studied the process of the magnetic field generation using the Biermann mechanism. The structure of the field is described by the integral equation. It is solved using the simulated anneaing method which can be associated with simplest methods of artificial intelligence and machine learning. It is more fast and effective than the classical methods of solving integral equation. However, the results are nearly the same (Mikhailov and Andreasyan 2021).
The magnetic field generated by the Biermann battery mechanism is quite small, and depending on the type of the object, we can obtain the fields which have the typical magnitudes of 10 −27...17 G (Mikhailov and Andreasyan 2021). So, it is necessary to stress that if we take in as an initial magnetic field for the mean field dynamo, it won't be possible to have the values of 10 −6 G that are measured in observations. Also the magnetic field produced by this mechanism is oriented vertically and its projection to the eigenfunction of the mean field dynamo operator is close to zero. So it is quite useful to combine the mean field dynamo mechanism and the small-scale dynamo which is based on turbulent effects. The typical timescale for the small-scale dynamo in galaxies is about 10 7 years. So, if the field generated by Biermann battery mechanism will be the initial field for the turbulent dynamo, it will reach the equipartition value of 10 −6 G during times of less than 10 9 years (Beck et al. 1996). This field will have random orientation, but the number of turbulent cells will be finite and has the order of 10 4 . So the mean value of the magnetic field will be non-zero. According to simple statistical laws it will have the order of 10 −8 G. It can be the inititial field for the mean field dynamo, which has the typical timescale of 10 9 years and can describe the growth of the magnetic field, which will have the regular component of 10 −6 G during several Gyr.
This appoach can also be quite interesting to study the magnetic fields of another objects, such as accretion discs which surround black holes (Shakura and Sunyaev 1973) or can be associated with eruptive stars (Andreasyan et al. 2021). Previously it has been shown that the magnetohydrodynamical processes in such objects can be desribed by the mechanisms which are nearly the same as the ones for the galactic discs (Moss et al. 2016;Boneva et al. 2021). So we can apologize that the Biermann battery can be the source of the magnetic field and it can play even a more important role than for the galaxies. Of course, we should take into account different spatial lenghtscales. However, out model which uses the dimensionless variables can be simply adopted for such cases. | 3,180.8 | 2021-01-01T00:00:00.000 | [
"Physics"
] |
Nonequilibrium Schwinger-Keldysh formalism for density matrix states: analytic properties and implications in cosmology
Motivated by cosmological Hartle-Hawking and microcanonical density matrix prescriptions for the quantum state of the Universe we develop Schwinger-Keldysh in-in formalism for generic nonequilibrium dynamical systems with the initial density matrix. We build the generating functional of in-in Green's functions and expectation values for a generic density matrix of the Gaussian type and show that the requirement of particle interpretation selects a distinguished set of positive/negative frequency basis functions of the wave operator of the theory, which is determined by the density matrix parameters. Then we consider a special case of the density matrix determined by the Euclidean path integral of the theory, which in the cosmological context can be considered as a generalization of the no-boundary pure state to the case of the microcanonical ensemble, and show that in view of a special reflection symmetry its Wightman Green's functions satisfy Kubo-Martin-Schwinger periodicity conditions which hold despite the nonequilibrium nature of the physical setup. Rich analyticity structure in the complex plane of the time variable reveals the combined Euclidean-Lorentzian evolution of the theory, which depending on the properties of the initial density matrix can be interpreted as a decay of a classically forbidden quantum state.
The purpose of this paper is to construct the Schwinger-Keldysh in-in formalism [1,2] for expectation values and correlation functions in a rather generic nonequilibrium system with the initial state in the form of arXiv:2309.03687v1[hep-th] 7 Sep 2023 a special density matrix.This density matrix is itself assumed to be determined by the dynamical content of the system.The motivation for this construction comes from the scope of ideas of quantum cosmology suggesting that the initial state of the Universe should be prescribed not from some ad hoc and freely variable initial conditions like in a generic Cauchy problem, but rather intrinsically fixed by the field theory model of the Universe.The pioneering implementation of these ideas was the prescription of the Harle-Hawking no-boundary cosmological wavefunction [3,4], no-boundary connotation indicating the absence of the notion of the initial Cauchy (boundary) surface of spacetime.Such a pescription replaces the existence of this surface by the requirement of regularity of all fields at all spacetime points treated in the past as regular internal points of spacetime manifold.
Applied to a wide class of spatially closed cosmological models this prescription qualitatively leads to the picture of expanding Friedmann Universe with the Lorentzian signature spacetime nucleating from the domain of a Euclidean space with the topology of a 4-dimensional hemisphere, the Euclidean and Lorentzian metrics being smoothly matched by analytical continuation in the complex plane of time coordinate.This picture allows one to avoid initial singularity in the cosmological evolution and, in particular, serves as initial conditions for inflationary scenarios.This is because it implies a pure vacuum state of quantum matter perturbations on top of a quasi-exponentially expanding metric background, both the background and this vacuum state being generated by tunneling from the classically forbidden (underbarrier) state of the Universe, described by the Euclidean spacetime with the imaginary time.Correlation functions of quantum cosmological perturbations in this vacuum state have a good fit to nearly flat red-tilted primordial spectrum of the cosmic microwave background radiation (CMBR) [5,6] and other features of the observable large scale structure of the Universe [7].
Limitation of this no-boundary concept consists in the fact that it covers only the realm of pure quantum states.Moreover, it prescribes a particular quantum state which in the lowest order of the perturbation theory yields a special vacuum state.In fact, the idea of Hartle-Hawking no-boundary initial conditions came from the understanding that the vacuum state wavefunction Ψ [φ(x)] of a generic free fields model in flat spacetime can be built by the path integral over the field histories ϕ(τ, x) on a half-space interpolating between a given 3-dimensional configuration φ(x) on the boundary plane of τ = 0 and the vanishing value of these fields at the Euclidean time τ → −∞.Beyond perturbation theory, in the models with a bounded from below spectrum of their Hamiltonian this procedure yields the lowest energy eigenstate.Thus, the Hartle-Hawking no-boundary wavefunction is the generalization of this distinguished state to a special case of curved spatially closed spacetime, which can be formulated even though the notion of nontrivially conserved energy does not exist for such a situation.
A natural question arises how to generalize this picture to the physical setup with the density matrix replacing this distinguished pure state.The attempt to do this encounters the problem of constructing the set of physical states |ψ⟩ along with the set of their weights w ψ participating in the construction of the density matrix ρ = ψ w ψ |ψ⟩⟨ψ|.This problem looks unmanageable without additional assumptions, but the simplest possible assumption -universal microcanonical equipartition of all physical states -allows one to write down the density matrix in a closed form provided one has a complete set of equations which determine a full set of |ψ⟩.These are the Wheeler-DeWitt equations Ĥµ |ψ⟩ = 0 which are quantum Dirac constraints in gravity theory selecting the physical states [8], µ being the label enumerating the full set of Hamiltonian and diffeomorphism constraints, which includes also a continuous range of spatial coordinates.The density matrix becomes a formal operator projector on the subspace of these states, which can be written down as an operator delta functions the factor Z being a partition function which provides the normalization tr ρ = 1 [9].Important feature of this formal projector is that a detailed construction of the delta function of noncommuting operators Ĥµ (which form an open algebra of first class constraints) leads to the representation of this projector in terms of the Batalin-Fradkin-Vilkovisky or Faddeev-Popov path integral of quantum gravity [9,10] and makes it tractable within perturbation theory.
In contrast to the Hartle-Hawking prescription formulated exclusively in Euclidean spacetime this density matrix expression is built within unitary Lorentzian quantum gravity formalism [11].Euclidean quantum gravity, however, arises in this picture at the semiclssical level as a mathematical tool of perturbative loop expansion.The partition function Z of the density matrix (its normalization coefficient) should be determined by the above path integral over closed periodic histories, and the dominant semiclassical contribution comes from the saddle points -periodic solutions of classical equations of motion.The practice of applications to concrete cosmological models shows, however, that such solutions do not exist in spacetime with the Lorentzian signature, but can be constructed in Euclidean spacetime.The deformation of the integration contour in the complex plane of both dynamical variables and their time argument suggests that these Euclidean configurations can be taken as a ground for a dominant contribution of semiclassical expansion.This gives rise to the following definition of the Euclidean path integral density matrix.
Let the classical background have at least two turning points and describe the periodic (classically forbidden or underbarrier) motion between them in imaginary Lorentzian time (or real Euclidean time τ ).Then the two-point kernel ρ E (φ + , φ − ) = ⟨φ + | ρE |φ − ⟩ of the density matrix in question is defined by where S E [ϕ] is the Euclidean action of the field perturbations ϕ(τ ) on top of the given background, defined on the period of the Euclidean time, τ − ≤ τ ≤ τ + , the functional integration runs over field histories interpolating between their values φ ± -the arguments of the density matrix kernel.Z is the partition function given by the path integral over the periodic histories with the period providing the normalization tr ρE = 1.Hermiticity of this density matrix, which in view of its reality reduces to its symmetry ρ E (φ + , φ − ) = ρ E (φ − , φ + ), implies that the background solution is a bounce that has a reflection symmetry with respect to the middle turning point at τ++τ− 2 , and the turning points τ ± are in fact identified.Up to a normalization the expression (1.2) is the evolution operator of the Schroedinger equation in imaginary time, t = −iτ , with the quantum Hamiltonian ĤS (τ ) calculated on top of the non-stationary background.The Hamiltonian operator here is written down in the Schroedinger picture (which is indicated by the subscript S) and explicitly depends on the Euclidean time because of this non-stationarity, so that the evolution operator is the Dyson chronological τ -ordered exponent Because of the properties of the turning points (zero derivatives of the background field) the Euclidean background can be smoothly matched at τ ± with the classically allowed and real background solution of equations of motion parameterized by real Lorentzian time t.The evolution of quantum perturbations on this Lorentzian branch of the background is then driven by the unitary version of the t-ordered exponent (1.4) dt ĤS (t) (1.5) with the Hermitian time-dependent Hamiltonian which is evaluated on this Lorentzian background.In the cosmological context, when the spatial sections of spacetime of S 3 -topology are represented by circles of a variable scale factor, the graphical image of the combined Euclidean-Lorentzian evolution operator Û (T, 0)ρ E Û † (T, 0) is depicted on Fig. 1.It shows the Euclidean spacetime instanton with the topology R 1 × S 3 , R 1 = [τ − , τ + ], bounded at the turning points τ ± by two minimal surfaces Σ ± with a vanishing extrinsic curvature.This instanton represents the density matrix ρE and connects the Lorenzian spacetime branches.These branches correspond to the unitary and anti-unitary evolution from Σ ± in some finite interval of the Lorentzian time 0 ≤ t ≤ T . 1 The pictorial representation of the cosmological partition function Z in view of cancellation of unitary evolution factors, tr Û (T, 0)ρ E Û † (T, 0) = tr ρE = 1, contains only the Euclidean part of Fig. 1.It is represented by the closed cosmological instanton with the identified surfaces Σ + = Σ − and their 3-dimensional field configurations φ + = φ − (following from the identification of the arguments in tr ρE = dφ ρ E (φ, φ)).The origin of this instanton having a donut topology S 1 × S 3 is shown on Fig. 2.
The Euclidean space bridge incorporates the density matrix correlations between the fields on opposite Lorentzian branches, which only vanish for the density matrix of the pure state factorizable in the product of the wavefunction Ψ (φ + ) and its complex conjugated counterpart In the cosmological context this situation is depicted on Fig. 3 with two disconnected Euclidean-Lorentzian manifolds corresponding to these factors.Each of them corresponds to the Hartle-Hawking state, and the partition function is 1 Of course, the second Lorentzian branch could have been attached to the middle turning point of the total period, but this reflection asymmetric setup would correspond to the calculation of the in-out amplitude of underbarrier tunneling through the Euclidean domain, which is not the goal of this paper.based on the instanton with S 4 -topology of Fig. 4. The latter originates by glueing together two 4-dimensional hemispheres (discs D 4 ± ) along their common equatorial boundary.
So the goal of this paper is to construct the generating functional of expectation values and correlation functions of Heisenberg operators defined with respect to such a density matrix.Motivated by applications of quantum cosmology, this is essentially non-equilibrium physical setup, because the cosmological inflationary background is very non-stationary.Because of this it raises many questions which for the impure density matrix case go essentially beyond what is known about the Hartle-Hawking state.In particular, despite non-equilibrium nature this pure state selects a distinguished set of positive/negative frequency basis functions of the so-called Euclidean vacuum which for the de Sitter metric background turns out to be a special case of the de Sitter invariant vacuum [12][13][14].But for a density matrix case this distinguished choice is unknown and, moreover, its reasonable particle interpretation is not granted at all to be possible.
The notion of the Euclidean quantum gravity density matrix was pioneered in [15].Then, within the concept of the above type, it was built in a concrete inflationary model driven by the trace anomaly of Weyl invariant fields [16].Interpreted as a microcanonical density matrix of spatially closed cosmology [9] 2 it was later shown to be a very promising candidate for the initial quantum state of the Universe.In particular, it includes the existence of the quasi-thermal stage preceding the inflation [16], provides the origin of the Higgs-type or R 2 -type inflationary scenario [17] with subplanckian Hubble scale [18] and suppresses the contribution of Hartle-Hawking instantons to zero.Thus, this model allows one to circumvent the main difficulty of the Hartle-Hawking prescription -insufficient amount of inflation in the Hartle-Hawking ensemble of universes dominated by vanishingly small values of the effective cosmological constant.Elimination of this infrared catastrophe is, on the one hand, the quantum effect of the trace anomaly which flips the sign of the Euclidean effective action and sends it to +∞ [16,19].On the other hand, this is the hill-top nature of inflation starting from the maximum of inflaton potential rather than from its minimum [20].Finally, this model suggests that quantum origin of the Universe is the subplanckian phenomenon subject to semiclassical 1/Nperturbation theory in the number of numerous higherspin conformal fields [21].Thus, it sounds reliable even in the absence of currently unavailable non-perturbative methods of quantum gravity.
All these conclusions have been recently reviewed in [22] including certain preliminary results on the primordial CMBR spectra, which might even bear potentially observable thermal imprint of the pre-inflation stage of this model [23].However, detailed calculation of this spectrum and of higher order correlation functions requires the construction of the in-in Schwinger-Keldysh formalism extended to the setup with the initial density matrix of the above type.
Schwinger-Keldysh formalism [1,2] was intensively applied in quantum gravity and cosmology, and the number of publications on this subject is overwhelmingly high, so that we briefly mention only their minor part.Together with early applications [24][25][26] and the pioneering calculation of non-gaussianities in cosmological perturbation spectra [27] these works include the calculation of cosmological correlation functions [28,29], the results on cosmological singularity avoidance due to nonlocal effects [30], equivalence of the Euclidean and in-in formalisms in de Sitter QFT [31,32] and even the analysis of initial conditions within Schwinger-Keldysh formalism [33].Among recent results one should mention the development of a special effective field theory method based on analyticity and unitarity features of in-in formalism [34], its applications to four-point correlators in inflationary cosmology [35] and numerous conformal field theory and holography ramifications of Schwinger-Keldysh technique (see, for example [34,36] and references therein).However, the success of these works essentially relies on working with the model of spatially flat Universe -extension to spathe absence of the notion of conserved energy the role of this projection in closed cosmology is played by the delta function of Hamiltonian and momentum constraints -the projector on their conserved zero value.
tially closed cosmology with S 3 -sections is likely to invalidate many of these exact analytical results.At the same time, despite a general belief that inflation completely washes out details of initial quantum state, learning its imprint on the Universe requires to go beyond K = 0 FRW model.Moreover, recent analysis of the large scale Planck 2018 data, associated with the Hubble tension problem in modern precision cosmology [37], testifies at more that 99% confidence level in favor of the closed Universe preferring a positive curvature with K = +1 [38,39].Remarkably, the model of microcanonical initial conditions in early quantum cosmology of [9,16] exists only for K = 1.Therefore, robust observational evidence in favour of a positive spatial curvature serves as an additional motivation for this model and justifies our goals.
Having said enough about the motivation coming from cosmology for the density matrix modification of the inin formalism coming from cosmology, let us emphasize that the usefulness of this modification extends to a much wider area.Note that the expression (1.4) for the case of a static background is nothing but a well-known density matrix of the equilibrium canonical ensemble at the inverse temperature Its evolution in time gives rise to Matsubara technique of thermal Green's functions [40] and thermofield dynamics [41] which satisfies nontrivial analyticity properties in the complex plane of time including periodicity in the direction of imaginary axis -Kubo-Martin-Schwinger (KMS) condition [42,43].Many of these properties depend on the condition of equilibrium and associated with the conservation of energy.What we suggest here is the generalization of this technique to non-equilibrium situation with the Hamiltonian explicitly depending on time, which would be important in many areas of quantum field theory, high energy and condensed matter physics.To cover as wide scope of models and problems as possible we will try being maximally generic and use condensed notations applicable in generic dynamical systems.
In this paper we will basically consider the elements of the diagrammatic technique for the density matrix inin formalism.Therefore we restrict ourselves with the systems having a quadratic action on top of the nonstationary background subject to reflection symmetry discussed above.The one-loop preexponential factors of this formalism will be considered elswhere.
The paper is organized as follows.Section 2 contains the summary of notations and main results.It includes the formulation of in-in generating functional in the generic non-equilibrium system with a Gaussian type initial density matrix, the selection of distinguished set of positive/negative frequency basis functions of the wave operator, determined by the density matrix parameters, and application of this formalism to a special density matrix based on the Euclidean path integral, this case demonstrating special reflection symmetry, analyticity and KMS periodicity properties.Section 3 presents preliminary material of canonical quantization and the technique of boundary value problems and relevant Green's functions in a generic dynamical system.Section 4 contains detailed derivation of all the results.Section 5 is devoted to the demonstration of the formalism on concrete examples, while Section 6 contains a concluding discussion along with the prospects of future research.Several appendices give technical details of derivations and present certain nontrivial properties of Green's functions and Gaussian type density matrices.
Schwinger-Keldysh technique for models with density matrix state
We consider a generic system with the action S[ϕ] quadratic in dynamical variables ϕ = ϕ I (t), the index I including both the discrete tensor labels and in fieldtheoretical context also the spatial coordinates, Here A = A T ≡ A IJ , B ≡ B IJ and C = C T ≡ C IJ are the matrices acting in the vector space of ϕ J , the superscript T denoting the transposition, ϕ being a column and ϕ T -a row (the use of these canonical condensed notations including also spatial integration over contracted indices I will be discussed in much detail in Section 3).What is most important throughout the paper, all these matrices are generic functions of time A = A(t), B = B(t), C = C(t), reflecting non-equilibrium and nonstationary physical setup.This action will be considered as a quadratic part of the full nonlinear action in field perturbations ϕ on a certain background whose possible symmetries will be inherited by these coefficients as certain restrictions on their time dependence.These restrictions will be very important for the results of the paper and will be discussed below, but otherwise this time dependence is supposed to be rather generic.
The prime object of our interest will be the Schwinger-Keldysh generating functional of the in-in expectation values and correlation functions of Heisenberg operators in the physical state described by the initial density matrix ρ.This is the functional of two sources Here the trace is taken over the Hilbert space of the canonically quantized field φ and ÛJ (T, 0) is the operator of unitary evolution from t = 0 to t = T with the time dependent Hamiltonian corresponding to the action (2.1) and modified by the source term −J T (t)ϕ(t) ≡ −J I (t)ϕ I (t) with the source J T (t) = J I (t).In the (2.3) We will consider the class of density matrices whose kernel in the coordinate representation ⟨φ + | ρ |φ − ⟩ = ρ(φ + , φ − ) has the following Gaussian form -exponentiated quadratic and linear forms in φ ± , where we assembled φ ± into the two-component column multiplets (denoted by boldfaced letters) φ, did the same with the coefficients j of the linear form and introduced the 2 × 2 block-matrix Ω acting in the space of such two-component multiplets The blocks of this matrix R = R IJ , S = S IJ and their complex and Hermitian conjugated versions, S † ≡ S T * , should satisfy these transposition and conjugation properties in order to provide Hermiticity of the density matrix.The same concerns the "sources" j ± in the definition of j, j + = j * − ≡ j.Transposition operation above applies also to two-component objects, so that φ T = [φ T + φ T − ].Below we will denote 2 × 2 block matrices and relevant 2block component columns and rows by boldfaced letters.
Such a choice of the density matrix is motivated by the fact that for a block-diagonal Ω it reduces to a pure quasi-vacuum state, its "source" j allows one to induce nonzero mean value of the quantum field and by the differentiation with respect to j one can generate a much wider class of density matrices with "interaction" terms in the exponential.Normalizability of the density matrix of course implies that the real part of Ω should be a positive definite matrix.
The path integral representation for the coordinate kernels of the unitary evolution operator (2.2) allows one to rewrite the generating functional Z[J 1 , J 2 ] as the double path integral.For this purpose it is useful to introduce the two-component notations for the histories ϕ 1 (t) and ϕ 2 (t) as well as for their sources, In terms of these notations the generating functional reads where the total action is obviously with the actions S[ϕ 1,2 ] given by (2.1) in the integration range from t = 0 to t = T and the total integration measure over ϕ and φ (2.10) Here dφ and Dϕ denote respectively the integration measures over variables at a given moment of time and the integration measures Dϕ = t dϕ(t) over time histories subject to indicated boundary conditions.Calculation of this Gaussian path integral leads to the expression where we disregard the source-independent prefactor.The bilinear in the full set of sources exponential is the total action in the integrand of (2.8) at its saddle pointthe point of stationarity of the action with respect to variations of both the histories ϕ(t) and their boundary data φ at t = 0.The condition of stationarity generates the boundary value problem for this saddle point including the linear second order equation of motion for ϕ(t) and the full set of boundary conditions at t = 0 and t = T .This problem is posed and solved in Section 4 in terms of its Green's function G(t, t ′ ) subject to homogeneous version of these boundary conditions.The Green's function has a block-matrix form typical of Schwinger-Keldysh inin formalism composed of the Feynman G T (t, t ′ ), anti-Feynman GT(t, t ′ ) and off-diagonal Wightman Green's functions blocks (4.3), which are related to one another by the equalities ) and satisfy respectively inhomogeneous and homogeneous wave equations with the operator F -the Hessian of the action (2.1),
14)
The block-matrix Green's function G(t, t ′ ), as is usually done in boundary value problems, can be built in terms of the full set of basis functions v ± of this operator, satisfying the boundary conditions of the variational problem for the action in (2.8).This will explicitly be done in Section 4, but in view of the complexity of these boundary conditions intertwining the ϕ 1,2 -branches of the field space these basis functions do not have a clear particle interpretation, that is separation into positive and negative frequency parts.This difficulty is caused by the convolution of problems associated, on the one hand, with the non-equilibrium nature of a generic background (rather generic dependence of the operator coefficients A(t), B(t) and C(t) on time) and, on the other hand, with the in-in physical setup involving a nontrivial density matrix.
Despite these difficulties, there exists a distinguished set of basis functions for the wave operator which have a clear particle interpretation, and this is one of the main results of the paper.This set is related by Bogoliubov transformations to v ± (t) and is uniquely prescribed by the full set of complex conjugated positive and negative frequency basis functions of the operator (2.14) v(t) and v * (t) which satisfy the intial value problem at t = 0, where W is what we call the Wronskian operator which participates in the Wronskian relation for the operator F , which is valid for arbitrary two complex fields ϕ 1,2 (t), (2.17) and, moreover, serves as the definition of the conserved (not positive-definite) inner product in the space of solutions of the homogeneous wave equation, F ϕ 1,2 = 0, (2.18) We will call the boundary conditions (2.15) and associated with them Green's functions the Neumann ones 3 .
Important point of the definition (2.15) is that the frequency matrix ω (remember that in the generic setup this is a matrix ω IJ acting in the vector space of ϕ J ) is not directly contained in the blocks of the matrix (2.6), but follows from the requirement of the particle interpretation of the basis functions v(t).This requirement can be formulated as follows.One defines the creation-annihilation operators â † and â in terms of which the Heisenberg operator φ(t) is decomposed as the sum of positive-negative basis functions v(t) and v * (t), φ(t) = v(t) â + v * (t) â † .Then there exist non-anomalous and anomalous particle averages with respect to the density matrix, and the requirement of vanishing anomalous average κ = 0 allows one to assign the average ν the interpretation of the set of occupation numbers associated with ρ.This requirement serves as the equation for the frequency matrix ω which, as it is shown in Section 4, can be explicitly solved for a special case of the real matrix Ω.This solution reads and gives the expression for the occupation number matrix in terms of the single symmetric matrix σ after the orthogonal rotation by the orthogonal matrix κ, As shown in Appendix D, the existence of this particle interpretation with a positive definite matrix ν fully matches with conditions of normalizability, boundedness and positivity of the density matrix incorporating positive definiteness of matrices I ± σ and negative definiteness of σ.
With the normalization of these distinguished basis functions v to unity where A is the index enumerating the full set of basis functions, the blocks of the in-in Green's function (2.12) take the form Here the terms of the type v(t) v † (t ′ ) should be understood as the matrix products ) (one should bear in mind that the basis function v(t) = v I A (t) represents the square (but asymmetric) matrix whose upper indices label the field ϕ I components, whereas the subscript indices A enumerate the basis functions in their full linear independent set).Correspondingly, , etc.This form of the Green's functions is very familiar from thermofield dynamics for simple equilibrium condensed matter systems, when all the matrices of the above type become diagonal in the momentum space of field modes labeled by A = p, A = d 3 p/(2π) 3/2 and ν AB = ν p,p ′ = (exp(βω p ) − 1) −1 δ(p − p ′ ) represents expected occupation number for Bose-Einstein statistics at inverse temperature β (detailed consideration of this example is presented in Section 5).Remarkably, the occupation number picture generalizes to nonequilibrium systems of a very general type -the function of the single symmetric matrix in the parentheses of Eq. (2.21) can be diagonalized by extra orthogonal rotation (additional to that of κ), and its eigenvalues would serve as occupation numbers in the generic nonequilibrium state with the initial density matrix.
Euclidean density matrix
As discussed in Introduction, in quantum cosmology context the density matrix itself can be given in terms of the Euclidean path integral and thus dynamically determined by individual properties of the system including its action functional.So we consider the path integral expression for the Euclidean density matrix where integration runs over histories ϕ(τ ) in Euclidean time τ on the segment [τ − , τ + ], interpolating between the arguments of the density matrix φ ± .In what follows we will assume that τ − = 0 and τ + = β.The Euclidean action, supplied by the Euclidean source J E (τ ) probing the interior of the Euclidean spacetime, has a structure similar to the Lorentzian action and can be obtained from (2.1) by the replacement These functions of the Euclidean time are rather generic, except that they should not contradict the basic property of the density matrix (with the source J E switched off) -its Hermiticity.Sufficient conditions providing this property, (2.28) For real values of these coefficients these relations reduce to the reflection symmetry of the action and the whole formalism relative to inversions with respect to the center of the Euclidean segment [0, β].Here we consider this property as given, but it can be derived from the assumption that the quadratic Euclidean action is built on top of the Euclidean spacetime background -the bounce which solves full nonlinear equations of motion of the theory and represents the periodic (underbarrier) motion of the system between two turning points.One of these points is associated with the center of the above Euclidean segment τ++τ− 2 = β 2 , and the other one corresponds to the (identified) points of nucleation τ ± from the classically forbidden Euclidean regime to the Lorentzian regime, the latter being described by the two branches of the Schwinger-Keldysh formalism (labelled above by 1 and 2).For an equilibrium situation with constant coefficients (2.27) at the inverse temperature β = τ + − τ − this setup is even simpler and corresponds to the density matrix of the thermal canonical ensemble.
The Gaussian integration in (2.26) allows one to express the result in terms of the Green's function of the Hessian of the Euclidean action ( The resulting density matrix looks like the expression (2.4) amended by the quadratic form in the Euclidean source, with the special expressions for the matrix Ω E and the source j E .The matrix Ω E reads where we use the arrow to indicate the direction in which the Wronskian operator is acting on the corresponding first or second time argument of the Green's function, so that for the left action the following rule holds ϕ T (τ ) , etc.The column j E is given by the following integral (2.32) Using (2.11) in the generating functional with the Euclidean density matrix (2.30) one directly finds the total Schwinger-Keldysh generating functional with the full set of sources probing the two Lorentzian branches and the Euclidean branch of the inin formalism (2.34)
Reflection symmetry and analyticity properties
The obtained expression for Z[J , J E ] features a nontrivial mixup of the Neumann and Dirichlet Green's functions of different Lorentzian F and Euclidean F E wave operators, but it becomes essentially unified if we assume that the Lorentzian and Euclidean actions are related by the analytic continuation of the form where the Lorentzian and Euclidean histories are also related by the same continuation rule ϕ(t)| t=−iτ = ϕ E (τ ).This, in particular, implies that the coefficients of the operators F E and F are related by so that F | t=−iτ = −F E .The origin of these relations, especially in connection with reality condition for the coefficients of both Lorentzian and Euclidean operators at their respective real t and τ arguments can be traced back to the properties of the full nonlinear action which gives rise to its quadratic part on top of a special background solution of full equations of motion.It is assumed that the Euclidean background solution has a turning point at τ = 0 where all real field variables have zero τ -derivatives and can be smoothly continued to the imaginary axis of τ = it where they become again real functions of real t.This leads to the above continuation rule with real With this analytic continuation rule, the expression (2.34) for Z[J , J E ] can indeed be uniformly rewritten in terms of the Lorentzian, Euclidean and mixed Lorentzian-Euclidean Green's functions, all of them subject to one and the same set of Neumann type boundary conditions which select in the Lorentzian branch of the in-in formalism a distinguished set of positive and nega-tive frequency basis functions.This expression reads where the z-integration runs respectively over t or τ in the domain depending on which of these Lorentzian or Euclidean time variables is in the argument of the following block matrix Green's function G(z, z ′ ) and the corresponding source (2.38) Here the Euclidean and Lorentzian-Euclidean blocks of the total Green's function express in terms of the relevant Euclidean and Lorentzian-Euclidean Wightman functions In their turn these Green's functions, as one can see, are built according to one and the same universal pattern out of the full set of Lorentzian v(t) and v * (t) and Euclidean u ± (τ ) basis functions.All these functions are subject to Neumann boundary conditions (2.15) and For ω fixed by the above condition of particle interpretation, leading to the expressions (2.20)-(2.22), the Euclidean basis functions u ± have a remarkable property.They satisfy at opposite ends of the Euclidean segment τ ∓ the same boundary conditions4 If one smoothly continues the operator F E beyond the segment τ − ≤ τ ≤ τ + , then it becomes periodic with the period β (which is possible because τ ± are assumed to be the turning points of the background solution on top of which the Hessian of the nonlinear action of the theory is built).This means that the basis functions u ± of this operator become quasi-periodic -u ± (τ + β) expresses as a linear combination of the same basis functions u ± (τ ) (no mixing between u − and u + occurs in their monodromy matrix).As shown in Section 4, with the normalization u ± (0) = 1/ √ 2ω this quasi-periodicity property reads in terms of the occupation number matrix (2.21) (2.47) Together with the reflection symmetry relative to the middle point of the Euclidean time segment (2.28) the periodicity of the operator F E implies its reflection symmetry with respect to the point τ = 0 (2.48) Therefore, similarly to quasi-periodicity the basis functions u ± (±τ ) are also related by the analogue of the anti-diagonal monodromy matrix L, u The above relations introduce the analytic structure which allows one to express all basis and Green's functions on the Euclidean-Lorentzian domain C in terms of one analytic function V (z) of the complexified time variable z = t − iτ .This follows from the fact, mentioned above, that the Lorentzian wave operator F can be regarded as the analytic continuation of the Euclidean operator F E into the complex plane of time at the point z = 0, F ≡ F (t, d/dt) = −F E | τ =it .As a consequence its basis function v(t) in view of its boundary conditions and boundary conditions (2.46) for the Euclidean function u + (τ ) also turns out to be the analytic continuation of the latter, (2.50) Therefore, the operators F and −F E as well as the full set of their basis functions v(t) and u ± (τ ) can be represented respectively as the boundary values at the real and imaginary axes of the complex z-plane of the complex operator F C and the solution V (z) of its homogeneous wave equation, ) The function V (z) gives rise to basis functions as and thus can be used in (2.38) for the construction of all Green's functions of the Schwinger-Keldysh in-in formalism.Conversely V (z) can be obtained by analytic continuation of the single Euclidean function u + (τ ) from the imaginary axes z = −iτ , and in view of reality of u + (τ ) for real τ it has the property Important corollary of these analiticity properties is that in view of the monodromy relations for Euclidean basis functions (2.47) the Lorenzian basis functions become quasi-periodic in the imaginary time
55)
Due to inverse matrix factors of positive and negative basis functions here the Lorentzian Wightman functions G > (t, t ′ ) (given by the expression (2.25)) and which is nothing but Kubo-Martin-Schwinger condition [42,43].It is important that this condition is satisfied in the generic non-equilibrium system with the special Euclidean density matrix (2.26) even despite the fact that no notion of conserved energy can be formulated in such a physical setup.The tubular Riemann surface of complex time z = t − iτ whose main sheet is compactified in τ to the circle of curcumference β is shown on Fig. 5.The boundaries of the main sheet of this surface form two shores of the cut depicted by dashed line, along which two branches of Lorentzian evolution are running.This rich analytic structure of Euclidean-Lorentzian evolution suggests that the equivalence of the Euclidean and Lorentzian formalisms proven beyond tree level for interacting QFT on top of the de Sitter spacetime [31,32] might be extended to a generic reflection-symmetry background underlying our definition of the Euclidean density matrix.
PRELIMINARIES
To derive the aforementioned results we dwell here in more detail on the introduced above notations and develop a canonical formalism and quantization of the underlying theory.In particular, we pose rather generic initial value and boundary value problems for equations of motion and discuss the properties of the related Green's functions.
Condensed notations
The elements of the field space will be denoted as ϕ I (t), where the index I is, in fact, a multi-index, and contains both the dependence on spatial coordinates denoted as x and discrete spin-tensor labels i, I = (x, i).Thus, we can equivalently write the fields in the form, emphasizing its dependence on the spatial coordinates ϕ I (t) = ϕ i (t, x).
Assuming that equations of motion are of the second order in time derivatives one has the most general quadratic action of the theory of the form (2.1) where we explicitly specify the initial and final moments of time range t ± , Here dots denote the derivatives with respect to time t, and A, B and C are the time-dependent real bilinear forms in the space of fields.Moreover, A and C are assumed to be symmetric.The explicit action of these bilinear forms on the fields, e.g. for A reads ) where A ij (t, x, x ′ ) is the kernel of the operator.Thus, the first term in (3.1) has the following explicit structure The superscript T applied to the bilinear form denotes the functional matrix transposition operation which implies the transposition of discrete and spatial labels of the corresponding kernel, but does not touch the time variable Consequently, the second and the third terms in (3.1) are the same.However, we will keep them separate for symmetry reasons.
In local non-gauge theories the kernels of the above coefficients are represented by delta functions of spatial coordinates and their finite order derivatives.For local gauge theories treated within reduction to the physical sector in certain gauges these coefficients can become nonlocal in space, but locality in time derivatives within canonical quantization should be strictly observed.
The equations of motion, obtained by varying the action (3.1) with respect to ϕ have the form where the wave operator F , or the Hessian of the action (3.1), has already been defined above by Eq. (2.14).Another form of this operator, obtained by integration by parts and involving both left and right time derivatives, the direction of their action being indicated by arrows allows one to rewrite the quadratic action (3.1) in even more condensed form Here the Wronskian operator W is defined by (2.16) and the origin of the boundary term at t ± is the result of integration by parts, which is also associated with the Wronskian relation (2.17).
Canonical formalism
The Hamiltonian formalism of the theory with the action (3.1), which is the first step to the canonical quantization begins with the determination of the momentum π canonically conjugated to the field ϕ where L is the Lagrangian of the action (3.1).The corresponding Hamiltonian has the form (3.9) Together with the Poisson bracket {ϕ I , π J } = δ I J it defines the dynamics of the system.The Hamiltonian equations of motion read Transition to the Lagrangian formalism by expressing π in terms of ϕ and φ obviously leads to equations of motion (3.5) following from the variation of the action (3.1).
Let us denote the basis of the independent solutions to (3.5) as v ± I A (t), where the multi-index A enumerates the number of the particular solution and has the same range as the index I.The general solution in terms of basis functions reads and can be rewritten in shortened notations as Here α ±A constitute a set of constants, specifying particular initial conditions.Using (3.8), we find the corresponding solution for the momentum so, the evolution of phase space variables can be rewritten in the joint form as (3.14) Now, we can equip the space of initial conditions, consisting of α ± , with the Poisson bracket structure inherited from the Poisson brackets of ϕ and π.Substituting (3.14) into the left hand side of we have in condensed notations where I denotes the identity matrix.The identity above fixes the pairwise Poisson brackets of α ± .Let us denote the right hand side of this equality, playing the role of the Poisson bivector in the Darboux coordinates, as Introducing also the matrix D as inverse to the matrix of the pairwise Poisson brackets Thus, one can express the inverse of M(t) in terms of its transpose, namely (3.21) Thus, for ϕ 1,2 -solutions of (3.5) l.h.s.vanishes, so we have It is easy to see, that each element of (3.18) has the form (3.22) as above, where the role of solutions ϕ 1 , ϕ 2 is played by the basis functions v + , v − .Applying the matrix transposition operator to both sides of (3.16), we obtain that the matrix D is skew-symmetric, since P T = −P.In terms of the block elements of D this means that Moreover, using the fact that the coefficient matrices A, B, and C in (3.1) are real, we conclude that basis functions v + , v − can also be chosen to be real.Thus, the matrix D is real skew-symmetric, so there is a timeindependent linear transformation S, bringing it to the canonical form, i.e. S T DS = P. Without the loss of generality one set D = P by default 5 .However, for the reasons which will become clear soon (see equation (4.21) below), we will assume that D has the following more general form where In terms of the basis functions, the vanishing of the diagonal blocks of D implies that v + , v − are chosen such that This can always be done by an appropriate transformation of the basis functions, possibly mixing v + and v − .Consequently, the pairwise Poisson brackets of α + and α − take the form ) As noted above, one can go further, and set ∆ +− = −∆ T −+ = I.Now, let us modify the Hamiltonian by introducing time-dependent sources J ϕ , J π for the field and its conjugate momentum The modified equations of motion can be written as where the subscript J of ϕ, π emphasizes the presence of the sources in equations of motion.We will find a solution to modified equations of motion using the constant variation method.Namely, we start with the solution (3.14) to equations of motion with vanishing sources, but make the integration constants α + , α − in its definition time-dependent Then, we substitute the result to the modified e.o.m. and obtain where we exploit the fact that M(t) satisfies the system (3.10).Using the equality (3.20) for the inverse of the matrix M(t) and integrating the equation on α + (t) and α − (t) we obtain ) where α + 0 and α − 0 are integration constants.Substitution back to (3.30) gives the solution to the equations (3.29) where the initial conditions ϕ 0 (t), π 0 (t) are related to constants of integration by and represent the solution to homogeneous equation, i.e. for vanishing sources J ϕ and J π .Now, let us focus on the case of vanishing momentum source and also redefine the field source for the convenience The corresponding e.o.m. in the Lagrange form reads From (3.33), one obtains the explicit form of the solution for ϕ(t), which is where G R (t, t ′ ) is called the retarded Green's function and expressed through the top-left block of the matrix (3.38)The fact that ∆ ++ = ∆ −− = 0 is crucial in obtaining this simple expression for G R .From (3.37) we find that G R satisfies the equation and is uniquely determined by the condition The latter fact follows, in particular, from the fact that any two Green's functions of the same differential operator differ by the solution of the homogeneous equation.Once some Green's function, satisfying the condition (3.40) is found, a shift by a solution to homogeneous equation will violate this condition.Alternatively, G R can be defined via initial value problem The fact that solution (3.37) is expressed through the retarded Green's function means that ϕ(t) is subject to the following initial (rather than boundary) value problem (3.42)
The solution of Dirichlet and Neumann boundary value problems
The Green's functions, solving the boundary problems, can be obtained from the retarded Green's function by shifting it by the solution of the homogeneous equation (3.5).In particular, one constructs the so-called symmetric Green's function as It is symmetric under the simultaneous transposition and exchange of the time arguments, i.e.G T S (t, t ′ ) = G S (t ′ , t).Unlike the retarded Green's function it is defined nonuniquely and the concrete boundary conditions should be specified.These are in one-to-one correspondence to the boundary conditions, satisfied by the basis functions v + and v − at the higher and lower time limits t = t + and t = t − , respectively.
In particular, to solve the inhomogeneous equation (3.36) supplemented with the vanishing Dirichlet boundary conditions one can use the Dirichlet Green's function subject to the same boudary conditions so that the solution reads Similarly, in solving Neumann boundary problem one defines the corresponding Neumann Green's function demanding ) and obtains the solution as (3.49) Notably, the Dirichlet and Neumann Green's functions, which are subject to homogeneous boundary conditions, allows one to solve the modified boundary problems, namely with inhomogeneous boundary conditions.Namely, the solutions can be obtained as follow.First, we exploit the equality (3.21) and perform in it the substitutions ϕ 2 → ϕ(t ′ ), ϕ 1 → G(t ′ , t), where ϕ(t ′ ) solves (3.36) and G(t ′ , t) is some Green's function, solving F G(t ′ , t) = δ(t − t ′ ).Next, integrating both sides of the equality over t ′ from t − to t + , we obtain Now, suppose we are to solve (3.36) supplemented by inhomogeneous boundary conditions (in contrast to homogeneous ones (3.44)) for some constants φ + , φ − .Substituting these conditions to (3.50) together with Dirichlet Green's function G → G D , satisfying (3.45), we observe that the third line vanishes, so we get where we introduce the notation for the two-component row as the transposition of the newly introduced column W denotes the Wronskian operator (2.16) acting from the right on the second argument of G D (t, t ′ ) at the total boundary of the time domain at t ± (the sign taking into account the outward pointing time derivative in W ) -the notation used above in (2.31).The transposition law here, of course, takes into account the symmetry of Dirichlet Green's function, The quantity w(t) introduced above has the following important property.Namely, evaluating both sides of (3.52) at the boundary points t = t ± , and using (3.45) we observe that Similarly, one can consider inhomogeneous Neumann boundary conditions with some boundary sources j + and j − .Substitution of this condition and Neumann Green's function G → G N , satisfying (3.48), to (3.50) gives the solution to (3.36) with the boundary conditions above Here g T N (t) is the notation analogous to (3.53) -the row built in terms of the Neumann Green's function kernels with the second argument located at the total 2-point boundary of the time domain (points t − and t + ), (3.59)
The relation between Dirichlet and Neumann Green's functions
There is important explicit connection between Dirichlet and Neumann Green's functions, which can be derived in the following way.The idea is to consider the problem with homogeneous Neumann boundary conditions (3.47) as the Dirichlet problem with some nontrivial boundary values φ ± .Substituting the solution of this problem (3.52) into (3.47)one obtains a linear equation on φ ± , which can be solved as where the matrices ω and Ω read ) Substituting these φ ± back into (3.52)gives which implies, after comparing with (3.49), the following expression for the Neumann Green's functrion Here we use the notations (3.53)-(3.54)introduced above.Substituting t = t ± to the both sides of the equality and using (3.56), we get the equality that allows us to express the Dirichlet Green's function from (3.64) via the Neumann one as ), (3.66)where we use the notation (3.59) for the row g N (t) = [G N (t, t + ) G N (t, t − )] and its transpose.Using (3.56) once again, we can write down the expression for the block matrix of boundary values of the Neumann function g N at both ends of the time segment (double bar denoting the restriction of both arguments to t ± ) (3.67)
Canonical quantization
Before proceeding to the canonical quantization of the theory (3.1), whose Hamiltonian formalism was constructed in the previous subsection, let us make a more specific choice of basis functions, which is more convenient for the quantization purposes.We first choose the basis functions v ± (t) real, and such that the matrix D defined by (3.24) has a canonical form, D = P. Together with the reality of ϕ(t) this implies also the reality of the corresponding integration constants α ± .Next, we combine these basis functions and integration constants into the following complex conjugated pairs The equation (3.20) takes the form Evaluating at t = t − and substituting back to (3.72), one obtains evolving phase space variables in terms of the basis functions v(t), v * (t) and initial data, Now, we are ready perform the canonical quantization of the system under consideration, whose Hamiltonian form was obtained in the previous subsection.We will quantize it in the Heisenberg picture.Thus, we map the solutions of the Hamiltonian equations to the corresponding Heisenberg operators, i.e. ϕ(t), π(t) → φ(t), π(t), whereas the Poisson bracket is replaced by the commutator times the factor i, so that we obtain the equal-time quantum commutation relations [ φ(t), π(t)] = i Î, where Î is the identity operator in the Hilbert space.Thus, the Hamiltonian equations (3.10) are mapped to the corresponding Heisenberg equations, defining the evolution of the operators Here Ĥ(t) is the classical Hamiltonian (3.9) where the field and the momentum are replaced by the corresponding Heisenberg operators.Linearity of the system obviously implies that the classical Hamiltonian and the Heisenberg equations formally coincide and their solutions are in one-to-one correspondence.In particular, the relation (3.8) between the field ϕ and its conjugate momentum π is literally the same at classical and quantum levels π(t) = W φ(t). Formal coincidence and linearity of the Hamiltonian and Heisenberg equations allow one to obtain the solution of the latter ones from classical equations (3.75) Similarly, our quantization procedure implies that the integration constants α, α * are in one-to-one correspondence to the creation/annihilation operators â, â † .According to (3.72) the operators φ(t), π(t) are decomposed in the creation/annihilation operators as that can be inverted similar to (3.74) as The fact that â and â † are indeed Hermitian conjugate to each other immediately follows from the Hermicity of φ(t).Indeed, comparing φ(t) to it's conjugate we find the coincidence, for which the choice (3.68) of two complex conjugated basis functions is crucial.The commutation relations of the creation/annihilation operators are inherited from the Poisson brackets (3.71), namely Though the explicit solution to the Heisenberg equations (3.77) is obtained, we still have no the expression for the evolution operator in a closed form.The latter solves the Schroedinger equation in the form of the chronological ordering, dt ĤS (t) , (3.82) where ĤS (t) is the Hamiltonian in the Schroedinger picture, so that its time dependence is only due to timedependent coefficients A, B, and C. The operators φ, π in the Schroedinger picture are identified with the Heisenberg ones, evaluated at the initial time φ = φ(t − ), π = π(t − ). (3.83) In the presence of the source, H → H −J T ϕ, the solution (3.78) to the Heisenberg equation generalizes to Since the action (3.1) is quadratic in the field φ, the latter integral is Gaussian, so it can be calculated explicitly.We will do this in the next section with the use of the saddle point method.
Bogoliubov transformations
In the previous subsection we made the choice (3.68) of basis functions which implies a simple form of the commutation relations (3.81) for the creation/annihilation operators.It will be useful to study the transformations, preserving these commutation relations.For this purpose, let us define a new set of creation/annihilation operators b, b † as a linear combination of the initial ones, where U , V are referred to as the matrices of Bogoliubov transformations.Demanding that the commutation relations of the new cration/annihilation operators coincides with those of the initial ones (3.81), one obtains the equality Thus, the field operator φ(t) has two equivalent decompositions where the new set of the basis functions ṽ(t), ṽ * (t) is related to the initial one via the following relation or, in more explicit form v = ṽ U + ṽ * V * .
(3.94) Equality (3.90) leads to the following formula for the inverse matrix of the Bogoliubov transformation coefficients so that (3.94) can be inverted as Now, let us solve the inverse problem.Namely, suppose we have two sets of the basis functions v(t), v * (t) and ṽ(t), ṽ * (t), such that the commutation relation of the corresponding creation/annihilation operators are of the canonical form (3.81).The question is what are the Bogoliubov coefficients relating these two sets?To find the explicit form of the coefficients, let us introduce the following inner product in the space of solutions of the equation (3.5) This is constant if ϕ 1 , ϕ 2 solve (3.5) due to the Wronskian property (3.22), together with the fact that the operator F , defining equations of motion (and the Wronskian W ) is real.The inner product (3.97) is usually referred to as the Klein-Gordon type inner product.The choice (3.68) of the basis functions implies the following normalization with respect to this inner product and the same for ṽ(t), ṽ * (t).Projecting the equality (3.94) onto ṽ, and using the property , one obtains the explicit expressions for the Bogoliubov coefficients U = (ṽ, v), V = (ṽ, v * ). (3.99) If both the old and the new sets of basis functions satisfy the Neumann conditions with different frequency matrices ω and ω at the initial moment of time one can find the explicit expressions for U , V in terms of ω and ω.Let us first write down the normalization conditions (3.98) explicitly ) where all quantities are evaluated at t = t − .The same equations hold for ṽ(t) and ṽ * (t).The second equation implies that the matrices ω and ω are symmetric (in view of invertibility of the matrix v(t) at a generic moment of time), whereas the first equation allows one to fix the initial value of the basis functions as where ω re and ωre are the real parts of ω and ω, respectively.Using (3.99) with the inner product defined in (3.97) one finds the following expressions for Bogoliubov coefficients relating two sets of Neumann basis functions with different frequency matrices, where ω is given in terms of the positive frequency basis function whose integration gives the (unnormalized) solution For this normalization we have the following expression for the Fock states in terms of coherent state (3.114)
GENERATING FUNCTIONAL IN THE PATH INTEGRAL FORMALISM
We begin our derivation of the in-in Green's function generating functional for the theory, defined in the previous section, by the physical motivation and the definition of an arbitrary Gaussian initial state.After that, we derive the corresponding two-component Green's functions.As we will observe, there is an ambiguity in the definition of these Green's function, parameterized by a matrix, defining initial conditions for the modes, employed in the mode expansion of the field operators.There is no any a priory preferred choice, fixing this ambiguity.However, being motivated by the simple harmonic oscillator in a thermal state, we make a choice of the modes such that the resulting Green's function has the form and the properties, very close to that of the Green's functions for the equilibrium system in a thermal state.Further, we introduce the notion of the quasi-thermal state, which is a very particular case of the Gaussian state, in which the properties of the Green's functions become even more closer to those of the thermal ones, in particular, satisfying the Kubo-Martin-Schwinger (KMS) condition.
Gaussian states
Our goal is to find the explicit and useful form of the generating functional of in-in correlation functions where ÛJ are the evolution operators subject to equation (3.85) with different sources J 1 and −J 2 , whereas T and T denote chronological and anti-chronological ordering, respectively.The relation between (4.1) and the correlation functions (4.2) obviously follows from (3.86).
The basic elements are the two-point correlation functions, namely where G T , GT, and G < are Feynman, anti-Feynman and Wightman Green's functions, respectively.The density matrix ρ is assumed to be the Hermitian positive-definite operator of unit trace.Inserting the partition of unity in the coordinate representation to the definition (4.1) of the generating functional three times, and using the path integral representation (3.88) of the evolution operator, one obtains the following expression for the generating functional where the integration over ϕ 1,2 (t) runs with the indicated boundary conditions and we introduce the notation for the coordinate representation of the density matrix Now, we restrict ourselves with the Gaussian density matrices, i.e. those whose coordinate representation has the form of the Gaussian exponent where the matrix Ω, and the vector j play the role of the parameters of ρ, and normalization constant 1/Z is independent of φ.The Hermitian property of the density matrix, ⟨φ implies the following conditions on Ω and j or, in a more explicit block-matrix form Normalizability of ρ implies that the real part of the sum R + S is positive-definite.The case in which the matrix S is non-vanishing corresponds to the mixed states, i.e. such that ρ2 ̸ = ρ.The role of the linear term in the exponential in (4.5) is two-fold.Firstly, j defines nonvanishing mean value of the field operator.Secondly, it can also be used to introduce non-linearities to the density matrix, namely, by differentiating it with respect to j.The typical example of the (pure) Gaussian state is the vacuum state (3.104), i.e. ρ = |0⟩⟨0|, associated with some choice of the annihilation operator, for which R = ω * , S = 0, and j = 0. Another example of the pure Gaussian state is the coherent state (3.110) whose density matrix reads ρ = |α⟩⟨α|, and R = ω * , S = 0 again, but j = √ 2ω re α * .
In-in boundary value problem
Substituting the general Gaussian density matrix to (4.4), one obtains where we put the boundary points φ ± , φ ′ , appearing in (4.4) to the functional integration measure, and also omit the constant normalization factor of the density matrix.We will compute the integral (4.9) representing the generating functional, with the use of saddle point method.The latter turns out to be exact since the integral has the Gaussian form.First of all, we introduce the notations for the block-matrix operators acting on columns of fields and sources (2.7) introduced in Section 2, so that the sum of the actions for ϕ 1 and ϕ 2 in (4.9) can be rewritten in the joint form This allows us to treat the underlying equations of motion, Green's functions, etc. in exactly the same way as the original theory with the action (3.1), except that now the field content is doubled.In terms of the new notations the expression for the generating functional is given by Eqs.(2.8)-(2.10) of Section 2.
The saddle point equation obtained by varying the exponential of this double-field action (2.8) with respect to all fields including the boundary values at t = 0 and t = T reads Independent variation of the fields δϕ(t) in the interior of the time interval gives equations of motion whereas the variation of the boundary values δϕ(T ) and δϕ(0) = δφ supply these equations with the boundary conditions.They read as the following matrix relations where we took into account that in view of ϕ 1 (T ) = ϕ 2 (T ) the variation δϕ T (T ) = δϕ T 1 (T ) I I , and the boundary conditions at t = T reduce to the equality of both the fields and their time derivatives of both ϕ 1 and ϕ 2 .
To solve the boundary value problem above, we first find the Green's function subject to the homogeneous version of the above boundary conditions, i.e. those of vanishing j We can construct the Green's function G solving the problem above, out of the basis functions v ± .These basis function should solve the homogeneous equation, and satisfy the same boundary conditions as those of the Green's function, Applying the generic Green's function expression (3.43) to the case of the doubled field content, we obtain the Green's function G in terms of these basis functions
Neumann type basis functions and Green's function representation
However, we do not have the explicit form of basis functions v ± .We will construct v ± with the help of another set basis functions v, v * subject to much simpler boundary conditions Since W and ω are block-diagonal, the basis functions v, v * can be chosen block-diagonal too, namely With a real operator F the blocks of these matrices solve the equations F v(t) = 0 and F v * (t) = 0, subject to the complex conjugated boundary conditions Thus, v and v * are simply the basis functions for the single field ϕ + or ϕ − subject to the Neumann boundary conditions introduced above.We assume that ω is the symmetric matrix with a positive-definite real part.
The answer for the basis function v + in terms of v and v * can be easily constructed as while the calculation of v − requires more efforts.We will obtain the answer for v − with the use of Bogoliubov coefficients relating two sets of different Neumann basis functions (3.103) by treating v − as the negative frequency basis function complex conjugated to its positive frequency counterpart v * − satisfying at t = 0 the boundary condition (iW − Ω * ) v * − | t=0 = 0. Thus, in accordance with (3.103) the answer for v − reads where U , V are the corresponding Bogoliubov coefficients Here we assume the normalization v(0 √ 2Ω re , and denote the real parts of ω and Ω respectively as ω re and Ω re . Finally, let us consider the details of the Green's function G(t, t ′ ) defined by (4.21) for a particular form of v ± we have just built.Its matrix ∆ −+ given by (4.22) reads Next, let us consider separately the first term in (4.21).
After the calculation presented in Appendix B one obtains for it the following form ) where we introduce the following symmetric matrix Recalling that the second term of the expression (4.21) can be obtained from the first one by the simultaneous transposition and exchange of time arguments, and observing that the second term in (4.31) is symmetric under this transformation, we find that two theta functions sum up to identity, so that the final expression for the Green's function reads where G 0 is defined as and interpreted as the Green function, corresponding to the vacuum state, having the density matrix ρ0 = |0⟩⟨0|, associated with the basis functions v(t), v * (t).Indeed, from (3.109) one observes that the matrix Ω, defining the vacuum density matrix ρ0 , coincides with ω * , i.e.
Ω = ω * .In this case, ν vanishes due to its definition (4.32), so from (4.33) we find that G = G 0 .Substituting the generating functional obtained to (4.2), we observe that for vanishing j the block-matrix components of G are composed of the Feynman, anti-Feynman and Wightman Green's functions (4.3), namely where G > (t, t ′ ) ≡ G T < (t ′ , t), and the explicit form of the block components can be read off form (4.33).Now we have to find the solution ϕ(t) of the boundary value problem (4.13)- (4.15) in order to substitute it into the exponential of (2.8).The only inhomogeneous boundary conditions in this problem are the Neumann conditions (4.14), so that the solution is given by the doublefield version of (3.58) with the substitutions j + → 0 (remember that there is no j + at the point t = T ) and j − → −j.Thus it reads Substituting it to the exponential of (2.8) then gives Eq. (2.11) advocated in Section 2.
where all time integrations run from t = 0 to t = T .
Here the restriction of G(t, t ′ ) to G(t, 0) does not lead to essential simplification whereas G(0, 0) has, as shown in Appendix C, the following explicit and simple form in terms of the parameters of the density matrix where the "ratio" of matrices I +X and Ω re is unambiguous because these matrices are commuting in view of the special form of Ω subject to the relation XΩX = Ω * .
Keldysh rotation
For further convenience it is useful to perform the change of the basis in the doubled field space ϕ + , ϕ − and introduce the so-called classical and quantum fields ϕ c and ϕ q [44,45], This transformation is called Keldysh rotation.In the new basis, the Green's function G takes the form Here G R and G A are the retarded and advanced Green's functions, respectively, having the following operator form They are consistent with the classical definition (3.38), in particular, because of independence of the commutator average of the state ρ.The block G K is called Keldysh Green's function and contains the information about the state.In view of operator averages (4.3) it expresses as the mean value of the anti-commutator of fields and, due to (4.40), explicitly reads in terms basis functions as
. Special choice of basis functions and particle interpretation
Thus far, the matrix ω, which defines the Neumann boundary conditions for the basis functions v, v * , is not fixed except the requirement of symmetry under transposition and positive definiteness of its real part.In this section we make a convenient choice of ω which leads to the expressions for the Green's functions admitting particle interpretation with well-defined notion of average occupation number.
For this purpose, it is useful to rewrite the Keldysh Green's function in terms of non-anomalous and anomalous particle averages so that from (4.43) Note that the matrix κ is symmetric, whereas ν is Hermitian.Comparing with (4.43) we find the connection between particle averages and the matrix ν Thus, we see that block-diagonal components of ν are responsible for anomalous averages.To ascribe the particle interpretation to the creation/annihilation operators, we will try to choose the matrix ω, defining the corresponding basis functions v(t) and v * (t) so that the diagonal blocks of ν, defining the anomalous averages κ, vanish.Moreover, this choice will simplify the expressions for the Green's functions, since they contain the terms, containing κ.For example, with a nonzero κ the Wightman function reads To make the matrix ν block off-diagonal consider the expression (4.32) and note that the only block diagonal contribution is contained in the identity matrix I and, possibly, in the term involving (ω + Ω) −1 .Thus, we want to choose ω such that block-diagonal contribution of the latter exactly cancels those of I. Using the block matrix inversion formula 6 we have the condition of the vanishing block-diagonal part of ν, (4.48) We will focus on the case in which R and S are real.The formalism described below can be easily extended to the complex R, but it seems that there is no straightforward extension to general complex (Hermitian) S. Introducing the dimensionless quantities the equation (4.48) can be rewritten as r + I − s(r + I) −1 s = 2 and further simplified by introducing the new variable s = (r + I) −1/2 s (r + I) −1/2 and solving for s, so that it takes the following form This is implicit equation on ω, due to the above definition of r and s.Its explicit form reads which can be solved in the form advocated in Section 2 6 The useful form of the block matrix inversion formula is Note that the assumption of positive definiteness of ω implies that I − σ 2 = (I − σ)(I + σ) is positive definite.Recalling that R + S = R 1/2 (I + σ)R 1/2 should be positive definite for normalizability of the density matrix, it is easy to see that I − σ = R −1/2 (R − S)R −1/2 , or equivalently R − S should be positive definite too.Then, the substitution of the obtained expression for ω to (4.32) gives the desired block-diagonal matrix form of ν advocated in Section 2 where the matrix κ introduced above is orthogonal.Therefore, as a consequence of positive definiteness of I + σ and I − σ, the matrix ν is necessarily real.As shown in Appendix D, for the density matrix to be positive definite, the matrix σ should be negative definite, so ν is positive definite.Substituting it to (4.33), one immediately obtains simple expressions for the Green's functions.In particular, for Wightmann and Feynman functions one has while the others can be expressed through them in a straightforward way.
It will be useful to express Ω in terms of ν.Disentangling Ω from (4.32), and then using the explicit form (4.53) of ν, corresponding to the special choice of Neumann basis functions with (4.52), we obtain the following expression Now, let us focus on the particular Gaussian state, which is obtained from the Euclidean path integral, namely Here S E is the quadratic action of Euclidean field theory within time limits τ ± which we will chose to be τ + = β and τ − = 0, , (4.59) (4.60) The partition function Z in the normalization factor is such that tr ρ E = 1 for vanishing source J E = 0. Hermiticity of the density matrix implies the following (sufficient) condition on the coefficient matrices A E , B E , and C E as the functions of τ which are not necessary real.Nevertheless, we restrict ourselves to the real case below.The source J E is included in the path integral to be able to introduce nonlinear terms of the Euclidean action, leading to the non-Gaussinities of the resulting density matrix.We take the path integral (4.58) over the Euclidean fields ϕ by using the saddle point method.The boundary conditions of the integral fix the endpoints ϕ(β) = φ + , ϕ(0) = φ − , so we have the boundary problem with the Dirichlet boundary conditions Using the Dirichlet Green's function G D for vanishing boundary conditions and substituting to (3.52), one expresses the solution of (4.62) as follows where we introduce the notations, similarly to those of the Lorentzian context (3.53)-(3.54),for the row w T E (τ ) obtained by the transposition of the column w E (τ ) in where we disregard the source independent prefactor, all τ -integrations run from 0 to β, whereas the matrix Ω and the source j, introduced in (4.5) take the following particular form Now, one can substitute the density matrix (4.66), defined by the parameters (4.67) to the general expression for the generating functional (4.37).This leads to Note that the kernel of the third integral here is the periodic Euclidean Green's function 70) corresponding to the fact that with the Lorentzian sources switched off the functional Z[0, J E ] represents the Euclidean path integral over periodic fields ϕ(τ ) on the time interval with the identified boundary points τ ± .The expression for this Green's function seemingly dependent via G(0, 0) on Lorentzian objects is in fact independent of them.This property is based on the relation (4.38) and derived in Appendix C.
Analytic continuation and KMS condition
The further transformation of the generating functional, which allows one to reveal its new analyticity properties, can be performed due to two assumptions.The first assumption is that the Euclidean action (4. where g N (τ ) is the Euclidean version of the definition (3.59) for the Neumann Green's function.
To proceed further we have to derive several important properties of Euclidean Neumann Green's function which is the part of (4.73), specific to the choice (4.52) of ω.In terms of the Euclidean basis functions it reads as Here u + , u − are the basis functions obeying Neumann boundary conditions and, as usual, where we use the explicit form (4.53) of ν, corresponding to the particular choice of basis functions described in the previous subsection.Equating these two expressions for G N ∥, with due regard to the structure of ∆ N +− in (4.76), we find the two sets of equalities.The first set follows from the diagonal blocks where the basis functions u + , u − are evaluated either at τ = 0 or τ = β.Similarly, from the off-diagonal blocks of (4.78), one gets the formulas, relating the boundary values of the basis functions It is useful to continue the Euclidean equations of motion beyond the interval 0 < τ < β with the period β (which is again possible because τ = 0 and τ = β) are the turning points), Together with (4.61) it also implies
.83)
Once the functions u ± (τ ) satisfy the same homogeneous boundary conditions for both τ = 0 and τ = β (cf.(4.75) and (4.79)), being translated by the period they can only differ by the multiplication with some nonsingular matrices L ± , u ± (τ +β) = u ± (τ )L ± .From (4.81) we obtain their explicit form With the normalization this monodromy simplifies to then G E (τ, τ ′ ) can be expressed as .91) and the Lorenzian Wighmann Green's function (4.56) is an analytic continuation of G > E (τ ).Now, it is time to connect the Euclidean basis functions u ± and the Lorentzian ones v, v * .Specifically, let us show that both sets of functions can be obtained from a single function V (z) of the complex time z = t − iτ , obeying complexified equations of motion (3.5) .93) which reduces to those for v or u + after the substitution z = t or z = −iτ , respectively.Supplementing the latter condition with the normalization one finds i.e. v and u + are analytic continuation of each other.Similarly, complex conjugation of (4.93) and the same assumptions of coefficient functions reality and its reflection symmetry, we find that V * obeys the following boundary condition so that v * and u + can be obtained from V * as
.97)
Thus, assuming that the complexified basis function V (z), z = t − iτ is analytic on 0 ≤ t ≤ T , 0 ≤ τ < β, we have the following transformation law of the basis functions Substituting to (4.56), one has the following condition on Wightmann Green's function which is nothing but KMS condition advocated in Section 4.7.
Harmonic oscillator
In this section we consider harmonic oscillator as the simplest instructive example, which demonstrates the main concepts and quantities, introduced above, together with the convenience of the special choice of the basis functions v, v * .The corresponding action reads where ϕ is one-component field, defining the coordinate of oscillator, and ω 0 is its frequency.We will consider the system in the state, defined by the Euclidean path integral (4.58),where the Euclidean action is an analytic continuation (4.71) of the Lorentzian one Note that for J E = 0 density matrix (4.58) coincides with the thermal density matrix of the inverse temperature β.
The corresponding differential operator defining the Euclidean equation of motion F E ϕ E = 0 and the Wronskian read To exploit the answer (4.66), one should first calculate the Dirichlet Green's function, which can be constructed out of corresponding basis functions u D ± (τ ) satisfying These basis function can be chosen as so that Dirichlet Green's function has the following form Substituting the Green's function obtained to (4.67), one finds the explicit form of the density matrix constituents where we assume ω to be real for the simplicity.
This basis inherits the properties of y ± (τ ) under translation by period, reflection and complex conjugation.In particular u ± (τ + β) = e ∓βε u ± (τ ). (5.23) Comparing with (2.47) one concludes that the parameter ε is connected to ν as ν = 1 e βε − 1 . (5.24) The basis functions u ± (τ ) have significantly different frequency properties depending on whether ε is real or imaginary.Thus, real ε implies where ω is a real number, which coincides with those defined in (2.20), as will be described below.In contrast, imaginary ε leads to the property (u ± (τ )) * = u ± (−τ ), so that the fraction is imaginary 7 , and one can write where the number ω ′ = iω is real.Let us calculate the density matrix (4.58) and examine its properties.To use the answer (4.66), one should first construct the Dirichlet Green's function.The corresponding basis functions u D ± (τ ) obeying u D − (0) = u D + (β) = 0 can be constructed as linear combinations of u ± (τ ).Namely, one defines u D − (τ ) as so that u D − (0) = 0 due to u − (τ ) = u + (−τ ).Due to reflection symmetry of (5.16) one can obtain u The corresponding Wronskian of u D + and u D − reads where we use the relations (5.27)-(5.28) between Dirichlet basis functions and u ± (τ ), and its derivatives at the boundary points (5.30) 7 In deriving this property we use that Substitution of the corresponding Dirichlet Green's function to (4.67) gives where ω is defined in (5.25).Note that for real ε this coincides with (4.57), with (5.24) substituted.For imaginary ε we express it as ε = iq, so Ω has the form where ω ′ is defined in (5.26).Following Appendix D, let us examine the properties of the underlying density matrix, defined by the obtained Ω.For real ε we have R = ω coth βε and S = −ω/ sinh βε, so that R, R + S and R − S have the same sign, and we conclude that the density matrix is bounded, normalizable and positive-definite for ω > 0.
If it is the case, σ ≡ S/R = −1/ cosh βε, so the definition (5.25) is consistent with (2.20), and particle interpretation is allowed.In contrast, for imaginary ε we have R = ω ′ cot βq, S = −ω ′ / sin βq, so R + S and R − S have different signs, so even if the density matrix is normalizable, the particle interpretation is not available.
The case of a pure state: vacuum no-boundary wavefunction
As we have shown above, the Euclidean density matrix prescription in a rather nontrivial way suggests a distinguished choice of the particle interpretation.In context of the pure Hartle-Hawking state this fact is well known and takes place in a much simpler way.Let us briefly discuss this here along with a general demonstration how the transition from a mixed state to the pure one proceeds via the change of spacetime topology of the underlying Euclidean instanton from Fig. 1 to Fig. 3.
The no-boundary state defined by the path integral over the fields on the Euclidean "hemisphere" D 4 + of Fig. 3 (and its reflection dual on D 4 − considered as a factor in the factorizable pure density matrix of Fig. 3) is the vacuum wavefunction (3.109) with the real frequency (3.107), ω = [iW v(t)][v(t)] −1 | t=0 .The relevants positivefrequency basis function v(t), similarly to (2.50), can be regarded as the analytic continuation of a special Euclidean basis function u(τ ), v(t) = u(τ + + it).This basis function is selected by the requirement that it is regular everywhere inside D 4 + , including its pole which we label by τ = 0 [12,48].
To show this one should repeat the calculation of Section 4. where the second equality follows from the analytic continuation rule v(t) = u(τ + + it).Thus, the Hartle-Hawking no-boundary wavefunction of the linearized field modes is the vacuum of particles uniquely defined by a particular choice of positive-frequency basis functions v(t) which in their turn are the analytic continuation of the regular Euclidean basis functions u(τ ), v(t) = u(τ + +it). 9This is a well-known fact [12,48] which in the case of de Sitter cosmology corresponds to the Euclidean de Sitter invariant vacuum [13,14].
It is known that vacuum in-in formalism in equilibrium models can be reached by taking the zero temperature limit β → ∞.It is not quite clear how this limit can be obtained in generic non-equilibrium situations, but it is likely that the transition from mixed Euclidean density matrix to a pure state is always associated with ripping the Euclidean domain into two disjoint manifolds D 4 + and D 4 − depicted in Fig. 3. To show this consider generic situation of the mixed state with the Euclidean density matrix of Fig. 1.This density matrix has a Gaussian form (2.4)-(2.6)with the matrix Ω given by Eq. (2.31) with the Dirichlet Green's function which can be represented in terms of two sets of Dirichlet basis functions u D ± (τ ), u D ± (τ ± ) = 0, (5.35) 8 The point τ = 0 is an internal regular point of a smooth manifold D 4 + , so that this point with τ treated as a radial coordinate turns out to be a regular singularity of the equation F E ϕ(τ ) = 0. Its two linearly independent solutions u ∓ (τ ) have the asymptotic behavior u ∓ ∝ τ µ ∓ with µ − > 0 > µ + , so that only u − (τ ) ≡ u(τ ) is the regular one, while the contribution of the singular u + (τ ) → ∞, τ → 0, should be discarded from the solution ϕ(τ ) [48]. 9 The set u(τ ) is of course defined only up to a linear transformation with some constant matrix L, u(τ ) → u(τ )L, v(t) → v(t)L, but this Bogoliubov transformation does not mix frequencies and therfore does not change particle interpretation.
Now consider the case of a pure state, when the density matrix factorizes into the product of two wavefunctions, or the situation of Ω +− ≡ S = 0.This off-diagonal block of Ω reads as where we used the fact that in view of boundary conditions on u D ± (τ ).Therefore, the requirement of S = 0 implies singularity of u D + (τ − ) which is impossible, because the Green's function G D (τ, τ ′ ) can have a singularity only at the coincidence point of its arguments τ = τ ′ .This means that no Dirichlet Green's function on a smooth connected Euclidean manifold of the topology [τ − , τ + ]×S 3 can generate the density matrix of a pure state.The only remaining option is ripping the bridge between Σ + and Σ − into the union of two disjoint parts D 4 ± by shrinking the middle time slice at τ ≡ τ++τ− 2 to a point.In context of the cosmological model driven by the set of Weyl invariant quantum fields [9,16,22] this option also matches with the interpretation of zero temperature limit β → ∞, because the inverse temperature of the gas of conformal particles in this model is given by the instanton period in units of the conformal time β = 2 τ+ τ dτ /a(τ ) → ∞, which diverges because the cosmological scale factor (the size of the spatial S 3 -section) a(τ ) → 0 at τ → τ .
DISCUSSION AND CONCLUSIONS
Generality of the above formalism allows one to apply it to a wide scope of problems ranging from condensed matter physics to quantum gravity and cosmology.Our goal in future work will be its use in the calculation of the primordial CMB spectrum of cosmological perturbations in the model of microcanonical initial conditions for inflationary cosmology [9,16,20], which was briefly discussed as a motivation for this research.Quasi-thermal nature of this setup was associated in these papers with the fact that the model was based on local Weyl invariant (conformal) matter which, on the one hand, generates the Friedmann background providing the necessary reflection symmetry and, on the other hand, turns out to be effectively in equilibrium, because in the comoving frame it describes a static situation.
Our results show, however, that thermal properties, including particle interpretation with the distinguished positive/negative frequency decomposition, are valid in much more general case.Specifically, the corresponding frequency matrix ω in the initial conditions problem for basis functions (2.15) is shown to be determined by the parameters of Gaussian type density matrix (2.20), and the occupation number matrix ν reads as (2.21)- (2.22).In this setup, the Euclidean density matrix, which incorporates the reflection symmetry property guaranteed by (4.61), plays the role of the particular case.If in addition the Lorentzian action is related to the Euclidean action via the analytic continuation at the turning points of the bounce background (which, of course, respects its reflection symmetry), important analytic properties of correlation functions, including the KMS condition, begin to hold.These are the main results of the paper.They allow one to derive the full set of Lorentzian domain, Euclidean domain and mixed, Lorentzian-Euclidean, Green's functions of the in-in formalism and reveal its rich analytic structure.In particular, the results of Section 4.2 significantly extend those of [49], where the nonequilibrium evolution of Gaussian type density matrices was examined.The discussion of simple application examples of Section 5 shows the relation of the obtained formalism to the stability properties of dynamical systems in Floquet theory and the theory of Bloch functions.These properties, in their turn, are related to the eigenmode properties of the wave operator F E subject to periodic boundary conditions on the bounce instanton within Euclidean time [ τ − , τ + ]-range and deserve further studies.
Prospective nature of rich analytic structure of the Euclidean-Lorentzian in-in formalism consists in the hope that quantum equivalence of purely Euclidean calculations of loop effects with those of the Lorentzian calculations can be extended to generic bounce type backgrounds.This equivalence was proven in [31,32] for the vacuum case of the flat chart of the de Sitter spacetime vs its Euclidean -S 4 instanton.A similar but much simpler equivalence at the one-loop order was observed within covariant curvature expansion in asymptotically flat spacetime for systems with the Poincareinvariant vacuum which is prescribed as the initial condition at asymptotic past infnity [50].This equivalence is realized via a special type of analytic continuation from Euclidean to Lorentzian spacetime, which guarantees unitarity and causality of relevant nonlocal form factors.
Further applications of the in-in formalism in quantum cosmology require its extension to models with local gauge and diffeomorphism invariance (see also [51] for related problem in the context of quantum electrodynamics).What have been built thus far is the formalism in the physical sector of the theory for explicitly disentangled physical degrees of freedom.In cosmological models subject to time parametrization invariance time is hidden among the full set of metric and matter field variables, and disentangling time is a part of the Hamiltonian reduction to the physical sector.This reduction shows that the cosmological background can be devoid of physical degrees of freedom (just like Friedmann equation in FRW-metric models does not involve any physical degree of freedom in the metric sector of the system).This might play a major role in handling a zero mode of the wave operator F E , which necessarily arises on the bounce type background [52] and comprises in the cosmological context one of the aspects of the problem of time in quantum gravity [11].This and the other problems of cosmological applications of the in-in formalism go beyond the scope of this paper and will be the subject of future research.
Figure 1 .
Figure 1.Picture of instanton representing the density matrix.Gray lines depict the Lorentzian Universe nucleating from the instanton at the minimal surfaces Σ− and Σ+.
Figure 2 .
Figure 2. Origin of the partition function instanton from the density matrix instanton by the procedure of gluing the boundaries Σ+ and Σ− -tracing the density matrix.
Figure 3 .
Figure 3. Density matrix of the pure Hartle-Hawking state represented by the union of two no-boundary instantons.
Figure 4 .
Figure 4. Origin of the partition function instanton from the density matrix instanton by the procedure of gluing the boundaries Σ+ and Σ− -tracing the density matrix.
Schroedinger picture (labelled by S) it reads as the chronologically ordered operator T
Figure 5 .
Figure 5. Euclidean-Lorentzian contour C on the Riemann surface of complex time z = t − iτ .Wightman functions are periodic in imaginary (Euclidean) time direction with a period β, whereas the basis function v(z) suffers a jump at the cut denoted by the horizontal dashed line, the two Lorentzian time branches running along the shores of this cut.
.65) (the last equality implies the symmetry of the Dirichlet Green's function, G T D (τ, τ ′ ) = G D (τ ′ , τ )).Substitution back to (4.58) gives 59) is obtained by analytic continuation of the Lorentzian one (3.1),namely iS[ϕ] t=−iτ = −S E [ϕ] (4.71)This implies the following form of the Euclidean action coefficient functionsA E (τ ) = A(−iτ ), B E (τ ) = −iB(−iτ ), C E (τ ) = −C(−iτ).(4.72)Though this requirement sounds rather restrictive, it can be based on the assumptions discussed in Introduction about the properties of the Euclidean background underlying the quadratic action and sandwiched between the two (identified) turning points at which the analytic match between the Euclidean and Lorentzian branches can be done.Another assumption which we use in what follows is the possibility to make a special choice of the Neumann basis functions, derived above.The first step is to rewrite the second and the third terms in the exponential of the generating functional (4.69) in terms of the Euclidean Neumann Green's function G N (τ, τ ′ ) instead of the Dirichlet one, i.e. (W E+ ω)G N (β, τ ′ ) = (W E − ω * )G N (0, τ ′ ) =0 where ω is the same as in (4.23)-(4.24).This is done using the relations (3.65)-(3.66)(after the replacement ω → −iω associated with the transition to the Euclidean version of Dirichlet and Neumann Green's functions) and the derivation in Appendix C. The result reads as the expression (4.69) with the kernel of the Lorentzian-Euclidean term −G(t, 0) w E (τ ) replaced by G(t, 0) (ω + Ω) g N (τ ) and the new form of the periodic Green's function G E (τ, τ ′ ) in the Euclidean-Euclidean block .76) Note that the boundary conditions on u ± above are exactly the analytic continuation t → −iτ of the boundary conditions (4.26) on v, v * .Now, consider in detail the matrix of boundary values of the Euclidean Neumann Green's function at τ + = β and τ − = 0 77) (double vertical bar denotes here the restruction of the two Green's function arguments to two boundary surfaces thus forming the 2×2 block matrix).Using the Euclidean version of the relation (3.66), we find the alternative form of this matrix view of the reflection symmetry (4.83) of the operator F E the functions u + (τ ) and u − (−τ ) can differ at most by some non-degenerate matrix L, u + (τ ) = u − (−τ ) L. For the normalization (4.85) this implies u + (τ ) = u − (−τ ).(4.87)For the choice (4.85) we have ∆ N +− = −∆ N −+ = I, so that the blocks of the Euclidean and Lorentzian-Euclidean Green's function in (4.73) read
. 9 )
The basis functions, satisfying (4.26) are the linear combinations of e ±iω0t which are the solutions of e.o.m. and read (cf.(3.96) and (3.103)) 6 on D 4 + -the support of the Euclidean action S E (φ) evaluated at the regular solution of equations of motion F E ϕ(τ ) = 0 with the boundary value φ = ϕ(τ + ) at the single boundary Σ + = ∂D4 + .This regular solution is given by the expression proportional to the regular basis function u(τ) of F E on D 4 + , ϕ(τ ) = u(τ )[u(τ + )] −1 φ,(5.33)because the contribution of the complementary basis function dual to the regular u(τ ) should be excluded in view of its singularity at τ = 0. 8 After the substitution into the expression for the action (4.59) its onshell value reduces to the contribution of the single surface term at Σ + , S E (φ) = 1 2 ϕ T (W E ϕ)| Σ+ .As a result S E (φ) = 1 2 φ T ωφ, and the Hartle-Hawking wavefunction Ψ HH (φ) ∝ e −S E (φ) becomes the vacuum state (3.109) with ω = −[W E u(τ + )][u(τ + )] −1 = [iW v(t)][v(t)] −1 t=0 , (5.34) which can be obtained from (2.14) by the replacement (2.27), subject to Dirichlet boundary conditions, t) P.(3.20)Before proceeding further, let us show explicitly that the right hand side(3.19)isindeedindependent of time t.To demonstrate this, we contract l.h.s of the equation(3.5)wherefield ϕ = ϕ 1 , with another field ϕ 2 , and subtract the same quantity, but with F , acting on ϕ 2 (ϕ 1,2 are not necessarily solve e.o.m.).The result can be written as .79) and means that the basis functions u + , u − obey the same Neumann boundary conditions at both boundary values of the Euclidean time (cf.Eq. (4.75)).This also implies the following explicit form of the matrices ∆ .92) This equation reduces to the Lorentzian e.o.m. for z = t and to the Euclidean ones for z = −iτ .Under the assumption that coefficient functions A(t), B(t), and C(t) are real, together with the reflection symmetry (4.83), one can find that V * (z) ≡ (V (z * )) * obeys the same equation.Moreover, the initial conditions (4.26) for v, v * are connected with those (4.75) for u ± via analytic continuation t → −iτ .This motivates us to impose the boundary condition on V as follows | 22,619.8 | 2023-09-07T00:00:00.000 | [
"Physics"
] |
Aharonov-Bohm effect in a side-gated graphene ring
We investigate the magnetoresistance of a side-gated ring structure etched out of single-layer graphene. We observe Aharonov-Bohm oscillations with about 5% visibility. We are able to change the relative phases of the wave functions in the interfering paths and induce phase jumps of pi in the Aharonov-Bohm oscillations by changing the voltage applied to the side gate or the back gate. The observed data can be well interpreted within existing models for 'dirty metals' giving a phase coherence length of the order of 1 micrometer at a temperature of 500mK.
PACS numbers:
The progress in nano-fabrication technology of graphene has lead to the realization of graphene constrictions 1,2,3,4,5,6 and quantum dots. 7,8,9,10 The same technology allows one to study phase coherent transport of charge carriers in single-and multilayer graphene. In Ref. 11, weak localization and conductance fluctuations in mesoscopic samples with about seven graphene layers were investigated. Recent transport measurements on single-layer pnp-(npn-) junctions created with a narrow top gate were interpreted in terms of Fabry-Perot interference. 12 Magnetoconductance fluctuations and weak localization effects were observed in singlelayers with superconducting contacts. 13,14 Theoretical aspects of phase-coherent conductance fluctuations in graphene nanostructures, 15 and the Aharonov-Bohm effect 16,17 have been addressed. 18,19 The Aharonov-Bohm effect has been observed in carbon-materials i.e. carbon nanotubes before 20,21 . Recently the Aharonov-Bohm effect was investigated experimentally in two-terminal graphene ring structures, and a systematic study of its dependence on temperature, density of charge carriers, and magnetic field was presented. 22 In this experiment the visibility of the Aharonov-Bohm oscillations was found to be less than 1% at low magnetic fields. It was speculated that this small value might be due to inhomogeneities in the two interferometer arms leading to a tunneling constriction that suppressed the oscillations.
In this paper we present four-terminal magnetotransport through a side-gated graphene ring of smaller size than the devices studied in Ref. 22, and demonstrate h/eperiodic Aharonov-Bohm oscillations with a visibility of more than 5%. In addition, we demonstrate that a πphase shift of the oscillations can be achieved by changing the side or back gate voltages. Fig. 1(a) displays a scanning force micrograph of the graphene ring studied in this work. The graphene * Both authors have contributed equally to this work. On each end of the ring structure, there are two graphene contact pads labeled S1/2 and D1/2 allowing us to perform four-terminal resistance measurements. The side gates labeled SG1 and SG2 are located 100 nm away from the structure. (b) Raman spectrum of the same flake before processing. The spectrum was recorded using a laser excitation wavelength of 532 nm. (c) Four-terminal resistance across the ring structure as a function of back gate voltage, with both side gates grounded. The measurement is recorded at a temperature of 500 mK with a constant current of 10 nA. flakes are produced by mechanical exfoliation of natural graphite, and deposited on a highly doped Si wafer covered by 295 nm of silicon dioxide. 23 Thin flakes are detected by optical microscopy, and Raman spectroscopy is used to confirm the single layer nature of the selected graphitic flakes. 24,25 In Fig. 1(b) we show the Raman spectrum of the graphene flake used for the fabrication of the investigated graphene ring device [ Fig. 1(a)]. The spectrum has been recorded before structuring the flake and the narrow, single Lorentzian shape of the 2D line is evidence for the single layer nature. 24,25 Electron beam lithography, followed by reactive ion etching is used to define the structure. The contacts are added in a second electron beam lithography step, followed by the evaporation of Cr/Au (2 nm/40 nm). 8 All measurements presented in this work are performed in a He 3 cryostat at a base temperature of T ≈ 500 mK. Standard low-frequency lock-in techniques are used to measure the resistance by applying a constant current. A magnetic field is applied perpendicular to the sample plane. Fig. 1(c) displays the four-terminal resistance of the ring as a function of applied back gate voltage V BG . The charge neutrality point occurs at V BG ≈ 10 V. The high resistance observed at the charge neutrality point is related to the small width W = 150 nm of the ring arms. 5 However, this width was chosen large enough that strong localization of charge carriers leading to Coulombblockade dominated transport in narrow ribbons 5,6 is not dominant. A rough estimate of the mobility taking into account the geometry of the structure and using the parallel plate capacitor model leads to µ ≈ 5000 cm 2 /Vs, comparable to the value quoted for the material used in Ref. 22. For the typical back gate voltage V BG = −5.8 V used for most of the measurements presented in this paper, the parallel plate capacitor model gives the sheet carrier density p s = 1.2 × 10 12 cm −2 .
We identify the relevant transport regime in terms of appropriate length scales. The Fermi wavelength corresponding to the carrier density mentioned above is λ F = 4π/p s = 33 nm. For comparison, at the same density the mean free path is l = µ √ πp s /e ≈ 65 nm. This is less than half of the width W of the arms, and much smaller than the mean ring radius r 0 = 275 nm and its corresponding circumference L = 1.7 µm. Therefore, the presented measurements are all close to the diffusive (dirty metal) regime, and carrier scattering at the sample boundaries alone cannot fully account for the value of the mean free path. The relevance of thermal averaging of phase-coherent effects can be judged from the thermal length l th = v F l/2k B T = 700 nm, which is significantly smaller than L. This indicates that thermal averaging of interference contributions to the conductance is expected to be relevant. Fig. 2(a) displays the four-terminal resistance of the ring as a function of magnetic field at V BG = −5.789 V. The raw data trace shows a strong modulation of the background resistance on a magnetic field scale of about 100 mT. Clear periodic oscillations can be seen on top of this background. They have a period in magnetic field ∆B AB = 17.9 mT, indicated by the vertical lines. This period corresponds to the h/e-periodic Aharonov-Bohm oscillations of a ring structure of 271 nm radius, in good agreement with the mean radius r 0 of the ring. Fig. 2(b) shows the same data with the background resistance subtracted. The background was determined by performing a running average over one Aharonov-Bohm period ∆B AB . This method was found to lead to no relevant distortion of the oscillations after background subtraction (with some exception in Fig. 3 which is of minor importance for this paper.) The amplitude of the Aharonov-Bohm oscillations is modulated as a function of magnetic field on the same scale as the background resistance, indicating that a finite number of paths enclosing a range of different areas contribute to the oscillations. This observation is compatible with the finite width W of the ring. 29 In Fig. 2(c) the fast Fourier transform (FFT) of the data in Fig. 2(a) is plotted. The peak seen at 60 mT corresponds to the h/e-periodic Aharonov Bohm effect. The width of this peak is significantly smaller than the range of frequencies expected from the range of possible enclosed areas in our geometry (indicated as a gray shaded region in Fig.2(c)). We therefore conclude that the paths contributing to the Aharonov-Bohm effect do not cover the entire geometric area of the ring arms.
In this four-terminal measurement, the oscillations have a visibility of about 5%. In general, the observed Aharonov-Bohm oscillations become more pronounced for smaller current levels, as expected. The current level of 5 nA was chosen as a good compromise between the signal-to-noise ratio of the voltage measurement and the visibility of the Aharonov-Bohm oscillations. However, due to limited sample stability, the visibility of the oscillations at a given back gate voltage depends on the back gate voltage history. Therefore measurements presented here were taken only over small ranges of back gate voltage after having allowed the sample to stabilize in this range.
Higher harmonics, especially h/2e-periodic oscillations, are neither visible in the magnetoresistance traces, nor do they lead to a clear peak in the Fourier spectrum (less than 1% of the h/e-oscillation amplitude). This indicates that the phase coherence length l ϕ < 2L, i.e., it is (significantly) smaller than twice the circumference of the ring. Given the temperature of our experiment, this esti-mate is well compatible with the phase-coherence lengths reported in Refs. 11,13,22,and 30. The measurements were taken in a magnetic field range where the classical cyclotron radius R c = k F /eB > 640 nm is bigger than the mean free path l, the ring width W , and even the ring diameter. At the same time, Landau level quantization effects are negligible, because the sample is studied in the low field regime µB ≪ 1. The only relevant effect of the magnetic field on the charge carrier dynamics is therefore caused by the field-induced Aharonov-Bohm phase.
In diffusive ring-shaped systems, conductance fluctuations can coexist with Aharonov-Bohm oscillations. However, the relevant magnetic field scale of the conductance fluctuations ∆B CF ∼ φ 0 /W l ϕ (φ 0 = h/e) can be forced to be well separated from ∆B AB = φ 0 /πr 2 0 by choosing a sufficiently large aspect ratio r 0 /W . Judging the situation from the measurement traces in Fig. 2(a), the only candidates for conductance fluctuations are the magnetic field dependent variations of the background resistance, which occur on a magnetic field scale that is at least a factor of five larger than ∆B AB . As far as the amplitude of the modulation of the background can be estimated from Fig. 2(a), it is of the order of the conductance quantum e 2 /h which is reasonable, since the condition l ϕ ∼ L implies the absence of strong self-averaging over the ring circumference L. Fig. 3 displays the four-terminal resistance of the ring as a function of magnetic field and voltage V SG applied to the side gate SG1, for two different back gate voltages without [ Fig. 3(a), (c)] and with [ Fig. 3(b), (d)] background subtraction. In the raw data [ Fig. 3(a), (c)], a modulation of the background resistance on a magnetic field scale, similar to that in Fig. 2(a), can be observed. The subtraction of the background (extracted as described before) makes the Aharonov-Bohm oscillations visible [ Fig. 3(b), (d)]. Aharonov-Bohm oscillations at different V SG display either a minimum or a maximum at B = 0 T, with abrupt changes between the two cases at certain side gate voltage values. This behavior is compatible with the generalized Onsager symmetry requirement for two-terminal resistance measurements, R(B) = R(−B). Although our measurement has been performed in four-terminal configuration, the contact arrangement with respect to the ring and the fact, that the contacts are separated by distances ≥ l φ from the ring lead to a setup where the two terminal symmetry is still very strong. [c.f., Fig. 1(a)]. Closer inspection shows that the part antisymmetric in magnetic field of each trace (not shown) is more than a factor of ten smaller than the symmetric part.
In previous studies on metal rings the effect of electric fields on the Aharonov-Bohm oscillations has been investigated, and two possible scenarios were discussed: 31 on one hand, the electric field may shift electron paths in space and thereby change the interference. On the other hand, the electric field may change the electron density and thereby the Fermi-wavelength of the carriers. We discuss the latter effect in more detail below, since the relative change in the Fermi wavelength is expected to be more pronounced in graphene compared to conventional metals.
In order to estimate which phase change ∆ϕ an electronic wave picks up on the scale of the side gate voltage change ∆V SG on which Aharonov-Bohm maxima switch to minima, we use the relation ∆ϕ = ∆k F L eff , where L eff , being the effective length of a characteristic diffusive path, is assumed to be independent of the side gate voltage, 32 whereas the change in wave number ∆k F is assumed to be caused by ∆V SG . The quantity ∆k F is found from the density change ∆p s using ∆k F = π/4p s ∆p s . The density change is related via a parallel plate capacitor model to a change in back gate voltage, i.e., ∆p s = ∆V BG ǫǫ 0 /ed (ǫ: relative dielectric constant of the silicon dioxide substrate, d: thickness of the oxide layer) leading to ∆p s /∆V BG ≈ 7.5 × 10 10 cm −2 V −1 . Finally, ∆V BG is related to ∆V SG via the lever arm ratio α SG /α BG .
In order to determine this lever arm ratio, we have performed measurements of conductance fluctuations in the plane of the back gate voltage V BG and the side gate voltage V SG (not shown). The characteristic slope of fluctuation minima and maxima in this parameter plane allows us to estimate the lever arm ratio α SG /α BG ≈ 0.2. In previous experiments on side-gated graphene Hall bars 33 we found a similar lever arm for regions close to the edge of the Hall bar whose width is roughly comparable to the width of the arms of the ring investigated here.
Using the numbers given above and using the density p s = 1.2 × 10 12 cm −2 for Fig. 3(b), we find ∆k F ≈ 1.2 × 10 6 m −1 V −1 ∆V SG . In ballistic systems the effective length of a path is given by L eff ∼ L, giving ∆ϕ ≈ ∆V SG π/1.5 V. A phase change of π would imply a change of side gate voltage on the scale of 1.5 V which is large compared with the measurement in Fig. 3(b) where this scale is of the order of 100 mV. However, in the diffusive regime, a characteristic path contributing to Aharonov-Bohm oscillations is longer by a factor of L/l ≈ 27 due to multiple scattering 34 giving ∆ϕ ≈ ∆V SG π/55 mV. A change of the side gate voltage of typically 55 mV would cause a switch of the Aharonov-Bohm phase by π, in better agreement with the observation than the ballistic estimate. The same calculation could be used to estimate the correlation voltage of the conductance fluctuations of the background resistance, in agreement with the observation in Fig. 2 and Fig. 3. This correlation voltage is on the same scale as the phase jumps of the Aharonov-Bohm oscillations. Fig. 4 shows magnetoresistance data for varying back gate voltages and V SG = 0 V. Similar to the case where the side gate was tuned, we observe variations of the oscillation patterns as well as π-phase shifts. The raw data displayed in Fig. 4(a),-shows background fluctuations with h/e-periodic Aharonov-Bohm oscillations superimposed. In Fig. 4(b), the background has been removed. be observed.
The larger visibility of Aharonov-Bohm oscillations observed in our sample, compared to the work in Ref. 22 is unlikely to be caused by better material or sample quality. Also our measurement temperature is about a factor of four higher than the lowest temperatures reported there. We therefore believe that the smaller ring dimensions in combination with the four terminal arrangement may be responsible for the larger value of the visibility observed in our experiment. In Ref. 22 the expression 29 ∆G ∝ l th /l ϕ exp(−πr 0 /l ϕ ) was invoked to explain the observed T −1/2 -dependence of the oscillation amplitude. The exponential term on the right hand side contains the radius of the ring r 0 . A smaller radius will lead to a larger oscillation amplitude which may explain the improved amplitude in our measurements. However, trying to relate the visibilities observed in the two experiments quantitatively (assuming that all experimental parameters except the ring radius are the same) would lead to a phase-coherence length l ϕ smaller than the ring circumference L and only slightly larger than the ring radius r 0 . As our experiment demonstrates, a separation of h/eperiodic oscillations from background variations due to magnetoconductance fluctuations is still possible in our device despite the aspect ratio r 0 /W which is reduced in our device compared to Ref. 22. A phase-coherence length between L and r 0 is also compatible with the observation ∆B CF /∆B AB ≈ 5.
We also note that the diffusive regime investigated in our device is quite extended in back gate voltage. Assuming diffusive scattering at the edges to become dominant as soon as l ≈ W , we estimate that this does not occur (for transport in the valence band) until V BG becomes more negative than −80 V. Transport may also enter a different regime, when the Fermi wavelength becomes larger than l, which is expected to happen (again for transport in the valence band) at back-gate voltages larger than +2 V in our sample. An even different regime may be entered at a back gate voltage of +9.3 V, where λ F ≈ W . As a consequence, the 'dirty metal' description of the Aharonov-Bohm oscillations should be applicable in the whole range of back-gate voltages shown in Fig. 1(c), except for a region of ±8 V around the charge neutrality point, where the resistance is maximum.
In conclusion, we have observed Aharonov-Bohm os-cillations in four-terminal measurements on a side-gated graphene ring structure. The visibility of the oscillations is found to be about 5%. By changing the voltage applied to the lateral side gate, or the back gate, we observe phase jumps of π compatible with the generalized Onsager relations for two-terminal measurements. The observations are in good agreement with an interpretation in terms of diffusive metallic transport in a ring geometry, and a phase-coherence length of the order of one micrometer at a temperature of 500 mK. | 4,263 | 2009-04-08T00:00:00.000 | [
"Physics"
] |
Differential Effects of Viscum album Preparations on the Maturation and Activation of Human Dendritic Cells and CD4+ T Cell Responses
Extracts of Viscum album (VA); a semi-parasitic plant, are frequently used in the complementary therapy of cancer and other immunological disorders. Various reports show that VA modulates immune system and exerts immune-adjuvant activities that might influence tumor regression. Currently, several therapeutic preparations of VA are available and hence an insight into the mechanisms of action of different VA preparations is necessary. In the present study, we performed a comparative study of five different preparations of VA on maturation and activation of human dendritic cells (DCs) and ensuing CD4+ T cell responses. Monocyte-derived human DCs were treated with VA Qu Spez, VA Qu Frf, VA M Spez, VA P and VA A. Among the five VA preparations tested VA Qu Spez, a fermented extract with a high level of lectins, significantly induced DC maturation markers CD83, CD40, HLA-DR and CD86, and secretion of pro-inflammatory cytokines such as IL-6, IL-8, IL-12 and TNF-α. Furthermore, analysis of T cell cytokines in DC-T cell co-culture revealed that VA Qu Spez significantly stimulated IFN-γ secretion without modulating regulatory T cells and other CD4+ T cytokines IL-4, IL-13 and IL-17A. Our study thus delineates differential effects of VA preparations on DC maturation; function and T cell responses.
Introduction
Extracts of Viscum album L. (VA) or European mistletoe, a semi-parasitic plant, are traditionally used for the complementary therapy of cancer and other disorders [1][2][3][4]. Several lines of evidence indicate that VA improves patient survival, reduces the damage caused by conventional cancer therapies and increases patients' quality of life [1,5,6]. Depending on the concentration used for treatment, mistletoe extracts induce tumor cell death and exert direct necrotic effects or apoptosis [2]. VA preparation is a heterogeneous mixture of several bio-active molecules, but the major components are lectin and viscotoxin. Mistletoe lectin (ML) consists of two subunits, the A chain (29 KDa) and B chain (34 KDa). The A chain is responsible for ribosome inactivation, whereas the B chain helps in binding to terminal galactoside residues on cell membrane [7,8].
Dendritic cells (DCs) are antigen presenting (APCs) and involved in mounting and modulating the immune response. Being sentinels of the immune system, DCs bridge innate and adaptive immunity. Thus, DCs are potential targets for the therapeutic intervention in immune-mediated conditions. Immature DCs expressing low MHC II on their surface are specialized in uptake of antigens. Upon receiving activation signals, DCs undergo maturation and induce distinct CD4 + T cell responses. The mature DCs express high level of MHC II and co-stimulatory molecules and secrete a large array of cytokines that mediate inflammation and CD4 + cell polarization [9][10][11][12][13][14]. However, in the absence of danger signals, presentation of self-antigens by immature DCs promotes immune tolerance by silencing the effector and autoreactive T cells and enhancing CD4 + CD25 + FoxP3 + regulatory T cells (Tregs) or T regulatory type 1 cells [9,[15][16][17][18].
As DCs have a central role in anti-tumor immune responses, efficient functioning of these cells is crucial for the success of cancer immunotherapy [19]. DCs are immature and functionally defective in cancer patients and tumor-bearing animals, possibly due to insufficient danger signals in the tumor microenvironment [20]. Further, several reports indicate that tumor cells hamper the maturation process of DCs and their capacity to prime protective T cell responses [21][22][23][24].
Our previous report demonstrates that VA Qu Spez, one of the VA preparations, induces activation of human DCs, and DC-mediated CD4 + T cell proliferation and tumor-specific CD8 + T cell responses as measured by IFN-γ and TNF-α secretion [25]. However, several therapeutic preparations of VA are currently available. Each VA preparation is heterogeneous in its chemical composition and is influenced by the host tree, harvest season and extraction method [26][27][28]. Therefore, the therapeutic outcome of a particular VA preparation might not be similar to that of other preparations [29,30]. An insight into the mechanisms of action of different VA preparations is therefore necessary to provide guidelines for the correct therapeutic use of VA preparations.
In the present study, we performed a comparative study of five different preparations of VA (VA Qu Spez, VA Qu Frf, VA M Spez, VA P and VA A) on the maturation and activation of human DCs and ensuing CD4 + T cell responses. Our data show that among five preparations tested, VA Qu Spez is the most potent inducer of DC maturation and secretion of DC cytokines. Furthermore, VA Qu Spez significantly stimulated IFN-γ secretion without modulating Tregs and other CD4 + T cytokines IL-4, IL-13 and IL-17. Our study thus delineates differential effects of VA preparations on DC maturation, function and T cell responses.
Effect of Different VA Preparations on the Maturation of DCs
Immature DCs of 5 day old were either untreated or treated with five VA preparations at four different concentrations: 5, 10, 15 and 20 µg/mL/0.5ˆ10 6 cells for 48 h. DCs were analysed for the expression of various maturation-associated surface molecules ( Figure 1A-F). We found that among five VA preparations, only VA Qu Spez was able to significantly enhance the intensity of expression of antigen presenting molecule HLA-DR, co-stimulatory molecules CD86 and CD40 and % of expression of terminal maturation marker CD83. The induction of DC maturation by VA Qu Spez was observed only at higher concentrations i.e., 15 and 20 µg. Further, the effect of VA Qu Spez on maturation of DCs was dose-dependent. The expressions of CD40 and HLA-DR were 100% on control DCs and were not altered by VA Qu Spez. VA Qu Spez also did not alter % expression of CD1a and intensity of expression of CD83.
We observed that HLA-DR expression on VA Qu Spez (20 µg) and LPS (positive control, 10 ng/ 0.5 million cells)-stimulated DCs was similar. However, induction of CD40 and CD86 by VA Qu Spez was 2-fold lesser and CD83 was 4-fold lesser than LPS. In line with our previous report on stimulation of tumor-antigen-specific cytotoxic T cell responses by VA Qu Spez-stimulated DCs [25], we found that these DCs expressed higher levels of HLA class I molecules (13.6%˘1.1% on control DCs vs. 20.6%˘3.2% on VA Qu Spez-stimulated DCs, n = 3). However, VA Qu Frf, VA M Spez, VA P and VA A did not significantly modify the expressions of any of maturation-associated molecules on DCs. These results suggest that among all preparations tested; only VA Qu Spez is able to induce maturation of DCs. significantly modify the expressions of any of maturation-associated molecules on DCs. These results suggest that among all preparations tested; only VA Qu Spez is able to induce maturation of DCs.
VA Qu Spez but Not Other VA Preparations Stimulate the Secretion of DC Cytokines
It is well reported that DC-derived cytokines play a critical role in regulating the immune responses and in polarizing distinct CD4 + T cell responses. We analysed the differential effects of various VA preparations on the secretion of DC cytokines such as IL-6, IL-8, IL-12, IL-10 and TNF-α. As VA Qu Spez significantly induced maturation of DCs, it was likely that this effect is associated with modulation of DC cytokines. In fact, compared to control DCs, VA Qu Spez-treated DCs showed significantly increased secretion of IL-6, IL-8, IL-12 and TNF-α (Figure 2A-C,E). Control DCs secreted 4.7 ± 5.1 pg/mL of IL-6 and was enhanced to 156.9 ± 105.1 pg/mL by VA Qu Spez. In case of IL-8, control DCs secreted 102.2 ± 78.5 pg/mL, whereas VA Qu Spez at the highest concentration induced 612.1 ± 20.4 pg/mL. The Th1-polarizing cytokine IL-12 was secreted at 3.3 ± 4.9 pg/mL by control DCs and was increased to 10.4 ± 6 pg/mL by VA Qu Spez-treated DCs. TNF-α secretion by untreated DCs was 3.2 ± 2.1 pg/mL, and with VA Qu Spez treatment, this cytokine was increased to 135.7 ± 37.9 pg/mL. We could observe a moderate but insignificant induction of the aforementioned DC cytokines by VA Qu Frf and VA M Spez. However, VA P and VA A did not modulate any of the DC cytokines (Figure 2A-C,E). These results show that VA Qu Spez is the most potent preparation that induces both maturation and cytokines by DCs. Of note, production of IL-10, an immunosuppressive cytokine was unaltered upon VA Qu Spez treatment ( Figure 2D). Together, our data suggest that VA
VA Qu Spez but Not Other VA Preparations Stimulate the Secretion of DC Cytokines
It is well reported that DC-derived cytokines play a critical role in regulating the immune responses and in polarizing distinct CD4 + T cell responses. We analysed the differential effects of various VA preparations on the secretion of DC cytokines such as IL-6, IL-8, IL-12, IL-10 and TNF-α. As VA Qu Spez significantly induced maturation of DCs, it was likely that this effect is associated with modulation of DC cytokines. In fact, compared to control DCs, VA Qu Spez-treated DCs showed significantly increased secretion of IL-6, IL-8, IL-12 and TNF-α ( Figure 2A-C,E). Control DCs secreted 4.7˘5.1 pg/mL of IL-6 and was enhanced to 156.9˘105.1 pg/mL by VA Qu Spez. In case of IL-8, control DCs secreted 102.2˘78.5 pg/mL, whereas VA Qu Spez at the highest concentration induced 612.1˘20.4 pg/mL. The Th1-polarizing cytokine IL-12 was secreted at 3.3˘4.9 pg/mL by control DCs and was increased to 10.4˘6 pg/mL by VA Qu Spez-treated DCs. TNF-α secretion by untreated DCs was 3.2˘2.1 pg/mL, and with VA Qu Spez treatment, this cytokine was increased to 135.7˘37.9 pg/mL. We could observe a moderate but insignificant induction of the aforementioned DC cytokines by VA Qu Frf and VA M Spez. However, VA P and VA A did not modulate any of the DC cytokines ( Figure 2A-C,E). These results show that VA Qu Spez is the most potent preparation that induces both maturation and cytokines by DCs. Of note, production of IL-10, an immunosuppressive cytokine was unaltered upon VA Qu Spez treatment ( Figure 2D). Together, our data suggest that VA Qu Spez significantly induces several pro-inflammatory cytokines without modulating immune-suppressive cytokine IL-10. Qu Spez significantly induces several pro-inflammatory cytokines without modulating immunesuppressive cytokine IL-10.
Differential Effects of VA Preparations on the CD4 + T Cell Response
One of the key functions of APC is to promote CD4 + T cell responses. DCs primed with various preparations of VA were co-cultured with CD4 + T cells and Th1, Th2, Th17 and Treg responses were determined by flow cytometric analysis of intracellular IFN-γ (Th1), IL-4 (Th2), IL-17A (Th17), FoxP3 (Treg). Although VA Qu Spez induced maturation of DCs, this effect was not associated with the modulation of frequency of any of the T cell subsets ( Figure 3A-H). However, analysis of amount of secretion of T cell cytokines in DC-CD4 + T cell co-culture revealed that VA Qu Spez significantly stimulated IFN-γ secretion ( Figure 4A), without having any effect on the secretion of IL-4 ( Figure 4B), IL-13 ( Figure 4C) and IL-17A ( Figure 4D). These results suggest that VA Qu Spez selectively favours Th1 responses without modulating Th2, Th17 and Treg responses. Other four preparations of VA did not alter either frequency of T cell subsets or secretion of various T cell cytokines. These results were in line with the fact that VA Qu Frf, VA M Spez, VA P and VA A did not induce maturation and activation of DCs.
Differential Effects of VA Preparations on the CD4 + T Cell Response
One of the key functions of APC is to promote CD4 + T cell responses. DCs primed with various preparations of VA were co-cultured with CD4 + T cells and Th1, Th2, Th17 and Treg responses were determined by flow cytometric analysis of intracellular IFN-γ (Th1), IL-4 (Th2), IL-17A (Th17), FoxP3 (Treg). Although VA Qu Spez induced maturation of DCs, this effect was not associated with the modulation of frequency of any of the T cell subsets ( Figure 3A-H). However, analysis of amount of secretion of T cell cytokines in DC-CD4 + T cell co-culture revealed that VA Qu Spez significantly stimulated IFN-γ secretion ( Figure 4A), without having any effect on the secretion of IL-4 ( Figure 4B), IL-13 ( Figure 4C) and IL-17A ( Figure 4D). These results suggest that VA Qu Spez selectively favours Th1 responses without modulating Th2, Th17 and Treg responses. Other four preparations of VA did not alter either frequency of T cell subsets or secretion of various T cell cytokines. These results were in line with the fact that VA Qu Frf, VA M Spez, VA P and VA A did not induce maturation and activation of DCs.
Discussion
Currently available mistletoe extracts are highly heterogeneous due to differences in the host trees, nutritional source, season of harvest, and extraction methods [4,[26][27][28]. Therefore, VA preparations could exert divergent biological activities. However, comparative study of immunomodulatory properties of different VA extracts on immunocompetent cells such as DCs has not been performed to date. The present data therefore provide guidelines for the therapeutic use of VA preparations.
IFN-γ plays an important role in mediating the protective immune response against cancer, viral and intracellular bacterial infections [31]. IFN-γ enhances MHC class I expression on tumor cells and MHC class II expression on APCs like DCs, which in turn link innate and adaptive immunity [32]. IFN-γ responsiveness of tumor cell is important for the successful immune recognition. Indeed, it has been demonstrated that mice that are non-responsive to IFN-γ develop more tumors as compared to wild-type mice. Studies have shown that cross-talk between lymphocytes and IFN-γ/STAT1 signalling pathway plays an important role in maintaining the immune competiveness of the host [33]. Idiotype-specific CD4 + Th1 cells can achieve tumor apoptosis directly by Fas/Fas L interaction and indirectly by IFN-γ production [34]. Thus, IFN-γ pathway is considered as an extrinsic tumor-suppressor mechanism [35]. We found that VA Qu Spez significantly enhances IFN-γ production without modulating Treg subsets and production of other T cell cytokines IL-4, IL-13 and IL-17A. This selective enhancement of Th1 cytokine strongly supports the use of VA as an immune modulator.
The success of DC-based cancer immunotherapies is dependent on the maturation status of DCs, their migration capacity and ability to mount protective T cell responses [36]. DC immunotherapy for cancer in humans though shown promises, it has not met with great success as compared to therapeutic molecules that target immune checkpoints. The reasons are multiple including poor survival of transferred DCs, limited number of DCs reaching the secondary lymphoid organs, heterogeneity in the DC subtypes and immune suppressive environment created by the tumor. Previous reports have shown that PGE2 produced by DCs mediate Treg expansion [37][38][39], which might help in tumor evasion. Vaccination of cancer patients with 'PGE2-educated DCs' also induced Treg expansion in the patients [40]. We observed that VA Qu Spez did not modulate Treg responses suggesting that VA Qu Spez selectively induces IFN-γ responses. Although not examined in DCs, we have recently shown that VA Qu Spez inhibits COX2-mediated PGE2 in epithelial cell line [41,42]. Therefore, it is likely that VA Qu Spez-mediated suppression of COX-2 in DCs might be responsible for nonmodulation of Tregs in the present study. As these data are from the in vitro experiments, further work is necessary to validate these results from the patients treated with VA. Of note, through enhancement of Fas/FasL
Discussion
Currently available mistletoe extracts are highly heterogeneous due to differences in the host trees, nutritional source, season of harvest, and extraction methods [4,[26][27][28]. Therefore, VA preparations could exert divergent biological activities. However, comparative study of immunomodulatory properties of different VA extracts on immunocompetent cells such as DCs has not been performed to date. The present data therefore provide guidelines for the therapeutic use of VA preparations.
IFN-γ plays an important role in mediating the protective immune response against cancer, viral and intracellular bacterial infections [31]. IFN-γ enhances MHC class I expression on tumor cells and MHC class II expression on APCs like DCs, which in turn link innate and adaptive immunity [32]. IFN-γ responsiveness of tumor cell is important for the successful immune recognition. Indeed, it has been demonstrated that mice that are non-responsive to IFN-γ develop more tumors as compared to wild-type mice. Studies have shown that cross-talk between lymphocytes and IFN-γ/STAT1 signalling pathway plays an important role in maintaining the immune competiveness of the host [33]. Idiotype-specific CD4 + Th1 cells can achieve tumor apoptosis directly by Fas/Fas L interaction and indirectly by IFN-γ production [34]. Thus, IFN-γ pathway is considered as an extrinsic tumor-suppressor mechanism [35]. We found that VA Qu Spez significantly enhances IFN-γ production without modulating Treg subsets and production of other T cell cytokines IL-4, IL-13 and IL-17A. This selective enhancement of Th1 cytokine strongly supports the use of VA as an immune modulator.
The success of DC-based cancer immunotherapies is dependent on the maturation status of DCs, their migration capacity and ability to mount protective T cell responses [36]. DC immunotherapy for cancer in humans though shown promises, it has not met with great success as compared to therapeutic molecules that target immune checkpoints. The reasons are multiple including poor survival of transferred DCs, limited number of DCs reaching the secondary lymphoid organs, heterogeneity in the DC subtypes and immune suppressive environment created by the tumor. Previous reports have shown that PGE 2 produced by DCs mediate Treg expansion [37][38][39], which might help in tumor evasion. Vaccination of cancer patients with 'PGE 2 -educated DCs' also induced Treg expansion in the patients [40]. We observed that VA Qu Spez did not modulate Treg responses suggesting that VA Qu Spez selectively induces IFN-γ responses. Although not examined in DCs, we have recently shown that VA Qu Spez inhibits COX2-mediated PGE 2 in epithelial cell line [41,42]. Therefore, it is likely that VA Qu Spez-mediated suppression of COX-2 in DCs might be responsible for nonmodulation of Tregs in the present study. As these data are from the in vitro experiments, further work is necessary to validate these results from the patients treated with VA. Of note, through enhancement of Fas/FasL expression and caspase activation, IFN-γ has been shown to enhance apoptotic response to ML II in human myeloid U937 cells [43].
MLs are the active components of mistletoe extracts and have several functions. The cytotoxicity of mistletoe is attributed majorly to its lectin contents [44,45] and lectin internalization is required for ML-I-mediated apoptosis [46]. MLs are responsible for stimulating cells of the innate and adaptive immune system such as DCs, macrophages, natural killer cells, and B and T lymphocytes. This function of MLs might represents one of the mechanisms responsible for the anti-tumoral and immunomodulatory effects of mistletoe extracts. It is known that ML-I B chain causes Ca 2+ influx in Jurkat cells and is mediated by its interaction with surface glycoprotein receptors [47]. Chemical labelling of the lectin revealed that it binds to surface of peripheral and intra-tumoral monocytes [48].
A recent study shows that 3D structure of ML-A chain shares structural homology with shiga toxin from Shigella dysenteriae and provides an explanation for the strong immune stimulatory capacity of ML [49]. It is also demonstrated that Korean mistletoe lectin (KML) induces activation of innate cells by TLR4-mediated signalling [50]. The nature of the receptor(s) on DCs that recognizes ML and mediates activation is not known. Since Korean ML and European ML share 84% sequence identity [51], it is presumable that European ML might signal DCs via TLR [49]. However, we found that not all VA preparations are stimulatory on DCs. VA Qu Frf, an unfermented preparation containing the highest concentration of lectin and viscotoxin was unable to activate DCs. Other VA preparations, which are fermented and contain low lectin, were also unable to stimulate DCs, whereas VA Qu Spez, a fermented preparation that contains the second highest concentration of lectin (785˘10% ng/mL) efficiently activated DCs and promoted Th1 response. These results suggest that mere lectin content in a VA preparation does not necessarily determine its immunostimulatory capacity. The methodology of preparation, i.e., fermented vs unfermented, might be crucial for conferring the stimulatory properties to VA. Alternatively, the fermentation process might modify the structure of the lectins of the VA preparation.
To conclude, our study delineates the differential effects of various VA preparations on DC maturation, function and T cell responses. These results reveal that VA Qu Spez is the most potent preparation in activating DCs and promoting Th1 response. The current evidence to support mistletoe therapy in oncology is weak [52]. Thus, this study along with other reports on mistletoes [53][54][55][56][57][58][59][60] provides a rational for examining the use VA as an immune modulator. Such mechanistic studies are also important to undertake randomised clinical trials to improve level of evidence for the use of VA in complementary therapy of cancer.
VA Preparations
Five clinical grade preparations of VA (VA Qu Spez, VA Qu Frf, VA M Spez, VA P and VA A) obtained from Hiscia Institute, Verein für Krebsforschung (Arlesheim, Switzerland) were used. These preparations were free from endotoxins and were formulated in 0.9% sodium chloride isotonic solution as 5 mg/mL vials. The chemical compositions of the VA preparations are provided in Table 1.
Human DCs
Human monocyte-derived DCs were used as a source of DCs. Peripheral blood mononuclear cells (PBMC) were isolated from buffy coats of healthy donors. The buffy coats were purchased from Centre Necker-Cabanel (EFS, Paris, France). Ethics committee approval for the use of such material (Institut National de la Santé et de la Recherche-EFS Ethical Committee Convention N˝12/EFS/079) was obtained and experiments were performed in accordance with the approved guidelines of INSERM. Circulating monocytes were isolated using CD14 microbeads (Miltenyi Biotec, Paris, France) and were cultured for 5 days in RPMI 1640 containing 10% fetal calf serum, rhIL-4 (500 IU/10 6 cells) and rhGM-CSF (1000 IU/10 6 cells) to obtain immature DCs [61].
Viscum Album Treatment of DCs
Immature DCs were washed and cultured in rhIL-4 and rhGM-CSF and treated with VA Qu Spez, VA Qu Frf, VA M Spez, VA P and VA A at four different concentrations: 5, 10, 15 and 20 µg/mL/ 0.5 million cells for 48 h. Cell culture supernatants were collected for analysing the cytokines and DCs were analysed for the phenotype by flow cytometry.
DC: CD4 + T Cell Co-Cultures
CD4 + T cells were isolated from the PBMC using CD4 microbeads (Miltenyi Biotec). VA-treated DCs were washed extensively and seeded with 1ˆ10 5 responder allogeneic CD4 + T cells at DC: T cell ratio of 1:10. On 5th day, CD4 + T cell responses were analysed by intra-cellular staining for specific T cell cytokines (IFN-γ, IL-17A and IL-4) and transcription factor (FoxP3). The cell-free culture supernatants were analysed for the cytokines secreted.
Flow Cytometry
For surface staining, following Fc receptor blockade, antibodies against surface molecules were added at pre-determined concentration and incubated at 4˝C for 30 min. FITC-conjugated monoclonal antibodies (MAbs) to CD1a, CD86, HLA-DR, and CD25; PE-conjugated MAbs to CD83 (all from BD Biosciences, Le Pont de Claix, France), CD40 (Beckman Coulter, Villepinte, France) and Alexa Fluor ® 700-conjugated MAbs to CD4 (eBioscience, Paris, France) were used for the analysis of surface phenotype.
Cells were acquired on LSR II and processed with FACS DIVA software (BD Biosciences) and analysed by Flowjo. The data were presented as % positive cells for indicated markers or mean fluorescence intensities (MFI) of their expression.
Conclusions
Our study demonstrates the differential effects of various VA preparations on human DC activation and ensuing CD4 + T cell responses. Our data reveal that VA Qu Spez is the most potent VA preparation in activating DCs and promoting Th1 response. | 5,510.4 | 2016-07-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
E-learning is a burden for the deaf and hard of hearing
When considering deaf and hard of hearing (DHH) population, research recognizes that fatigue due to communication challenges and multi-focal attention allocation is a significant concern. Given the putative heightened demands of distance learning on deaf and hard of hearing students, we investigate how an online environment might differently affect deaf and hard of hearing participants, compared to hearing participants, Portuguese Sign Language (PSL) users and non-users. Our findings show that the deaf and hard of hearing group present higher values in the post-task fatigue rates with significant differences from the hearing group (non-PSL users). Furthermore, our results revealed an association between post-task fatigue rates and lower performance scores for the deaf and hard of hearing group, and the gap is significantly bigger when compared with the hearing group (non-PSL users). We also found evidence for high levels of post-task fatigue and lower performance scores in the hearing group PSL users. These novel data contribute to the discussion concerning of the pros and cons of digital migration and help redesign more accessible and equitable methodologies and approaches, especially in the DHH educational field, ultimately supporting policymakers in redefining optimal learning strategies.
www.nature.com/scientificreports/ however, dependent on the efficiency of the instructional design that should consider modern learning theories, like the Cognitive Load Theory (CLT). This theory assumes that a cognitive load occurs when cognitive processing requirements exceed the capacities already available to students 17 . Researchers state that the CLT is concerned with the instructional implications of interaction between information structures and cognitive architecture. However, in tandem with the "interactivity" element, the way in which information is presented to learners and the learning activities required can also impose a cognitive load 18 . Exposure to High Cognitive Load (HCL) levels, in conditions where the time to process ongoing cognitive demands is restricted, also leads to increased Cognitive Fatigue 19 . Within the Cognitive Theory of Multimedia Learning (CTML), cognitive load on DHH students can be measured 17 , 20,21 in dimensions as mental demand, physical demand, temporal demand, performance, effort, and frustration. Findings showed that the use of multimedia resources proved to be insufficient in the acquisition of scientific concepts by deaf students of elementary / high school, since, if poorly designed, the multimedia presentations can increase the levels of Cognitive Load and act as a barrier to the learning process, instead of acting as a facilitator. Given the quantity and diversity of informational modalities, studies indicate that presentation format may make it difficult for students to grasp the taught concepts effectively 17,20,21 .
As we witness a COVID-19 motivated push towards digital migration with the transition to online work and online classes speeding up without careful impact analyses, the effects of this accelerated transition towards distance learning modalities within the DHH population, must be thoroughly investigated. Here we search for evidence on the putative differential impact that a traditional model of an e-learning situation might have on DHH compared to PSL hearing users and non-users 22 .
We believe that due to COVID-19, given the short time to adapt to distance learning scenarios, learning situations migrated rapidly to virtual environments without the necessary adjustments in the design of multimedia resources, specifically for the DHH population, namely, to concern issues of cognitive load and fatigue 23 . Given the putative heightened demands of distance learning on DHH students, we aim to investigate how an online environment might differently affect DHH compared to Hearing participants 24,25 . Here, we infer the consequential fatigue involved in an e-learning situation based on performance and fatigue scores. In line with this, we applied a Fatigue Assessment Scale (FAS) to quantify fatigue before the experiment procedure, upon participant recruitment. The FAS is a validated and standardized self-report Likert scale [26][27][28] . We used it to generate an initial baseline for comparison with later results from this study. Immediately after the e-learning presentation, the participants also indicated on a Visual Analogue Scale (VAS) [29][30][31][32][33] , the level of mental and physical fatigue perceived post-task and proceeded to submit a performance test, pertaining to information conveyed in the online class.
Participants.
We chose an ex-post facto experimental type design for which we developed an ecological e-learning situation wherein the information conveyed was kept constant across selected groups, while comparing fatigue and learning outcomes between Portuguese adult samples (n = 51), namely: a group of deaf and hard of hearing participants (DHH; n = 17) proficient in Portuguese sign language, a group of hearing participants (PSL; n = 17) proficient in PSL and a control group of hearing participants unfamiliar with sign language (C; n = 17).
Individuals identified themselves as DHH/hearing individuals, Sign Language proficient users upon recruitment for this study. The procedure was similar for individuals unfamiliar with PSL.
To fulfil the requisites for parametric testing 34 when dealing with 2 to 9 groups, we strived to recruit more than 15 participants per group. Participants were recruited via convenience sampling. All methods were carried out in accordance with the Declaration of Helsinki 35 guidelines for human research and approved by the University's local ethics committee (Comissão de Ética para a Saúde da Universidade Católica Portuguesa). All participants gave their informed consent prior to enrolment. The images that directly identify people involved in the study are from one of the researchers (the presenter) and from the Portuguese Sign Language Interpreter, who gave informed consent for publication of identifying images in an online open-access publication.
Task design and procedure. The study was developed across four moments-in T0: completion of an online visual literacy test (in tandem with participant recruitment) and of the Fatigue Assessment Scale (FAS); T1: online class attendance; T2: VAS completion measuring both mental and physical fatigue and T3: performance questionnaire (Fig. 1).
The participants were requested to access and complete online forms, containing the self-completion scales and 2 tests: an art literacy test, the FAS, a VAS to assess mental and physical fatigue levels and a multiple-choice performance Test. During the whole procedure, participants were assessed individually.
Visual art literacy test. To form homogeneous groups in terms of visual art literacy, participants completed an online test upon recruitment. Visual literacy pertains to the knowledge and use of visual elements in visual communication, knowledge and use of specific vocabulary, and the ability to present, respond, and connect through symbolic and metaphoric forms that are unique to the visual arts. The test consisted of 10 multiplechoice questions with a score of 10 points each, concerning the information conveyed by different sets of images.
The Fatigue Assessment Scale (FAS).
At the time of recruitment, participants were asked to complete a fatigue rating scale-assessment Fatigue Scale (FAS). The FAS score is obtained from a 10-item scale that evaluates symptoms of chronic fatigue. Some examples of FAS questions are: "Physically I feel exhausted", "I have trouble thinking with clarity" or "Fatigue bothers me" (see Supplementary Information S1). The scale had a filling time of approximately 2-3 min, without a time limit. We used a validated and authorized Portuguese version of FAS 36 . We chose to apply the FAS questionnaire before experimental manipulation to obtain an initial The Videographic stimuli. After the tasks performed upon recruitment, the tasks to be performed following the experimental design were scheduled: viewing the online class followed by filling out the VAS and the performance testing. For this purpose, participants received a new Zoom link to access these contents. The introduction to the online class was presented (voice and image of the presenter) in oral and written (Portuguese Language) in tandem with Sign (Portuguese Sign Language). During this, the presenter gave instructions for the completion of task. The online class content started immediately after this brief information and contemplated information concerning four different works of art. Art works allow the establishment of bridges between the visual-verbal language through the "reading" of the visual narrative of the pictorial composition. We consider, therefore, artistic teaching to be a very relevant topic since it showcases information transmission according to the multimodal dimension. The information conveyed was based on images of works of art and its description, presenting a simultaneous combination of semiotic resources of dual nature (visual and verbal). The presentation lasted 35 min and contained information about artworks, the artists and historical contextualization. To achieve this, we designed a screen display with the simultaneous presentation of information in the following modalities: visual and auditory (teacher) visual (stimulus to be learned), sign language (PSL translator) and written topics corresponding to the presented discourse. For each work, the presenter transmitted oral information for approximately 10 min, with simultaneous translation in Portuguese Sign Language. The written information appeared as a short sentence at the bottom of the screen (type of short caption called "oracles"). Regarding the presentation format, the following screen display structure was presented to all participants, designed to mimic a typical online presentation (Fig. 2). After the intro clip, four different online class modules were randomly presented, pertaining to different works of art. According to the above, the chosen works are labelled A, B, C and D. The selection of artworks was made from the Fundação Calouste Gulbenkian publication named Primeiro Olhar (2002), an integrated Visual Arts Education Program 37 . The use of reproductions of these artworks is lawful, not having a lucrative purpose as they are intended for academic use only (Fig. 3).
The Visual Analogue Scale (VAS).
Immediately after the video presentation, the participants should indicate on 2 different VAS scales the level of mental and physical fatigue perceived at that moment. Visual Analogue Scales are commonly used to measure magnitude of internal states such as pain stress, anxiety, mood and various functional capabilities 30,31 . VAS is a psychometrical measurement instrument that makes use of self-reported quantity of symptoms, emotional states, and attitudes. Its advantage as a measurement instrument relies in a format that covers a range of continuous values, for subjective indicators that cannot easily be directly measured. Since a VAS can measure any subjective construct, we decided to use it to measure the subjective feeling of physical fatigue and the subjective feeling of mental fatigue as the VAS is sensitive to small changes in intensity. The VAS is also very useful because the line bisection can then be converted into mm which translate into a numerical score that can be parametrically analysed 33 . The use of VAS with DHH populations has been documented as an instrument to determine degree and type of hearing loss and has proven to have an important role in measuring hard of hearing participants´ perceptions, especially with young adult populations 38 .
A VAS can be presented either vertically or horizontally. It takes the shape of a 100 mm line without numerical anchors. The participant is required to bisect the line more to the left or right (more to the top or to the bottom) www.nature.com/scientificreports/ depending on how much the subjective construct is felt. Thus, the VAS is useful in situations wherein a subjective construct needs to be measured without the bias of numerical anchors. After submitting the form, a conversion to numerical values is generated, allowing for a parametric analysis of the results. In the first VAS the participants are prompted to answer the following question: "After completing this task I feel mentally…". And in the second VAS the participants answer the following question: "After completing this task I feel physically…".
The performance test. Following the VAS completion, a set of ten online questions about the information conveyed were presented, using the multiple-choice modality for answering. This task did not impose a pre-set time limit, and the time taken to complete the performance test by each group was later analyzed. Questions such as: "A work painted on wet cloth was presented that represents a gestural attitude. Tick the false option." Here, the word descriptors were linked to fatigue levels, i.e., on the left side we indicated "not at all tired" and on the right side we indicated "extremely tired". The scale was presented online with a digitally sensitive slider bar format. The participants were instructed to slide the bar between anchors to report their subjective feeling of fatigue. By dragging the slider bar, the participant indicated a higher value when approaching the right end (extremely tired), and a lower value or less fatigue, when sliding the bar towards the left end (not at all tired) (Fig. 5): Although no values were presented, clicking on a point of the 100 mm line with the slider allowed for posterior conversion into a score between 0 and 100 for parametric analyses. Bonferroni's post hoc test was used with adjustment for multiple comparisons, and a significant difference was found, for p < 0.05, between the DHH group and the Control group (p = 0.046). We calculated Cohen's D to verify the magnitude of the effect for this difference and observed a high magnitude of the effect (d = 0.87) (Fig. 6).
The multiple-choice performance test. As already mentioned, the online class purposely reproduced a traditional model, commonly used in academia, but not only. Meetings, conferences, or seminars share the same type of display, combining images, text, presenter/lecturer voice and sign language translation. This eco- www.nature.com/scientificreports/ logical situation delivered to participants as framed in our experimental design, followed the recognition of adverse effects that poor or inadequate multimedia instructional material brings to DHH students, since it is recognized in recent studies on the use of educational technologies, in distance learning for the deaf during the pandemic, that the presence of an interpreter on the screen might lead to erroneous assumptions on the accessibility and efficacy of online classes 22 . Accordingly, we wanted to know the extent of these adverse effects in the performance dimension, in which the participants had to recall the information received in the online class to respond to a set of ten questions. As we claimed before, our hypothesis is that the DHH performance would be below the other group results, so we computed a one-tailed analysis for statistical significance. One way ANOVA for correlated samples test shows differences between the 3 groups in the Performance total score
Discussion
Here, we aimed to investigate how an e-learning environment, such as an online class, might differently affect the participant groups in this study, with focus in fatigue assessment, and performance. We recruited participants from three distinct groups, DHH participants, hearing participants, and a control group. We applied different instruments at different timings. The FAS and a Visual art literacy test when recruiting participants and a VAS at a post-task moment. The Visual art literacy test was used to assess the knowledge and understanding of the language and codes of the visual arts, without the influence of the subsequent experimental procedures. The three groups did not present any differences in such knowledge so we could assume that the putative differences in the performance post-task test would be due to how they were able to acquire and process the information conveyed during the online class. The FAS was used to obtain an initial baseline of everyday life fatigue and concerning the FAS questionnaire results, the DHH group revealed subtle higher rates, but no statistical differences were found between the three groups. The use of PSL was an important factor in the selection of participants due to the multimodal and bilingual nature of the instructional material used in the online class. Here, this variable does not seem to impact DPSL in daily fatigue when compared to the DHH group. As we didn't find statistically relevant differences between groups, our FAS results differ from previously described self-reported results by groups of hearing-impaired individuals, i.e., listeners with hearing loss that reported high levels of listening effort wherein the experimental design included the use of FAS 39 . In fact, previous literature states that, daily, DHH adults report higher fatigue rates in a consistent way, associated with sustained visual attention combined with listening effort to grasp environment information and respond to cognitive tasks 1-3,6,13 . It is possible that the tasks that lead to such fatigue are more strenuous (such as an online class) than the daily tasks experienced by our participants. Indeed, when attending to the VAS scores (mental and physical), the DHH have the highest posttask fatigue scores and significantly differ from the Control group. The DHH group have the closest maximum values between mental and physical fatigue, indicating the relationship between the two fatigue dimensions in a post-task moment involving cognitive demands. Furthermore, our results reveal an association between post-task fatigue rates and lower performance scores for the DHH. Optimal methods and tools used in the classroom, to direct and maintain visual attention, can prevent DHH students from visual attention strays and keep the connection to the delivered information that, otherwise, becomes tenuous increasing potential mental fatigue 40 . Again, the differences are significantly bigger when compared with the non-PSL users hearing participants. Here, and diverging from the FAS results, the PSL variable seems to have contributed to an increase in both mental and physical fatigue of the hearing group. Previous literature shows that individuals have a limited processing capacity and must select pertinent information www.nature.com/scientificreports/ from the multitude of available sensory input. This limitation is evidenced in the attention processing mechanism such as divided attention as it relates to the optimal allocation of resources between different sets of input by splitting or rapid shifting of the attentional focus, given the inability to process stimuli in one or several sensory modalities in parallel 41 . This process becomes more difficult with the quantity and complexity of the component tasks, suggesting that dividing attention between simultaneous stimuli intensifies and recruits additional neurocognitive resources, and may lead to limitations on attention span and cognitive load management 17,20 . Also, bilingual bimodal individuals might have experienced here the processing of code-blending stimuli (speech and sign simultaneously) which is analogous to a cognitive demanding sociolinguistic code-switching in communication i.e., it is harder to suppress a second language when that second language uses a different modality 15,41 . PSL individuals might have tried to suppress PSL to pay attention to the oral language (or the opposite), trying a complete suppression of the non-selected language and thus experiencing higher levels of fatigue. According to the literature, visuospatial attention is altered by early deafness but, interestingly, research about the gaming experience with DHH adults has proven that training visual peripherical responses in gaming (videogames) have an important role in the achievement of better visuospatial attention control, that is, the type of response to gaming challenges might contribute to minor potential visuospatial distractions. However, in our online class, which strongly differs from a traditional classroom context, we acknowledge the inherent problems of the distribution of visual attention, since all the information conveyed was relevant, contrary to the studied effect of the video game experience, which manages to train visuospatial attention by a combination of relevant-irrelevant visual stimuli using Flanker tasks 42 .
We consider that, in our research, the augmentation of the attention stray and split-attention effect occurred in tandem with poorly designed instructional/educational materials, namely the inadequate design of multimedia instructional resource [17][18][19][20] . In fact, this effect was confirmed by the VAS fatigue scores, as we consider having presented an ecological e-learning situation which hardly meets the needs of DHH students, due to its problematic simultaneous stimuli input, with no concern for interactivity situations between presenter and participant, pauses between contents, opportunities to evoke and consolidate information and diversity in the designed modality for content presentation (e.g., screen display elements).
Also emphasized before, test performance times were, on average, similar in the 3 groups. These data lead to an unavoidable analysis of the issue of the duration of assessment moments in classes with DHH students as, in this case, no time limit was imposed to complete the task, and an extension period would not have positively influenced test scores for the DHH: this group performed the worst of the 3 groups and presented the highest fatigue rates.
From our analysis, the consistency of results between DHH and PSL group also stands out: the levels of mental and physical fatigue in post-task effort relates with lower performance scores, i.e., PSL nonusers feel less fatigue and achieve better performance scores.
Interestingly, an innovative dimension of our study emphasizes the situation of the hearing participants PSL users, mostly working as Sign Language Interpreters. We showed that for slight non-significant lower levels of daily fatigue (FAS), similar fatigue levels are obtained at post-task, when compared to DHH, as well as lower performance scores in the performance test. It is possible that this group (PSL) might have felt a similar cognitive overload and a subsequent fatigue sensation due to the limitations of the divided attentional mechanisms. That is, as they are PSL users and fluent in the dimension of oral Portuguese, the integration of information through simultaneous multimodal channels made it difficult to grasp the contents of the online class.
It should be noted that the relationship between stress and burnout in Sign Language Interpreters has been established in the literature confirming burnout dimensions such as emotional exhaustion, depersonalization, and personal accomplishment 43 . Although interpreters work situation may vary (e.g., daily working hours, different schedules, working location/setting), research has looked closely to some occupational demands that suggest possible predictors of stress and burnout in educational interpreters, such as workload, responsibility, perceived control, and co-workers support, among others. Investigation shows that educational interpreters experience high work demands, which are congruent with our experiment results, as these 2 groups might have been impacted by levels of distractibility with subsequent out-turn on both fatigue and test performance scores.
Concomitantly, our results are consistent with the literature review regarding the risk of ineffectiveness of poor or inadequate multimedia resources for the DHH population 17 , 20 . We have also confirmed the need, according to the available literature, of optimizing interactive cognitive tasks in multimedia instructional design, as they help in creating more flexible and engaging learning dynamics, different cognitive demands as well as the opportunity to control fatigue through breaks and recovery time 8,[22][23][24][25] .
Overall, our results indicate levels of mental and physical fatigue consistent with research in the field of deafness and cognitive load 19,21,23,24 , and the consequent constraints on the maintenance of attentional mechanisms 10,12 in demanding cognitive tasks. Together, our results seem to show that, when DHH are asked to visualize the multimedia stimulus in the format presented in our research, there is a combination of factors that negatively affect both the apprehension of the conveyed information and simultaneously lead to an increase in levels of mental and physical fatigue. In line with previous research, our study sheds light into the attentional split mechanism affecting hearing participants who use PSL (bimodal and bilingual) but that does not seem to interfere with PSL non-users, on post task fatigue rates or performance scores 41 .
Given the frequent exposure of learning situations in the e-learning modality during the last two years with periods of confinement due to pandemic for COVID-19, we are aligned with research in the field of communication technologies and multimedia instructional material for DHH students, that assert the need to reconsider the limitation imposed by the combination of audio/video channels as unquestionable assumptions on which multimedia design theories and principles are based 14,17,18 . Our findings state a clear association between cognitive load and low achievement i.e., whenever cognitive load increases, the apprehension and memorization of the conveyed concepts decreases. In our study, in addition to the mental dimension, CL is also self-reported in www.nature.com/scientificreports/ terms of physical fatigue. These results are confirmed by the principles of Cognitive Load theory and outline the importance of prioritizing the assumptions upon which Cognitive theory of multimedia learning stands on the design of educational materials that reduce CL, enhancing effective learning 17,20,21,24,25.
Conclusion
The e-learning reality is not an unprecedented reality, namely in the educational field. Given the great adhesion to information and communication technologies, the DHH population is generally elected for distance learning modalities. However, during the pandemic period triggered by Covid-19 and the post-pandemic transition, there was an exponential increase in this type of knowledge transmission, whose benefits, in certain situations, outweigh the losses of in-person training, class or lecture 22 . For the DHH population, these benefits are not evident, and here we demonstrate different levels of possible harm. Faced with higher fatigue rates and lower performances, the DHH population might be at disadvantage in the several dimensions of academic challenges, leading to further inequalities and constraints that affect well-being and participation opportunities. With this research we hope to contribute to the discussion concerning the pros and cons of digital migration and shed new light that might help redesign more accessible and equitable methodologies and approaches, especially in the DHH educational field, ultimately supporting policymakers in redefining optimal learning strategies.
Data availability
All relevant data are within the manuscript and its Supporting Information files. | 5,832.4 | 2022-06-04T00:00:00.000 | [
"Computer Science"
] |
Trends in Susceptibility to Aggressive Periodontal Disease.
Aggregatibacter actinomycetemcomitans is a gram-negative microbe involved in periodontitis. Strains with varying degrees of virulence have been identified, in healthy and periodontally compromised individuals alike. Hosts mount differential immune responses to its various serotypes and virulence factors. Studies have explored host immune response in terms of antibody titers, leukocyte responses, and specific inflammatory mediators, questioning the ways in which the infectious microorganism survives. This mini-review will identify the key themes in immune response patterns of individuals both affected by and free from aggressive periodontal disease, thereby using it to understand various forms of periodontitis.
and Tannerella forsythia are three consensus periodontal pathogens implicated in periodontal diseases.
The PSD model of disease also attributes periodontitis to host defense mechanisms. Protective mechanisms achieve a homeostatic balance with the microenvironment to varying degrees in individuals. Innate and humoral immune mechanisms may be hyper-responsive to, or perhaps deficient towards, a particular stimulus, presumably for genetic reasons.
Current classifications of periodontitis
The different types of periodontitis are classified by neither the associated bacteria, nor the molecular basis of host susceptibility to different periodontal diseases due to limitations in understanding the disease process. Historical classifications focused on the time of onset and rate of progression of disease [2,3]. In 1999, the American Academy of Periodontology (AAP) developed the most recent classification of periodontal diseases to distinguish between Aggressive Periodontitis (AgP) and Chronic Periodontitis (CP). Aggressive Periodontitis (AgP) patients present with rapid destruction and bone loss accompanied by minimal inflammation at the time of diagnosis, often after irreversible damage has occurred. Chronic Periodontitis (CP), by contrast, is usually seen in patients above 35years of age, frequently with a buildup of dental plaque that is suggestive of poor oral hygiene. The body mounts a strong, pro-inflammatory response to the several pathogenic organisms present in sub gingival plaque. It progresses at a slow rate relative to AgP. Both are found in localized and generalized forms, based on the number of affected sites, but they vary in rate of progression [4]. Diagnoses of both infectious diseases can be made clinically, through measurement of probing depths, and radiographically, through analysis of bone levels.
Previous classifications of periodontitis
Classifications of periodontitis prior to 1999 included patient age as a parameter of diagnosis, distinguishing "early-onset" periodontitis from "adult" periodontitis. Early-onset periodontitis was subdivided into pre-pubertal, juvenile, and rapidly progressive forms. Prepubertal and juvenile forms appeared to be associated with specific bacteria; the aforementioned A. actinomycetemcomitans was the putative pathogen in Localized Aggressive Juvenile Periodontitis (LJP), a disease which affected adolescents by causing rapid, localized destruction at the central incisors and first molars (Table 1) [5].
There were several challenges to this classification. To illustrate by example, a young adult, 25 years of age, may exhibit classic symptoms of LJP: localized destruction limited to the central incisors and molars. One might believe that the destruction progressed aggressively based on the young age of the patient. However, without a history of onset of periodontitis, it is difficult to claim that the patient exhibits LJP, or, more accurately, a history thereof [4]. It may be discerned from such an example that the terms "early-onset" and "adult periodontitis" were arbitrary in delineating boundaries. To expand, consider an alternative instance of a 16-year-old patient displaying symptoms of Adult Periodontitis, classically associated with inflammation and poor oral hygiene, rather than the acute destruction associated with Early-Onset Periodontitis. It would seem inappropriate to diagnose a juvenile as having "adult" periodontitis. Thus, the benefit of the 1999 Classification is that the distinction between Aggressive/Chronic forms rather than Juvenile/Adult forms circumvents ambiguous variables: age boundaries and age of onset.
However, the new nomenclature decreases granularity in classifications of periodontitis, which is unfortunate given that certain forms of periodontitis may be better distinguished etiologically with higher levels of granularity. However, etiologically, this granularity may be important, especially if certain forms of periodontitis are associated with specific microorganisms. If A. actinomycetemcomitans is an etiologic agent of LJP, as literature suggests [6], then it may be ideal to classify LJP as a disease of its own and not subcategorize it under AgP. The new nomenclature thus closely but perhaps artificially groups LJP/LAP with different forms of periodontitis such as GAgP. One may argue that the generalized form of AgP is due to a LAP A. actinomycetemcomitans infection having spread, or a certain genetic defect in responding appropriately to A. actinomycetemcomitans. Alternatively, one may argue that GAgP is a disease that is etiologically unrelated to LAP. Thus, in terms of understanding the etiology of periodontal diseases, it is debatable as to which classification is most accurate and helpful. The focus of this review is differential immune response susceptibility in the host that could facilitate microbial infection, particularly in response to A. actinomycetemcomitans as an etiologic agent of AgP, that may help clarify classifications of periodontitis.
Origin/Epidemiology of the JP2 infection in LAP
The JP2 strain is a subset of serotype b of A. actinomycetemcomitans, and it contains strong associations with LAP [7]. Those with the JP2 strain of A. actinomycetemcomitans have a high relative risk for developing aggressive periodontitis [8]. The virulent subset is thought to have emerged approximately 2,000 years ago in North Africa [9]. Interestingly, the JP2 clone in particular does not appear to have spread to non-African populations despite the widespread migration of the individuals of African origin [9]. It is not known whether there is some host susceptibility that renders certain populations vulnerable to the disease. The limited spread of the LAP facilitates its use as a model for understanding periodontal disease.
Highly leukotoxic clones
A 530-base pair (bp) deletion in the JP2 genome results in secretion of large amounts of leukotoxin (LtxA) which likely contributes to increased virulence [10]; JP2 clones are referred to as "highly leukotoxic" [11]. The toxin induces apoptosis in mononuclear leukocytes (MNLs), but the molecular mechanism of cytotoxicity has yet to be elucidated. Secretion of LtxA into the GCF may allow A. actinomycetemcomitans to lyse lymphocytes in the microenvironment and effectively delay an immune response to the oral biofilm. Thus, the microbe would have ample time to proliferate within the host [12]. Like the rare nature of LAP, the 530-bp is useful as a marker for tracing the disease and understanding one particular mechanism of periodontitis.
State of Affairs Humoral immune response susceptibility in periodontitis
It is not clear whether humoral immune responses to A. actinomycetemcomitans terminate the spread of infection or, alternatively, lead to a hyper-responsiveness. Furthermore, failure to mount a humoral response may indicate genetic susceptibility which facilitates microbial infection [13]. Development of a specific response may reveal information about the timing and mechanism of A. actinomycetemcomitans associated tissue destruction, ultimately highlighting when A. actinomycetemcomitans infection is most preventable. However, there is inconsistent correlation of the immune response with clinical presentation of symptoms [14,15].
The humoral response to A. actinomycetemcomitans appears to be protective, rather than a hyper-inflammatory means of periodontal destruction. Vlachojannis et al. 2010 [16] find that that development of IgG antibodies to various A. actinomycetemcomitans serotypes is associated with clinical diagnosis of periodontal status. The National Health and Nutrition Examination Survey (NHANES) were analyzed for serum antibodies to various periodontal pathogens. Edentulous patients generally displayed lower antibody titers to pathogenic organisms such as A. actinomycetemcomitans and other "red complex" species such as P. gingivalis. Presumably, the failure to mount a humoral response is associated with clinical presentation of symptoms. It should be noted that although the Vlachojannis study is specific to A. actinomycetemcomitans, it distinguishes neither between age cohorts nor between AgP and CP.
A study by Casarin et al. 2010 [17] finds that adult GAgP patients display lower IgG levels towards A. actinomycetemcomitans and P. gingivalisin comparison with adult patients with GCP. Generalized forms of periodontitis that are rapidly progressive and aggressive may be etiologically related to forms of periodontitis that involve failure to mount an immune response; Thus Casarin's findings echo those of the NHANES study, which suggested that humoral immune response susceptibility to multiple organisms existed in individuals with generalized destruction.
In addition, a LAP-specific study from Mette Rylev in 2011 [18] showed that individuals with JP2 infections would uniquely react to certain A. actinomycetemcomitans antigens. The study found a uniquely strong humoral response to LtxA in a Moroccan cohort [8].Thus, LAP as a model for periodontal disease may serve as a useful study tool for understanding other types of periodontitis, given that all diseases potentially involve a weak humoral immune response.
Innate immune response susceptibility in periodontitis
Innate immune mechanisms also differ between patients with periodontitis and periodontally healthy patients [19]. In an LAP-specific study, Fine et al. 2013 [20] finds low levels of antimicrobial Lactoferrin-iron (Lf-iron), minimal A. actinomycetemcomitans agglutinating activity, and high killing activity against gram-positive bacteria, potentially reducing competition for A. actinomycetemcomitans. Fine et al. [20] posits that lower levels of Lfiron facilitate A. actinomycetemcomitans colonization [21]. A lack of functional IgA, an agglutinating agent, may fail to facilitate A. actinomycetemcomitans aggregation and subsequent elimination.
The innate immune system also involves Intercellular Adhesion Molecule-1 (ICAM-1). This pro-inflammatory cell surface molecule is involved in extravasation of lymphocytes, osteoclast formation, and interactions between leukocytes. It is the receptor for Lymphocyte function-associated antigen-1 (LFA-1, CD11a/CD18b), an integrin with which LtxA interacts [12,22]. Umeda et al. [23] find that AgP and CP patients up regulate intercellular adhesion molecule-1 (ICAM-1) and granulocyte macrophage-colony stimulating factor (GM-CSF) in comparison to healthy controls ( Figure 1A). In vitro studies with epithelial cell lines showed up regulation of pro-inflammatory genes such as ICAM-1 in response to A. actinomycetemcomitans. Other oral pathogens are not reported to have such effects [24].
Umeda further highlights that pathways downstream of GM-CSF and ICAM-1 expression involves cytokines such as Receptor activator of nuclear factor kappa-B (RANK), which are involved in osteoclastogenesis ( Figure 1A). Osteoclasts, which resorb bone, are key in periodontal bone loss [22,[25][26][27]. Although levels of the cytokines may be elevated due to other organisms; the data suggests there may be an over expression of pro-inflammatory mediators in both AgP and CP, suggesting a mechanistic link between the two forms of periodontitis. While studies from Fine et al. [20] suggest a protective role for the innate immune system in periodontitis, studies from Umeda et al. [23] suggest that the innate immune system is hyper-responsive in periodontitis [20,23]. Studies related to LAP and A. actinomycetemcomitans have not clarified the role of the innate immune system in periodontitis.
Candidates for immune response susceptibility
Individuals affected by LAP have immune responses that differ from those in healthy individuals [28,29]. In vitro studies serve as platforms for exploring what the specific differences may be. Kelk et al. [30] found that LtxA resulted inIL-1B secretion, IL-18secretion, and cell death in macrophages ( Figure 1B) [30,31]. In addition, incubation with A. actinomycetemcomitans leads to an increase in NLRP3 inflammasome gene expression, a complex associated with pathogen-associated molecular patterns (PAMPs), and inflammatory pathways.
The significance of IL-1B extends to osteoblasts as well. These bone-forming cells deposit the mineralized matrix that is degraded in individuals with periodontal bone recession. Zhao et al. [32] found that a human osteosarcoma cell line responded to A. actinomycetemcomitans incubation in a manner that led to cell death. Increased transcription and translation of NLRP3 and associated adaptor molecules was observed, eventually resulting in cell death and IL-1B secretion ( Figure 1C) [33,34]. A. actinomycetemcomitansinduced osteoblast cell death may increase the severity of periodontitis.
Paino and colleagues conducted several studies to explore the consequences of high IL-1B levels within the GCF. Data suggests that A. actinomycetemcomitans contains an IL-1B receptor that localizes to proteins involved in gene expression ( Figure 1D) [35,36].Candidates for affected genes include adhesins and biofilm-forming proteins which enhance A. actinomycetemcomitans virulence [37].
Thus, if inflammasomes and IL-1B are differentially expressed in periodontitis, there may be evidence for a hyper-responsive trait that leaves individuals prone to not only LAP but also CP and GAgP. Given that LAP is associated with minimal inflammation; the proinflammatory pathways may be misregulated in a unique, highly localized fashion that may help to clarify how some of these markers are involved in disease.
Conclusions and Future Directions
Both differential microbial virulence and differential host susceptibilities contribute what is ideally a commensal relationship in periodontal health [1]. Microbes may possess competitive advantages that allow them to colonize the oral cavity with ease. Alternatively, a host defect may lead to the same end result. Such colonization may soon be followed by a hyper-inflammatory reaction in the host or perhaps a failure to respond. There are gaps in the literature regarding the factors that modulate host-pathogen relationship in periodontitis, making nomenclature and categorizations of a periodontal disease a challenge. Different classifications of periodontitis may exhibit patterns in the process of periodontal destruction, and may not be highly pathologically distinct. The nomenclature that recognizes pathological or mechanistic differences in the types of periodontitis may be doing so artificially. Conversely, forms of periodontitis may be unique in their pathology, in which case grouping together different infections risks misdiagnosis of disease.
In order to attempt to understand the progression of periodontal disease and improve diagnoses, the studies highlighted in this review provide evidence for patterns related to A. actinomycetemcomitans as an etiological agent of LAP. While studies of the humoral immune response generally suggest that there exist a defect in protective mechanisms in periodontitis, studies of the innate immune response do not clearly demonstrate whether it is protective or hyper-responsive. Further studies of IL-1B pathways, inflammasome pathways, and salivary antimicrobial molecules may lead to the identification of a deficiency common to several types of periodontitis.
Understanding the nature of host susceptibility through studies of A. actinomycetemcomitans in LAP will clarify distinctions or similarities between different classifications of periodontitis, further delineating conditions in which microbial colonization is particularly relevant. Future studies may consider exploring how expression of such presumed susceptibilities change over time, perhaps once a disease is terminated. If patterns exist in that regard, then perhaps there exists yet another way to understand the nature of periodontal diseases. | 3,271.8 | 2016-04-25T00:00:00.000 | [
"Biology",
"Medicine"
] |
Medications used in dementia: a review of evidence
Dementia is an acquired global impairment of intellect, memory and personality, but without impairment of consciousness. It is usually progressive in nature. The pattern of cognitive impairment depends on the type and severity of dementia. Impairments of cognitive function are commonly accompanied, and occasionally preceded by, deterioration in emotional control, social behaviour, or motivation resulting in significant impairment in activities of daily living (1). The noncognitive symptoms associated with dementia such as mood, psychotic and sleep-wake cycle disturbances – i.e., the behavioural and psychological symptoms of dementia (BPSD) – and are seen in about 50-80% of patients (2).
Introduction
Dementia is an acquired global impairment of intellect, memory and personality, but without impairment of consciousness. It is usually progressive in nature. The pattern of cognitive impairment depends on the type and severity of dementia. Impairments of cognitive function are commonly accompanied, and occasionally preceded by, deterioration in emotional control, social behaviour, or motivation resulting in significant impairment in activities of daily living (1). The noncognitive symptoms associated with dementia such as mood, psychotic and sleep-wake cycle disturbancesi.e., the behavioural and psychological symptoms of dementia (BPSD) -and are seen in about 50-80% of patients (2). Alzheimer's disease (AD) is the commonest type of dementia, accounting for about 50-60% of all dementias. The prevalence of AD is 1 to 2% at the age of 65, but the prevalence doubles every 5 years after that (4). Memory impairment is prominent although it is not the only cognitive domain that is affected (5). Vascular dementia (VaD) accounts for about 20-25% of all dementias. The management options include controlling of cerebrovascular and metabolic risk factors (5,6). The clinical features of dementia with Lewy bodies (DLB) and Parkinson's disease with dementia (PDD) are similar. The diagnosis of PDD rests on the occurrence of dementia in a person formally diagnosed with Parkinson's disease at least 12-months previously. DLB accounts for 15-20% of cases of dementia. Characteristic symptoms are deterioration of attention and visuospatial abilities, fluctuations in cognition and attention, well-formed visual hallucinations with motor features of Parkinsonism (5).
The management of dementia is mainly two faceted; pharmacological and non-pharmacological. Both these approaches are utilised for the management of the two main symptom domains; cognitive and non-cognitive. Although not strictly disease modifying, cognitive enhancers are used to manage the cognitive symptoms while antipsychotics, antidepressants, benzodiazepines and mood stabilizers are used to treat non-cognitive symptoms.
Methods
We searched ALOIS (a comprehensive, open-access register of dementia studies), Cochrane database, PubMed, Scopus and Google scholar using key words: dementia, cognitive enhancers, cholinesterase inhibitors and memantine. Our review is based on review articles indexed in these databases and the RCTs included in the reviews. We did not record the number of articles or assess the quality of the articles as this is not a systematic review. Relevance of the articles were determined by careful scrutiny of the contents by all three authors.
Cognitive enhancers
The principal class of medication used for the treatment of dementia is cognitive enhancers, which includes cholinesterase (AChE) inhibitors, such as donepezil, rivastigmine, galantamine, and N-Methyl-D-aspartate (NMDA) antagonists, such as memantine.
Acetylcholinesterase (AChE) inhibitors
The cholinergic hypothesis of AD attributes cognitive deterioration to progressive loss of cholinergic neurons and decreasing levels of acetylcholine (ACh) in the brain (7). The three AChE inhibitors used in mild to moderate AD are donepezil, rivastigmine and galantamine, of which donepezil and rivastigmine are available in Sri Lanka.
In Alzheimer's disease a meta-analysis of 13 RCTs found that treatment for over 6 months produced improvements in cognitive function, of on average -2.7 points (95% CI -3.0 to -2.3) on the ADAS-cog scale. Most trials were on patients with mild-moderate dementia. Benefits were also seen on measures of Activities of Daily Living (ADL) and behaviour, with small effect sizes (5). The metaanalyses reported no significant differences in efficacy between different AChE (8-11), a finding that may have been influenced by the small effect sizes.
Donepezil
A 12-week RCT of 468 patients reported that donepezil use was associated with statistically significant improvements compared to placebo. The mean drugplacebo differences, at end point for the groups receiving 5 mg/d and 10 mg/d of donepezil hydrochloride were, respectively, 2.5 and 3.1 units for the ADAS-cog (11,12). Results from a Cochrane review suggest that donepezil results in statistically significant improvements for both 5 and 10 mg/day at 24 weeks compared with placebo on the ADAS-cog, with a 2.01 point and a 2.80 point reduction, respectively (9). A long-term placebo controlled trial of donepezil in 565 patients with mild-tomoderate AD found a small but significant benefit on cognition compared with placebo. This was reflected in a 0.8 point difference in the MMSE score (95% CI 0.5-1.2; P<0.0001) which was replicated in other similar trials (14,43).
Rivastigmine
Studies for rivastigmine suggest an advantage of 2.6-4.9 points on the ADAS-cog over placebo (13). A Cochrane review found that, high-dose rivastigmine (6-12 mg daily) was associated with a 2-point improvement in cognitive function on the ADAS-cog and a 2.2-point improvement in ADL at 26 weeks over placebo. At lower doses (4 mg daily or lower), the differences were statistically significant for cognitive functions only (14). According to a 6-month, double-blind, placebo controlled RCT, Rivastigmine transdermal patch (9.5 mg/24 h) is as effective as the highest doses of oral formulations (15).
Galantamine
A 5-month placebo controlled study of 978 patients found that the galantamine-placebo differences on ADAS-cog were 3.3 points for the 16 mg/day group and 3.6 points for the 24 mg/day group (p < 0.001 versus placebo, both doses) (16). A Cochrane review of ten trials found treatment with galantamine was associated with a significantly greater proportion of subjects with improved or unchanged global rating scale rating (k = 8 studies), at all dosing levels except for 8 mg/d (17). Galantamine is marginally effective in patients with severe AD with MMSE scores of 5-12 points (18,32).
Memantine
Memantine is a moderate-affinity, uncompetitive, voltage-dependent NMDA receptor antagonist. A number-needed-to-treat (NNT) analysis of memantine Medications used in dementia: a review of evidence showed a NNT of 3-8 (19). It is indicated for the treatment of moderate to severe AD, and has shown significant efficacy in improving symptoms in several large-scale, controlled clinical studies (20)(21)(22)(23). Memantine may be effective in delaying worsening of clinical symptoms, and decreasing the emergence of BPSD (24)(25). A metaanalysis done in 2007 of six individual phase III studies using a subgroup of patients with moderate to severe AD showed that the drug resulted in a statistically significant benefit in four domains: namely cognitive, functional, global, and behavioural domains. The metaanalysis also highlighted a significant improvement in ADL with memantine, compared to placebo (22). A metaanalysis in 2011 found no significant differences between memantine and placebo on any outcome for patients with mild AD, either on individual trial or when data were combined (ADAS-cog 0.17; P = 0.82) (25). Memantine appears to be well tolerated (27). However, caution is required in hepatic impairment and seizures. The most frequently reported adverse effects in placebo-controlled trials included agitation (7.5% memantine, 12% placebo), falls (6.8% versus 7.1%) and dizziness (6.3% versus 5.7%) (28).
Combination treatment
Of the combinations, AChE inhibitor and memantine combination is the best tolerated, although there is no clear evidence of superior efficacy (21,29). However, there is some evidence that combining memantine with AChE inhibitors may slow cognitive and functional decline compared with mono-therapy or no treatment in the long term (29).
Statins in dementia
Four RCT have assessed the efficacy of statins in Alzheimer's or probable Alzheimer's dementia. Most patients were already on AChE. Pooled data showed no significant benefit from statin as measured by the ADAS-Cog (mean difference -0.26, 95% confidence interval (CI) -1.05 to 0.52, p=0.51) (48).
Vascular dementia
Three trials with a total of 800 participants have assessed the use of rivastigmine in vascular dementia (49). The largest included 710 participants with vascular dementia (VaD), including those with subcortical and cortical forms of the disorder. Statistically significant improvement in cognitive response was seen with rivastigmine treatment at 24 weeks, but there was no global impression of change, and no improvement of non-cognitive measures. Two other trials, with 1378 participants, have reported statistically significant treatment effects in favour of galantamine compared with placebo in cognition, activities of daily living and behaviour (50). There is evidence for donepezil, rivastigmine and galantamine use in VaD (29)(30)(31). There is also evidence for the use of memantine in VaD (6).
Parkinson's disease with dementia and dementia with Lewy bodies
RCTs have assessed the use of cholinesterase inhibitors in both Parkinson's disease with dementia (PDD) and dementia with Lewy bodies (DLB) (33). Three trials have reported cholinesterase inhibitor treatment to be superior to placebo in PDD, as measured by the Clinical Global Impression of Change (CGIC) score of -0.38, favouring the cholinesterase inhibitors (95% CI -0.56 to -0.24, P < 0.0001). There was no statistically significant difference in the MMSE between the control and treatment groups for patients with DLB (33). Although there are some concerns about worsening or adverse responses when patients with DLB are exposed to memantine, a recent RCT found it to be mildly beneficial in terms of global clinical status and behavioural symptoms in patients with DLB (26).
Treatment of behavioural and psychological symptoms of dementia
Second generation antipsychotics (SGA) are as effective as first generation antipsychotics (FGA) for behavioural and psychological symptoms of dementia (BPSD) (34,40,45,46). Reviews and trials support the efficacy of olanzapine, risperidone, quetiapine, aripiprazole and amisulpiride with no significant differences between treatment groups (34)(35)(36).
Benzodiazepines
Although widely used, benzodiazepines are not recommended as it is associated with cognitive decline and falls (37).
Antidepressants
Evidence suggests that depression, which is present in 30-50%, can be both a cause and consequence of AD. Reviews indicated that antidepressants (mainly SSRIs) not only showed efficacy in treating BPSD, but were also well tolerated (47,51). Two out of five studies of sertraline versus placebo and one study of sertraline versus haloperidol have shown benefit. Five studies have shown that citalopram is of more benefit compared to risperidone.
Mood stabilisers/anticonvulsants
RCTs of mood stabilisers in BPSD are only available for carbamazepine and valproate, although gabapentin, lamotrigine and topiramate have also been used (38). A literature review of anticonvulsants in BPSD found that although there are benefits, the evidence is insufficient to support routine use (39).
Discussion
This review examined the evidence regarding the use of medication in the treatment of dementia. The symptoms of dementia can be classified into two broad groupsnamely cognitive and non-cognitive. The review examined the effects of medication on both groups of symptoms. The interpretation of results in trials is difficult due to the factors involving the disease process itself as well as other confounders. Most trials were conducted on patients with mild to moderate dementia with MMSE scores of 10-26, which introduces a bias.
The main question clinicians need answered is 'will medication halt or reverse the process of cognitive decline, and will this result in improved quality of life?' The average annual rate of decline in untreated patients ranges from 6 to 12 points on the ADAS-cog. A 4-point change in the ADAS-cog score is considered clinically meaningful (42).
Patients are classified in to three groups depending on the response: 'non-responders', who continue to decline at the anticipated rate; 'non-decliners', who neither improve significantly nor decline; and 'improvers', who improve to a clinically relevant extent. Data from trials of 6-months duration indicate that of those with AD being treated with AChE inhibitors, 24-34% will be 'improvers', compared to only 16% who will be 'improvers' on placebo. Around 55-70% will be 'non-decliners' (40).
It was previously thought that treatment does not improve cognitive functions, merely halts cognitive decline. However recent RCTs indicate that cognitive functions in nearly one third of patients improve with treatment, and more than half are non decliners. Based on this evidence we recommend starting AChE in all patients diagnosed with mild-moderate AD. The duration of most trials is 24 weeks, although there is some evidence of benefit from longer trials. The data also shows that there is little to choose between the differences of AChE in terms of efficacy. So the choice is dependent on ease of dosing, tolerability, cost and availability (42,44).
The patient's cognitive and functional status should be monitored over 6-month intervals, and pharmacologic therapy should ideally be continued until there are no meaningful social interactions and quality of life has irreversibly deteriorated.
With AChE, side effects such as nausea, vomiting, dizziness, insomnia and diarrhoea are due to excess cholinergic stimulation, which most likely occurs at the start of therapy or when the dose is increased. Donepezil appears to a better side effect profile according to clinical trials. However any significant difference between AChE inhibitors is yet to be identified. Gastrointestinal effects appeared to be more common with oral rivastigmine than with other AChE inhibitors, thus requiring slower titration. The cardiac adverse effects should be borne in mind when prescribing AChE inhibitors due to vagotonic effects.
There are other issues that clinicians need to consider; for instance, the treatment response with AChE inhibitors is lost when the medications are interrupted and may not be fully regained when it is reinitiated (42). And failure to benefit from one AChE inhibitor does not necessarily mean that a patient will not respond to another and poor tolerability to one agent does not rule out good tolerability to another (28).
Memantine is a NMDA receptor antagonist, which can be used in Sri Lanka under a personal license. Unlike AChE it is effective in moderate-severe AD. It is effective in treating cognitive decline and behavioural problems and may be better tolerated than AChE by some patients.
Most of the trials have been conducted in patients with Alzheimer's dementia. It is important to know if these medications are effective in other types of dementia too. Vascular dementia is the second most common type of dementia after Alzheimer's disease. In older patients in particular, the combination of vascular dementia and Alzheimer's disease is common, and is referred to as mixed dementia. Evidence supports the use of AChE in vascular dementia (49,50).
The currently available evidence supports the use of cholinesterase inhibitors in patients with PDD, with a positive impact on global assessment, cognitive function, behavioural disturbance and activities of daily living rating scales. The effect in DLB remains unclear (33).
Behavioral and psychological symptoms of dementia include agitation, depression, apathy, repetitive questioning, psychosis, aggression, sleep problems, wandering, and a variety of inappropriate behaviors (41). As dementia progresses treatment of behavioural and psychological symptoms become more difficult. Firstgeneration antipsychotics (FGAs) which have been used for a long time for BPSD are being replaced by the secondgeneration antipsychotics (SGA), as they are better tolerated due to the absence of extra pyramidal side effects (45). However, their use is limited by controversial issues, such as small effect sizes, poor tolerability and possible association with increased mortality (45,46). The reports of increase of stroke have led to a black box warning for the use of SGAs for BPSD. Therefore antipsychotics should be used with caution and for a limited period, where other methods such as behavioural modification have failed.
Recent data supports the efficacy of SSRI in treating depressive symptoms in dementia. Findings also suggest that in treatment of AD patients with cholinesterase inhibitors and SSRIs may offer some degree of protection against the adverse effects of depression on cognition (47). Tricyclic antidepressants are not recommended due to the poor side effects profile.
Conclusion
The currently available evidence supports the use of cholinesterase inhibitors and memantine in patients with Alzheimer's dementia, vascular dementia and PDD, with a positive impact on global assessment, cognitive function, behavioural disturbance and activities of daily living rating scales. Antipsychotics should be used with caution, when needed and when other treatment measures have failed. Antidepressants have shown to be useful in those patients with dementia who also have features of depression.
Declaration of interest
None declared | 3,721.2 | 2015-12-15T00:00:00.000 | [
"Psychology",
"Medicine",
"Biology"
] |
Modulating the expression of tumor suppressor genes using activating oligonucleotide technologies as a therapeutic approach in cancer
Tumor suppressor genes (TSGs) are frequently downregulated in cancer, leading to dysregulation of the pathways that they control. The continuum model of tumor suppression suggests that even subtle changes in TSG expression, for example, driven by epigenetic modifications or copy number alterations, can lead to a loss of gene function and a phenotypic effect. This approach to exploring tumor suppression provides opportunities for alternative therapies that may be able to restore TSG expression toward normal levels, such as oligonucleotide therapies. Oligonucleotide therapies involve the administration of exogenous nucleic acids to modulate the expression of specific endogenous genes. This review focuses on two types of activating oligonucleotide therapies, small-activating RNAs and synthetic mRNAs, as novel methods to increase the expression of TSGs in cancer.
INTRODUCTION
What are tumor suppressor genes?
Tumor suppressor genes (TSGs) are a category of genes that serve to keep cell growth tightly regulated, such that a cell will only divide when absolutely necessary and in response to the appropriate external signals, such as growth factors. In addition to controlling proliferation, TSGs are also involved in preventing cells from migrating to, and invading, other tissues, as well as stimulating cells to undergo apoptosis when they encounter a cellular stress, such as DNA damage. If the latter is left unchecked, this could result in the introduction of mutations and dysregulation of the cell cycle. When TSG function is lost, this can result in critical cellular processes becoming dysregulated and cells may proliferate uncontrollably, fail to initiate apoptosis in response to damage, or start to invade through the basement membrane and metastasize to a different part of the body. 1 TSGs are, therefore, an important group of genes in the context of cancer pathogenesis and therapy.
Although TSGs are a vast group of genes, they can be further categorized according to their function and the pathways which they control, see below and Table 1. TSGs have traditionally been labeled as being homozygous recessive in terms of their role in promoting carcinogenesis, meaning that both alleles of the TSG have to exhibit loss of function to result in loss of protein activity and promotion of cell proliferation and tumorigenesis. The "two-hit hypothesis" was first defined during the analysis of retinoblastoma (RB), a cancer of the eye in children. 38 Knudson 38 reported that familial RB was dominantly inherited. Indeed, it was shown that children inherit a mutation in the RB1 gene, which encodes the RB protein (although the actual disease-causing gene was not identified at the time), from one parent, which predisposes them to developing RB. A secondary mutation in the other RB1 allele is acquired somatically as the eye develops, meaning that both alleles of the gene are mutated, leading to a loss of function. This means that there is no functional RB protein in the retinoblast cell; therefore, it cannot exert its function as a repressor of cell cycle progression. The deficient cells are, therefore, able to progress through the cell cycle, even in the absence of the appropriate growth factor signal. This results in the initiation of tumorigenesis.
However, many sporadic cancers (i.e., cancers where there are no inherited pre-disposing gene mutations) exhibit a loss of function in a single allele of a TSG, whereas the other allele seems to be normal. These instances led to the development of the concept of haploinsufficiency, which suggests that the loss of a single allele may be sufficient for a TSG to have decreased function and, hence, play a role in the development of a tumor. 39 The continuum model of tumor suppression takes this concept a step further, and suggests that even subtle changes in the expression of a TSG can impact its function and tumor-suppressive activity. 39 This model takes into consideration the fact that genes can be regulated in ways other than by mutation or allele loss, such as by epigenetic modifications, microRNAs and post-translational modifications. These changes can result in altered expression of the gene, but the gene itself is not mutated. This means that the transcription and translation of the gene produces a normal, functional protein, but the levels of the protein are lower in the tumor compared with normal, healthy tissue. The continuum model of tumor suppression, therefore, implies that even a slight upregulation in a TSG may Review recover some of its tumor-suppressive function, and hence stunt the growth, invasiveness, or malignancy of a tumor, depending on the molecular pathways that the TSG acts on.
An example of a TSG that does not follow Knudson's two-hit hypothesis of tumor suppression is phosphatase and tensin homolog on chromosome 10 (PTEN). Studies have shown that subtle changes in the expression level of PTEN can result in the loss of its function and an increase in carcinogenesis in certain tissues. For example, a 20% decrease in the normal level of PTEN expression is sufficient to cause cancer in the breast, 39,40 but is not sufficiently low enough to cause carcinogenesis in the liver, small intestine, pancreas, adrenal glands, and prostate. 40 Furthermore, haploinsufficiency of PTEN is sufficient to accelerate prostate tumor progression in mice. 41
TSGs in the clinic
It is now well recognized that cancer is a genetic disease, with the two major categories of genes involved in the initiation of cancer being oncogenes and TSGs. There are many drugs that have been developed that target overactive oncogenes, such as kinase inhibitors like sorafenib in hepatocellular carcinoma, and imatinib in chronic myeloid leukemia. 42,43 However, small molecule drugs targeting underactive or mutated TSGs have been somewhat lacking.
There are some examples where this has been the case, such as the identification of thioemicarbazone family molecules in targeting mutant p53, allowing for zinc chelation and the restoration of the DNA-binding properties of the important transcription factor. 44 Furthermore, compounds such as PhiKan083 allow for the restoration of normal function in p53 mutant cells harboring a Y220C or Y220S mutation by stabilizing the protein structure and preventing denaturation. 45,46 However, no drugs directly targeting p53 have yet to make it through clinical trials to approval, indicating the difficulty in targeting TSGs, even after years of extensive research into the gene and protein's structure and function.
A p53 activator that has entered clinical trials is APR-246, a quinuclidinone derivative that is able to rescue the ability of mutant p53 to interact with DNA by interacting with the cysteine residues within the protein to restore the wild-type conformation. 47 APR-246 has completed phase I and II trials for multiple different cancer types and in combination with other anti-cancer therapies. [48][49][50] The firstin-human trial showed that the drug does have some promising anti-tumour activity, indicated by increased apoptosis of circulating malignant cells in selected patients, regardless of TP53 mutation status. 48 However, the active form of the drug, methylene quinuclidinone, to which APR-246 is converted once inside the body, is rapidly degraded under physiological conditions, which limits the effectiveness of APR-246 as a monotherapy. 51 Another approach that has been used to circumvent underactive TSGs is the targeting of the downstream consequences of such downregulation. For example, the downregulation or loss of PTEN leads to overactivity of the AKT pathway, which provides an ideal target for small molecule drugs via inhibition of the kinase AKT. 52 However, oligonucleotide therapies offer an advantage as they allow for the root of the issue (i.e., the underactive TSG) to be targeted directly.
In light of the critical role of TSG modulation in various forms of cancer, there is much interest in the potential to target specific TSGs as a novel therapeutic approach in patients. This review explores the potential of using oligonucleotide therapies to restore the function of TSGs in cancer (Figure 1), and the benefits and challenges of such an approach.
What are oligonucleotide therapies?
Oligonucleotide therapies include inhibitory antisense oligonucleotides (ASOs) and short interfering RNAs (siRNAs), along with stimulatory small-activating RNAs (saRNAs) and synthetic, nucleoside-modified mRNAs. Another example of an activating oligonucleotide is plasmid DNA, which is an approach used in gene therapy. Plasmid DNA can be delivered to a host cell via both viral and non-viral methods, with viral-based delivery systems accounting for the majority of gene therapy approaches in clinical trials. 53 However, such methods of delivery are associated with immunogenicity and the risk of insertional mutagenesis 54,55 and are not discussed further in the current review. Broadly, oligonucleotide therapies involve the administration of exogenous nucleic acids to a cell to modulate expression of specific target genes. Oligonucleotides can interact with complementary sequences on RNA or DNA, depending on their mechanism of action, and they offer an advantage over small molecule drugs, which tend to work on protein targets and can lack specificity. Oligonucleotide therapies offer a more specific approach to modulating gene expression, which can minimize off-target effects and toxicity. Given that the exact sequence of the oligonucleotide is known, and the fact that these therapies interact with their target via complementary Watson-Crick base pairing, any off-target interactions can be predicted bioinformatically, and adverse effects that these interactions may cause can be forecasted and potentially circumvented.
An increased understanding of the roles of genes and other oligonucleotides in disease is leading to an expansion of opportunities for developing oligonucleotide therapies directed to previously undruggable targets. A particular focus of oligonucleotide therapy is oncology, with approximately one-quarter of oligonucleotide therapy candidates being developed to modulate targets in cancer. 56 Given that TSGs are commonly downregulated in cancer, and small molecules have traditionally been used to inhibit, rather than stimulate, a target, there is the potential to restore TSG expression and function using stimulatory oligonucleotides, such as saRNAs and synthetic nucleoside-modified mRNAs. The remainder of this review describes these forms of oligonucleotide therapy and summarizes the current evidence that they can be used to modulate TSG expression in the context of cancer.
DIFFERENT APPROACHES TO RESTORING TUMOUR SUPPRESSOR FUNCTION saRNAs saRNAs are an emerging sub-class of oligonucleotides that have the potential to stimulate the transcription, and hence increase the expression, of a target gene. Unlike siRNAs, which work by interacting with mRNA to block its translation or mediate its degradation, saRNAs instead mediate transcriptional activation by binding to the complementary sequence in or near the promoter region of the target gene. 57 saRNAs are short, 21-nucleotide, double-stranded oligomers that are typically delivered to the target organ encapsulated in a lipid nanoparticle. There are many research groups interested in using saRNAs to upregulate TSGs which, as discussed above, are commonly downregulated in various forms of cancer. To date, there is a single saRNA candidate that has progressed into clinical trials, targeting the CEBPA gene in hepatocellular carcinoma (HCC). However, there are many other TSGs being explored as potential saRNA targets preclinically ( Table 1).
Mechanism of action
Once an saRNA has entered the cell, the guide strand of the doublestranded molecule is loaded into argonaut protein 2 (Ago2). 2 The guide strand of the saRNA duplex is determined by the thermodynamic stability of the 5 0 end of the molecule and is usually the antisense strand of the complex. 58-60 Chu et al. 58 (2010) showed that siRNA repression of Ago2 expression significantly decreased saRNA activity, but did not diminish it completely, suggesting there may be a secondary, currently unknown mechanism that also facilitates saRNA delivery to the nucleus.
saRNA-loaded Ago2 protein is shuttled to the nucleus via importin-8, 61 and the saRNA guide strand interacts with the complementary sequence in the promoter region of the target gene. This interaction facilitates the recruitment of other proteins required for transcriptional activation, including proteins involved in RNA splicing and binding such as heterogeneous nuclear ribonucleoproteins. 60 Others include RNA helicase A (RHA) and RNA polymerase II complex component 9 (CTR9), both of which are known transcriptional activators. RHA is a helicase that catalyzes the unwinding of double-stranded DNA and acts as a bridging factor linking b-actin and RNA polymerase II to form part of the pre-initiation complex. 62 CTR9 forms part of the polymerase-associated factor 1 complex (PAF1C), which can interact with histone-modifying enzymes and RNA polymerase II directly to activate gene transcription. 60 PAF1C can induce mono-ubiquitination of histone 2B and recruit methyltransferase enzymes to methylate histone 3 lysine 4 (H3K4). H3K4 methylation results in the DNA becoming less condensed around the histone nucleosome, thus making the promoter more accessible for the transcriptional machinery and leading to an increase in gene expression ( Figure 2). 3 Targeting TSGs using saRNAs saRNAs were discovered by two groups in tandem. Janowski et al. 63 (2007) discovered them during the development of anti-gene oligonucleotides (agRNAs) designed to downregulate the expression of the progesterone receptor (PR) in breast cancer. Unexpectedly, the group found that some of their synthesized oligonucleotides actually increased the expression of PR by up to 2-fold in T47D and MCF7 cells. The authors also showed that selected agRNAs targeting major vault protein could upregulate its protein expression by up to 4-fold. 63 At a similar time, Li et al. 2 (2006) successfully developed saRNAs targeting CDH1, CDKN1A, and VEGF in prostate cancer cell lines. Since their initial discovery, there has been a number of groups interested in using saRNAs to target and upregulate genes in specific diseases, in particular, targeting TSGs in cancer. TSGs that function in different capacities to avert cancer, such as by inducing cell-cycle arrest, preventing pro-proliferative signaling, maintaining cell morphology, and inducing apoptosis have been targeted using saRNAs, and examples from each classification are discussed below. Further examples can be found in Table 1.
Cell-cycle arrest: CDKN1A
A TSG that is commonly downregulated in cancer is cyclin-dependent kinase inhibitor 1A (CDKN1A), which encodes p21. p21 is involved in controlling the cell cycle and is able to induce cell-cycle arrest by preventing cyclin-dependent kinase 4 and cyclin D1 from forming an active kinase complex. This prevents the RB protein from becoming phosphorylated and, thus, prevents transcription factor E2F from being active and driving the transcription of genes involved in the S-phase of the cell cycle, such as DNA polymerase and cyclin A. 4 p21 can also induce apoptosis by suppressing the pro-apoptotic protein B-cell lymphoma-extra-large and activating caspase-3 and poly-ADP ribose polymerase. 4 Therefore, because of the anti-proliferative and pro-apoptotic roles of p21, there has been interest in targeting this protein in oncology.
Since the initial discovery of CDKN1A saRNAs by Li et al. 2 as a decrease in migrative and invasive capabilities in prostate cancer cell lines. 8 Inhibition of pro-proliferative signaling: CEBPA The first saRNA candidate to enter human clinical trials targets the CEBPA gene, which encodes CCAAT/enhancer-binding protein alpha (C/EBP-a), one of six members of the C/EBP transcription factor family. C/EBP-a upregulates expression of albumin (ALB), adipose triglyceride lipase (ATGL), colony stimulating factor 3 receptor (CSF3R), CDKN1A, and glycogen synthase 1 (GYS1), among other genes. 57 These targets encode proteins that are involved in preventing the over-proliferation of mature hepatocytes and maintaining healthy hepatocyte metabolic function. C/EBP-a also downregulates expression of pro-proliferative and pro-inflammatory genes such as c-MYC and interferon-gamma (INFG). 57 C/EBP-a expression is decreased in 60% of HCC tumors in comparison with non-tumour tissue from the same patients, and its deficiency is associated with increased hepatic proliferation in mice, thus resulting in worsened tumor phenotype. 65 In vitro studies showed that increased CEBPA expression in HCC cell lines led to decreased expression of mesenchymal markers such as N-cadherin, Slug, and vimentin and upregulated epithelial markers such as E-cadherin, implying that the upregulation of C/EBP-a helps to prevent epithelial-to-mesenchymal transition. 14 The upregulation of C/EBP-a also resulted in the downregulation of other pro-proliferative proteins such as b-catenin and epidermal growth factor receptor (EGFR), c-MYC, axis inhibition protein 2 (AXIN2), and cyclin D1 (CCND1). 14 Huan et al. 14 (2016) investigated the effects of saRNA-targeting CEBPA in a mouse orthotopic HCC tumor model. Over 40 days, the group saw a decrease in the tumor size and intrahepatic metastasis in CEBPA saRNA-treated mice compared with mice treated with a scrambled saRNA. Furthermore, the downregulation of pro-proliferative genes EGFR and catenin beta 1 (CTNNB1) was observed in CEBPA-saRNA-treated mice using immunohistochemistry analysis. 14 In another study, CEBPA-51, the preclinical candidate saRNA targeting CEBPA, was encapsulated into liposomal nanoparticles (SMARTICLES) for targeted delivery to the liver, and injected into rats exhibiting cirrhotic HCC, induced by injection of diethylnitrosamine over a 9-week period. 66 Animals treated with saRNA targeting CEBPA had an 80% decrease in hepatic tumor size compared with animals treated with a non-specific oligonucleotide.
Recently, MTL-CEBPA completed first-in-human clinical trials to assess tolerability in patients. The trial consisted of 34 patients with advanced HCC with underlying cirrhosis, metastasis or resulting from non-alcoholic steatohepatitis. Despite the drug not causing a significant improvement in liver function test results compared with baseline after two cycles of treatment, the relative expression of CEBPA mRNA in white blood cells increased 1.5-fold consistently across all treatment groups. 15 Furthermore, one patient achieved a confirmed partial tumor response, as seen both by computed tomography scan and by a rapid decrease in the alpha-fetoprotein level, which was maintained for 24 months. The mean progression-free survival of the entire patient cohort (different doses of MTL-CEBPA) was 4.6 months, suggesting that the drug has some anti-tumour activity. 15 A phase Ib clinical trial of MTL-CEBPA in combination with sorafenib in patients with advanced HCC showed that the treatment can successfully decrease the tumor burden. One-quarter of patients who were naive to tyrosine-kinase inhibitor treatment and had a viral etiology showed a complete or partial objective response to treatment after 12 months, with three patients exhibiting a complete response, as demonstrated by a complete eradication of target lesions at month 12. 16 Furthermore, the study demonstrated that the upregulation of CEBPA expression caused by the treatment leads to the downregulation of immune suppressive genes in myeloid-derived suppressor cells and the upregulation of genes associated with monocyte and neutrophil function. 16 Although no upregulation in monocyte and neutrophil levels was seen in the patients' blood 24 h and 7 days after MTL-CEBPA administration, a marked decrease in the levels of monocytic MDSCs was observed. This suggests that MTL-CEBPA is able to exert its tumor suppressive effects by abrogating the immune suppressive activity of monocytic MDSCs in the tumor microenvironment. 16 Given the success of MTL-CEBPA in clinical trials for advanced HCC, the drug has attracted interest as a novel therapeutic agent in other cancers, such as pancreatic ductal adenocarcinoma, for which saRNA targeting CEBPA has been reported to have anti-tumour effects both in vitro and in vivo. 17,18 Maintaining cell morphology: CDH1 CDH1 is a TSG that has gained attention for its potential to be upregulated using saRNAs and has been investigated in this context across many cancer types. 2,23-27 CDH1 encodes the transmembrane glycoprotein E-cadherin, which is involved in maintaining epithelial cell morphology via adherens junction connections. 67 Furthermore, E-cadherin is anchored to the cell's cytoskeleton via an interaction with b-catenin, a key protein involved in the pro-proliferative Wnt signaling pathway. 68,69 saRNAs targeting CDH1 were initially discovered in 2006 in the context of prostate cancer. 2 further investigated the effects of saRNAs targeting CDH1 in prostate cancer, using different saRNAs to those designed by Li et al. (2006). The group found that one of their candidates, an saRNA that binds to the À661 region of the CDH1 promoter, could successfully upregulate CDH1 expression by up to 5-fold at the mRNA level, and that the anti-sense strand of the saRNA duplex was responsible for the RNA activating activity. The same group tested this saRNA in HCC cell lines and observed a similar increase in E-cadherin protein, as observed by western blot.
The À215 region of the CDH1 promoter has also been commonly targeted, with saRNAs designed around this region being shown to www.moleculartherapy.org Review successfully upregulate CDH1 in renal carcinoma and breast cancer, and produce beneficial downstream phenotypic effects. 23,26 Dai et al. 23 (2018) found that renal carcinoma cell lines transfected with CDH1 saRNA had a reduced migration capability compared with non-transfected cells, as measured using a Matrigel invasion chamber assay. When CDH1 was knocked down again using RNA interference, the renal carcinoma cells regained their migration capability. 23 Similarly, saRNAs targeting the À215 region of the CDH1 promoter in breast cancer led to cells having a decreased proliferation rate, increased apoptosis and decreased migration capabilities, compared with mock-transfected and control sequence transfected cells, as well as a potent growth inhibitory effect in vivo. 26 Promoting apoptosis: PAWR An example of a TSG which is involved in promoting apoptosis is PRKC apoptosis WT1 regulator (PAWR). This TSG has been implicated in the apoptotic response of prostate cancer cells to several exogenous agents. One group has investigated the impact of saRNAs targeting PAWR in vitro and has found that transfection with the oligonucleotide results in a decrease in cell growth, as well as cell shrinkage and apoptosis. 31 This study provides proof-of-concept that PAWR upregulation in prostate cancer is a potential therapeutic avenue to explore, and provides opportunities for further in vivo work to build upon the preclinical profile of such a molecule.
SYNTHETIC mRNAs
Although saRNAs are a promising novel approach for upregulating TSGs in cancer, their ability to increase gene expression is limited to the upregulation of genes in which at least one allele retains normal activity. When both alleles of a gene are deleted or mutated, saRNAs are unable to exert any beneficial effect. An alternative oligonucleotide therapy that can be used to restore the expression of TSGs is synthetic, nucleoside-modified mRNA. Synthetic mRNAs have been explored for many years for their potential in cancer immunotherapy and have recently made headlines for their use in the development of sever acute respiratory syndrome coronavirus 2 vaccines. [70][71][72] Synthetic mRNAs are synthesized to mimic natural mRNAs and are translated in the cytoplasm to produce protein, which is indistinguishable to that resulting from the translation of endogenous mRNA.
The in vitro mRNA synthesis process involves ligating the open reading frame of the gene of interest, along with any other desired sequences such as 5 0 and 3 0 untranslated regions into a plasmid with a viral promoter, usually T7, upstream of the transcription start site. The plasmid must be linearized, such that RNA transcripts of defined length are produced. The viral promoter is recognized by phage RNA polymerases to initiate RNA synthesis. These sequences are then transcribed in vitro to produce mRNA. 73 The use of synthetic mRNAs to increase translation of a target protein has been under investigation since the 1980s. Malone et al. 73 (1989) successfully transfected in vitro transcribed mRNA encoding Photinus pyralis luciferase into NIH 3T3 mouse cells. The same group showed that surrounding the open reading frame (ORF) of the target gene with 5 0 and 3 0 untranslated regions (UTRs) helped to increase translational efficiency. 73 Given their size in comparison with saRNAs, synthetic mRNAs are more difficult to deliver to a cell; however, once inside the cell they can exert their effects rapidly, as the transcription stage is bypassed. However, the rapid degradation of mRNAs and the ability to activate toll-like receptors (TLRs) and stimulate the immune system led to a period of decline in interest in their use as a therapeutic agent. 74 However, interest in the use of synthetic mRNAs was reignited in the mid-2000s, when Karikó et al. (2005) 75 discovered that the addition of naturally modified nucleosides, such as pseudouridine and 5-methylcytidine, to the ORF resulted in a decreased secretion of cytokines and interleukin-8 in response to their administration in vitro. The modified mRNAs led to less stimulation of immune response mediators such as interferon-gamma, interleukin-2 (IL-12), interferon-alpha, retinoic acid-inducible gene 1, and TLRs 3, 7, and 8. 76 Alongside the addition of 5 0 and 3 0 UTRs and nucleoside modifications to the ORF, synthetic mRNAs can be further modified with the addition of a 5 0 cap and poly-A tail to mimic endogenous mRNAs ( Figure 3). Endogenous mRNAs contain a single 7-methylguanosine residue at their 5 0 end, attached to the next residue by a 5 0 -5 0 triphosphate linkage. 77 The 5 0 cap confers resistance to enzymatic cleavage and allows for the translational machinery to recognise and bind to the mRNAs to allow efficient translation into protein. 73 The 3 0 poly(A) tail plays a role in improving mRNA stability. Optimization studies have shown that an increased length of poly(A) tail from 20 nucleotides to 120 nucleotides is associated with improved translational efficiency and therefore protein expression. 78 However, other studies have used even longer poly(A) tails (e.g., %200 nucleotides in length) with similar success. 79
Using synthetic mRNAs to upregulate TSGs in cancer
Given the recent improvements in the stability and immunogenicity of synthetic mRNAs and the emergence of delivery systems such as lipid nanoparticles to facilitate the entry of large synthetic mRNAs into the cell, research into the use of such nucleoside-modified, in vitro transcribed mRNAs as therapeutic agents has become more established. mRNA therapy provides an advantage over small molecule drugs because of its ability to increase or restore protein expression, and therefore activity, whereas small molecules target proteins that are already present, often in an inhibitory manner. A particular area that has gained much traction in recent years is cancer immunotherapy. [80][81][82] Alongside this, some groups have started to investigate the potential of using synthetic mRNAs to upregulate or restore the function of the TSGs that are commonly downregulated or lost in various cancers. Although synthetic mRNAs are a promising area of research and restoring gene function in other disease contexts has been explored extensively, such as CFTR in cystic fibrosis, 83 research into the use of synthetic mRNAs to restore TSGs is currently somewhat lacking. This review discusses the literature demonstrating the potential of using synthetic mRNAs as a means of restoring TSG function in the context of cancer.
PTEN in prostate cancer and melanoma
Islam et al. 84 (2018) have delivered synthetic mRNA encoding PTEN encapsulated in hybrid lipid nanoparticles into the prostate cancer cell line PC3. The hybrid nanoparticles were prepared using cationic lipid-like compound G0-C14 (for mRNA complexation) and poly(lactic-co-glycolic acid) polymer coated with a lipid-polyethyl glycol (PEG) shell to make a stable nanoparticle core. 84 The group showed that transfection of the encapsulated mRNA was more effective than transfection of naked mRNA using a commercial cationic lipid transfection reagent. An increase in PTEN protein expression was seen 48 h after transfection, alongside a decrease in cell viability. Furthermore, the synthetic PTEN mRNA provoked a downregulation of the PI3K-AKT signaling pathway, as demonstrated by decreased phosphorylation of eukaryotic translation initiation factor 4E-binding protein 1, proline-rich AKT substrate of 40 kDa, and Foxo3a. 84 Decreased phosphorylation of these proteins indicates the inhibition of AKT activity and its downstream pro-proliferative and pro-survival effects. In vivo injection of the PTEN mRNA encapsulated in nanoparticles caused a suppression of growth of PC3 xenograft tumors compared with control injections of PBS or mRNA encoding EGFR. 84 The same group went on to develop a nanoparticle containing PTEN mRNA for the delivery of the oligonucleotide into Pten-null or mutated tumor cells and explored the effects of such therapy on the tumor microenvironment. 85 The treatment of Pten-deficient murine cells with nanoparticles containing PTEN mRNA led to the induction of apoptosis and autophagy pathways in vitro, as well as calreticulin exposure on the cell membrane and ATP release into the extracellular environment, indicating that PTEN upregulation may influence the tumor microenvironment. The hypothesis was explored further in vivo in a melanoma tumor model, and the group saw that injection of PTEN mRNA-containing nanoparticles led to an increase in CD3 + CD8 + T cells and decrease in regulatory T cells and myeloidderived suppressive cells, which helps to reverse the immunosuppressive tumor microenvironment. 85 Furthermore, the upregulation of PTEN in melanoma and prostate cancer tumor models led to an increase in efficacy of anti-programmed cell death protein 1 therapy, highlighting the potential benefits that activating oligonucleotides can have in the clinical oncology field. 85 Bax in malignant melanoma Okumura et al. (2008) used cationic liposomes to deliver mRNA encoding Bax, a pro-apoptotic protein into HMG malignant melanoma cells. Bax stimulates the release of cytochrome C from the mitochondria, which in turn leads to the initiation of apoptosis. 86,87 The group found that encapsulation of the mRNA into cationic liposomes was sufficient to complex the mRNA and deliver it to the cells. Furthermore, an increase in the levels of Bax protein and caspase-3 activity were observed 24 h after transfection, alongside a significant decrease in cell survival. 88 The only modifications added to the synthetic mRNA were anti-reverse cap analogs (ARCA) 5 0 cap and poly(A) tail. ARCAs are commonly used in the synthesis of commercial mRNAs to increase translational efficiency. 89 They prevent incorporation of the 5 0 cap in the reverse orientation, which would stop the mRNA from being recognized by the ribosome and translated.
BENEFITS AND CHALLENGES OF USING DIFFERENT OLIGONUCLEOTIDE THERAPIES
saRNAs and synthetic mRNAs are two types of oligonucleotide therapy that can be used to upregulate the expression of TSGs in cancer. However, they differ in terms of size and mechanism of action, as well as their potential for gene upregulation ( Table 2). Given that saRNA activity requires entry into the nucleus and interaction with the promoter region of the target gene, saRNAs have the potential to upregulate only functional genes. In contrast, synthetic mRNAs operate within the cytoplasm and provide the cell with the correct mRNA sequence, which is then directly translated into functional protein.
This allows for synthetic mRNAs to be used to restore otherwise absent proteins in disease, as evidenced by the use of this approach Review in protein replacement therapies, such as mRNA encoding the cystic fibrosis transmembrane conductance regulator protein in cystic fibrosis. 83 The requirement for increased transcription of endogenous DNA in the saRNA mechanism means that protein production is slow, taking up to 72 h in many in vitro models. 19 However, protein production can be seen within 24 h of administration of synthetic mRNAs. [90][91][92] A major hurdle in translating the use of these classes of oligonucleotide therapy into the clinic is the delivery of the oligonucleotides in vivo. Some of the challenges faced by oligonucleotides include degradation by nucleases in the extracellular space, renal clearance, crossing the capillary endothelium once at the target organ, crossing the cell membrane, and lysosomal degradation. 94 Oligonucleotides are hydrophilic and so are unable to penetrate the plasma membrane without assistance. Furthermore, the introduction of any foreign oligonucleotide can result in stimulation of the innate immune system and TLRs. Therefore, a suitable delivery system must be used to ensure that the oligonucleotide reaches its target site, and enters the target cell before it is degraded by extracellular nucleases or stimulates the innate immune response.
The most common non-viral delivery system used for oligonucleotide therapies is lipid nanoparticles. This process involves complexing the anionic oligonucleotide with cationic lipids to produce a nanoparticle. For in vivo delivery, more complex structures may be used to prevent the nanoparticle from being cleared from the circulation by reticuloendothelial system phagocytes. For example, the surface can be coated with a neutral polymer such as PEG to prevent protein adherence to the nanoparticle and clearance. 95 saRNAs are small structures and can be delivered in vitro using simple transfection reagents. However, synthetic mRNAs are much larger and require encapsulation within a nanoparticle for both in vitro and in vivo delivery (Table 2).
Another method for oligonucleotide delivery that can be used as an alternative to nanoparticles, or in conjunction with them, is bioconjugation. This is where the oligonucleotide or carrier (i.e., nanoparticle) is conjugated to a ligand that promotes interaction of the molecule with the target cell. For example, a commonly used ligand for selective delivery of oligonucleotides to the liver is N-acetylgalactosamine (GalNAc). The GalNAc ligand is recognized by the asiaglycoprotein receptor, which is highly expressed on the surface of hepatocytes and internalizes the ligand via Ca 2+ -dependent endocytosis. 96,97 The GalNAc ligand can be directly attached to small oligonucleotides, such as saRNAs and siRNAs, but this is not possible for larger molecules such as synthetic mRNAs, which require encapsulation within a nanoparticle for both in vitro and in vivo delivery. Some oligonucleotide therapies approved by the U.S. Food and Drug Administration use the GalNAc conjugation system, such as givosiran, an siRNA to treat acute hepatic porphyria. 98
CONCLUSIONS AND FUTURE PERSPECTIVES
TSGs are commonly downregulated in cancer and, despite the idea gaining much attention within the scientific community, efforts to upregulate or reactivate TSGs thus far have remained relatively fruitless. However, emerging classes of therapy, such as oligonucleotide therapies, offer an advantage over traditional small molecule drugs in their capability to work at the gene level, rather than by targeting a protein. Oligonucleotide therapies have the potential to be combined with small molecules or other oligonucleotides to produce a synergistic effect. saRNAs and nucleoside-modified RNAs are two classes of oligonucleotide therapy with the potential to upregulate expression of target genes, or in the case of synthetic mRNAs, replace a lost or faulty protein in cancer. saRNAs offer a benefit over synthetic mRNAs in terms of their size and ease of delivery; however, they are an inappropriate treatment option for patients with a mutated allele of the target, as the upregulation of a mutated target will not have a beneficial effect and may even be detrimental. Future work should aim to address challenges relating to delivery and immunogenicity to improve the potential of oligonucleotide therapies to progress through clinical trials and to meet the need for TSG upregulation in cancer. Furthermore, as targeted oligonucleotide therapies start to emerge into the clinic, the use of tumor genome sequencing could help to match a patient with an ideal therapy based on the genetic profile of their tumor. The field of research is still in its infancy and | 7,926.8 | 2022-12-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
An Implicit Hybrid Delay Functional Integral Equation: Existence of Integrable Solutions and Continuous Dependence
: In this work, we are discussing the solvability of an implicit hybrid delay nonlinear functional integral equation. We prove the existence of integrable solutions by using the well known technique of measure of noncompactnes. Next, we give the sufficient conditions of the uniqueness of the solution and continuous dependence of the solution on the delay function and on some functions. Finally, we present some examples to illustrate our results.
Introduction
The study of implicit differential and integral equations has received much attention over the last 30 years or so. For instance, Nieto et al. [1] have studied IFDE via the Liouville-Caputo Derivative. The integrable solutions of IFDEs has been studied in [2]. Moreover, IFDEs have recently been studied by several researchers; Dhage and Lakshmikantham [3] have proposed and studied hybrid differential equations. Zhao et al. [4] have worked at hybrid fractional differential equations and expanded Dhage's approach to fractional order. A fractional hybrid two-point boundary value problem had been studied by Sun et al. [5]. The technique of measure of noncompactness is found to be a fruitful one to obtain the existence results for a variety of differential and integral equations, for example, see [6][7][8][9][10][11][12][13][14].
Srivastava et al. [15] have studied the existence of monotonic integrable a.e. solution of nonlinear hybrid implicit functional differential inclusions of arbitrary fractional orders by using the measure of noncompactness technique.
Here, we investigate the existence of integrable solutions of an implicit hybrid delay functional integral equation g(t, x(t)) = f 1 t, x(t) − h(t, x(t)) g(t, x(t)) , s, x(s) − h(s, x(s)) g(s, x(s)) ds , t ∈ [0, 1]. (1) where ϕ : [0, 1] → [0, 1], ϕ(t) ≤ t is nondecreasing continuous function. The main tool of our study is the technique of measure noncompactness. Furthermore, we studied the continuous dependence on the delay function ϕ and on the two functions f 1 and f 2 . Our article is organized as follows: In Section 2 we introduce some preliminaries. Existence results are presented in Section 3. Section 4 contains the continuous dependence of the unique solution on the delay function ϕ and of the two functions f 1 and f 2 . Section 5 presents two examples to verify our theorems. Lastly, conclusions are stated.
Preliminaries
We present here some definitions and basic auxiliary results that will be needed to achieve our aim.
Let L 1 = L 1 (I) be the class of Lebesgue integrable functions on the interval Now, let (E, . ) denote an arbitrary Banach space with zero element θ and X a nonempty bounded subset of E. Moreover, denote by B r = B(θ, r) the closed ball in E centered at θ and with the radius r.
The measure of weak noncompactness defined by De Blasi [16] is given by The function β(X) possesses several useful properties that may be found in De Blasi's paper [16]. The convenient formula for the function β(X) in L 1 was given by Appell and De Pascale [17] as follows: where the symbol meas D stands for Lebesgue measure of the set D.
Next, we shall also use the notion of the Hausdorff measure of noncompactness χ [6] defined by In the case when the set X is compact in measure, the Hausdorff and De Blasi measures of noncompactness will be identical. Namely, we have the following [16].
Theorem 1. Let X be an arbitrary nonempty bounded subset of L 1 . If X is compact in measure, then β(X) = χ(X).
Now, we will recall the fixed point theorem from Banaś [18].
Theorem 2. Let Q be a nonempty, bounded, closed, and convex subset of E, and let T : Q → Q be a continuous transformation, which is a contraction with respect to the Hausdorff measure of noncompactness χ, that is, there exists a constant α ∈ [0, 1] such that χ(TX) ≤ αχ(X) for any nonempty subset X of Q. Then, T has at least one fixed point in the set Q.
We present some criterion for compactness in measure in the next section; the complete description of compactness in measure was given in Banaś [6], but the following sufficient condition will be more convenient for our purposes [6]. Theorem 3. Let X be a bounded subset of L 1 . Assume that there is a family of measurable subsets (Ω c ) 0≤c≤b−a of the interval (a, b) such that meas Ω c = c. If for every c ∈ [0, b − a], and for every x ∈ X, then, the set X is compact in measure.
Main Results
Now, let I = [0, 1] and consider the following: (H 1 ) (i) f 1 : I × R × R → R is a Carathéodory function which is measurable in t ∈ I, ∀u, v ∈ R × R and continuous in u, v ∈ R × R, ∀t ∈ I.
(ii) There exists a measurable and bounded function m 1 : I → I and nonnegative constant b 1 such that (iii) f 1 is nondecreasing on the set I × R × R with respect to all the three variables, i.e., for almost all (t 1 , t 2 ) ∈ I 2 such that t 1 ≤ t 2 and for all u 1 ≤ u 2 and v 1 ≤ v 2 (H 2 ) f 2 : I × I × R → R is a Carathéodory function, and a continuous function m 2 : I × I → R, and nonnegative constant b 2 such that such that Moreover, f 2 is nondecreasing on the set I × R × R with respect to all the three variables. (H 3 ) ϕ : I → I, ϕ(t) ≤ t is nondecreasing. function. (H 4 ) g : I × R → R \ {0} and h : I × R → R satisfy the following: (i) They are nondecreasing on the set I × R with respect to both variables, i.e., for almost all (t 1 , t 2 ) ∈ I 2 such that t 1 ≤ t 2 and for all (ii) They are measurable in t ∈ I for every x ∈ R and continuous in x ∈ R for every t ∈ I, and there exist two integrable functions a i ∈ L 1 (I) and two positive constants l i , (i = 1, 2.) such that |h(t, x)| ≤ |a 1 (t)| + l 1 |x| and |g(t, x)| ≤ |a 2 (t)| + l 2 |x|. Let then the integral Equation (1) can be reduced to where x satisfies the Equation.
Thus, we have proved the following result.
Proof. Define the set
Consider the integral Equation (6) and define the operator Let y ∈ Q ρ , then Hence the operator maps the ball B ρ into itself where Now, Q ρ contains all positive and nondecreasing functions a.e. on I. obviously the set Q ρ is nonempty, bounded and convex. To prove that Q ρ is closed we have {x m } ⊂ Q ρ , which converges strongly to x. Then {x m } converges in measure to x and we deduce the existence of a subsequence {x k } of {x m } which converges to x a.e. on I (see [19]). Therefore, x is nondecreasing a.e. on I which means that x ∈ Q ρ . Hence the set Q ρ is compact in measure(see Lemma 2 in [7], p. 63).
Using (H 1 )-(H 3 ), then maps Q ρ into itself, is continuous on Q ρ , and transforms a nondecreasing a.e. and positive function into a function with same type (see [7]).
To show that the operator is a contraction with respect to the weak noncompactness measure β. Let us start by fixing > 0 and X ⊂ Q ρ . Furthermore, if we select a measurable subset D ⊂ I as such meas D ≤ , then for any x ∈ X using the same assumptions and argument as in [6,7], we obtain with β is the De Blasi measure of weak noncompactness. The set X is compact in measure, so Hausdorff and De Blasi measures of noncompactness will be identical [16], then where χ is the Hausdorff measure of noncompactness. Since b 1 + b 1 b 2 < 1, it follows, from fixed point theorem [18], that is a contraction with regard to the measure of noncompactness χ and has at least one fixed point in Q ρ which show that Equation (6) has at least one positive a.e. nondecreasing solution y ∈ L 1 .
Solvability of Equation (4)
In this section, the existence of a.e. nondecreasing solutions x ∈ L 1 for the Equation (7) will be studied x(t)), t ∈ I Theorem 5. Let the assumptions (H 2 ), (H 4 ) be satisfied. Let the assumptions of Theorem 4 be satisfied. Assume that l 1 + M l 2 < 1. Then there is at least one a.e. nondecreasing solution x ∈ L 1 to (7).
Proof. Interpret the set in the form
Let x ∈ L 1 and M = sup t∈I |y(t)|, then by assumptions (H 2 )-(H 4 ), we find that Then for t ∈ I, we have Hence A maps B r into itself where Allowing Q r to be a subset of B r containing all functions that are nonnegative and a.e. nondecreasing on I, we may conclude that Q r is nonempty, closed, convex, bounded, and compact in measure ( [6], p. 780). Now Q r is a bounded subset of L 1 that contains all positive and nondecreasing a.e. functions on I, then Q r is compact in measure (see Lemma 2 in [7], p. 63). As a result of assumption (H 4 ), A maps Q r into itself, is continuous on Q r , and turns a nondecreasing a.e. and positive function into a function of the same type (see [7]). Thus, A is shown to be a contraction with regard to the weak noncompactness measure β. Let us start by fixing > 0 and X ⊂ Q r . Furthermore, if we select a measurable subset D ⊂ I as such meas D ≤ , then for any x ∈ X using the same assumptions and argument as in [6,7], we obtain Then we find β(Ax(t)) ≤ (l 1 + M l 2 ) β(x(t)).
This implies
with β is the De Blasi measure of weak noncompactness. The set X is compact in measure, so Hausdorff and De Blasi measures of noncompactness will be identical [16], then where χ is the Hausdorff measure of noncompactness. Since l 1 + M l 2 < 1, it follows, from fixed point theorem [18], that A is a contraction with regard to the measure of noncompactness χ and has at least one fixed point in Q r which show that Equation (7) has at least one positive nondecreasing a.e. solution x ∈ L 1 . Now, we are in position to state an existence result for the hybrid implicit functional Equation (1). Theorem 6. Let the assumptions of Theorems 4 and 5 be satisfied. Then the implicit hybrid delay functional integral Equation (1) has at least one nondecreasing a.e. solution x ∈ L 1 which satisfies (7) where y ∈ L 1 is the nondecreasing a.e. solution of (6).
Continuous Dependence
Here, we investigate the continuous dependence of the unique solution x ∈ L 1 on the delay function ϕ and on the two functions f 1 and f 2 .
(iii) f 1 is nondecreasing on the set I × R × R with respect to all the three variables, i.e., for almost all (t 1 , t 2 ) ∈ I 2 such that t 1 ≤ t 2 and for all u 1 ≤ u 2 and v 1 ≤ v 2 Moreover, f 2 is nondecreasing on the set I × R × R with respect to all the three variables. (H * 3 ) g : I × R → R \ {0}, and h : I × R → R, are measurable in t ∈ I for every x, y ∈ R as well as meet the Lipschitz condition for all t ∈ I and u, v ∈ R. Moreover h, g are nondecreasing a.e. in the two arguments. Note Proof. Let y 1 , y 2 be solutions of Equation (6), then Taking supermum for t ∈ I, we have which implies y 1 = y 2 . Hence the solution of the problem (6) is unique.
Next, we prove the following result. Proof. Firstly, Theorem 5 proved that the functional Equation (7) has at least one solution. Now let x 1 , x 2 ∈ L 1 (I) be two solutions of (7). Then for t ∈ I, we have Then for t ∈ I, and |y(t)| < M, we obtain Hence and then the solution of (7) Proof. Let y be the unique solution of the functional integral Equation (6) and y * is the one of the equation Then | f 2 (t, s, y(s)) − f 2 (t, s, y * (s))|ds. Now, |ϕ(t) − ϕ * (t)| ≤ δ and by Lebesgue Theorem [20], we have | f 2 (t, s, y(s))|ds Hence Therefore, y ∈ L 1 of the problem (6) depends continuously on ϕ. This completes the proof.
Continuous
(ii) The solution y ∈ L 1 of Equation (6) depends continuously on the function f * 2 (t, s, y(s))ds | ≤ δ ⇒ y − y * ≤ , t ∈ I. Theorem 11. Assume that assumptions of Theorem 7 are verified. Then the solution y ∈ L 1 of Equation (6) depends continuously on the function f 1 .
Proof. Let y be the unique solution of the functional integral Equation (6) and y * is the solution of the functional integral equation Then f 2 (t, s, y(s)) − f 2 (t, s, y * (s)) ds Taking supermum for t ∈ I, we have Hence ||y − y * || ≤ .
Hence, the solution of (6) depends continuously on the function f 1 . This completes the proof.
By the same way we can prove the following theorem.
Theorem 12. Assume that assumptions of Theorem 7 are verified. Then the solution y ∈ L 1 of the functional integral Equation (6) depends continuously on the function f 2 . (7) depends continuously on the function y, if
Definition 3. The solution of functional Equation
Theorem 13. Let the assumptions of Theorem 7 be satisfied. Then the solution of the Equation (7) depends continuously on the function y.
Then for t ∈ I, |x(t)| < N and |y * (t)| < M, we have Hence the solution of problem (7) depends continuously on the function y.
Special Cases and Examples
We can deduce the following particular cases. | 3,535 | 2021-12-14T00:00:00.000 | [
"Mathematics"
] |
Oral literature and the evolving Jim-goes-to-town motif : Some early Northern Sotho compared to selected post-apartheid novels written in English
The continuation of the discourses of apartheid era African language literature characterised by the makgoweng motif in post-apartheid English literature written by black people has not been studied adequately. In this study I explored ways in which characters of Northern Sotho linguistic and cultural groups represented the same consciousness in both categories of novels across time. I used the qualitative method and analysed some Northern Sotho primary texts, written before democracy in South Africa, as well as selected post-apartheid English novels written by black people. I focused on the mokgoweng motif to examine the nature of continuity in theme and outlook. I found that the novels considered pointed to a sustainable consciousness, transcending linguistic boundaries and time. The social function of such characterisation representing the formerly oppressed black people, is a revelation of their quest towards selfdefinition in a modern world. The portrayed characters significantly point to resilience among black people to appropriate modernity by making sense of the world in a manner sustaining their distinctive outlook. In this way, the Northern Sotho-speaking cultural groups display a consistent consciousness enabling them to manage properly their adaptation to an evolving modern or globalising environment across time. The implication was that a comparison of South African English literature written by black people with indigenous language literature enriched the study of black South African English literature.
The continuation of the discourses of apartheid era African language literature characterised by the makgoweng motif in post-apartheid English literature written by black people has not been studied adequately.In this study I explored ways in which characters of Northern Sotho linguistic and cultural groups represented the same consciousness in both categories of novels across time.I used the qualitative method and analysed some Northern Sotho primary texts, written before democracy in South Africa, as well as selected post-apartheid English novels written by black people.I focused on the mokgoweng motif to examine the nature of continuity in theme and outlook.I found that the novels considered pointed to a sustainable consciousness, transcending linguistic boundaries and time.The social function of such characterisation representing the formerly oppressed black people, is a revelation of their quest towards selfdefinition in a modern world.The portrayed characters significantly point to resilience among black people to appropriate modernity by making sense of the world in a manner sustaining their distinctive outlook.In this way, the Northern Sotho-speaking cultural groups display a consistent consciousness enabling them to manage properly their adaptation to an evolving modern or globalising environment across time.The implication was that a comparison of South African English literature written by black people with indigenous language literature enriched the study of black South African English literature.
Introduction
The study intends to demonstrate that there is a significant thematic and stylistic link between some African writers writing in indigenous languages and those writing in English.The novels Tsiri (1953) and Nkotsana (1963) by Madiba, written in Northern Sotho, are compared with Mpe's Welcome to our Hillbrow (2000) and Moele's The book of the dead (2009).The narratives Tsiri and Nkotsana were written in the 1950s and 1960s in Northern Sotho, while Welcome to our Hillbrow and The book of the dead were written in the post-apartheid era in English.A common feature of the four novels is that they were written by writers coming from a Northern Sotho background.The characters in the four novels can therefore be assumed to share the cultural consciousness and lifestyle of the people they represent.The study can thus explore ways in which the themes, exploited in the earlier novels written in Northern Sotho, are taken forward in the later novels written in English, from a common cultural perspective.In other words, the study probes how cultural consciousness is refracted in the later novels in order to respond to an changed environment.I argue that the selected post-apartheid novels may be described as playing a part in ensuring sustainable cultural development by means of the literary techniques I explore in this study.
The distinctive cultural consciousness and lifestyle of a people constitute such a people's identity.This is why Castells (1997:2-3) defines identity as 'the process of construction of meaning on the basis of a cultural attribute, or related set of cultural attributes, that is and/or are given priority over other sources of meaning'.He further remarks that identity is people's source of meaning and experience (Castells 1997:3).While the books by Madiba, Mpe and Moele may handle different themes, using styles characteristic of the three different writers, I demonstrate in this article that at the underlying level the cultural identity of the characters aligning the way they construct meaning, is common.I stress this as a further dimension to what Milazzo (2016:135) has cogently described as the macro quality of apartheid era and post-apartheid fiction written by black people to 'continue to address racial oppression'.
In this way, I scrutinise how the four novels may be said collectively to create literary art that delineates a sustainable cultural development of the people their characters represent.Sustainable development, clearly broader than mere cultural development that is the goal of my analysis of the four novels, is 'development that meets the needs of the present without compromising the ability of future generations to meet their own needs' (Parry-Davies 2007).
That is why this study traces how the outlook, represented in literature of the 1950s and 1960s and written in Northern Sotho, is sustained in selected literature written in the postapartheid period.I examine the attainment of a sustainability that should grapple with an even more modern environment of the post-apartheid era, keeping up with current levels of globalisation, in a manner that does not undermine the wholeness of the people represented by the characters through time.
By globalisation I mean what Ogude (2004:276) describes as 'the ravages of Western modernity'.Mphahlele (2004:277) A comparison of the two groups of novels written in different languages is deliberate, so as to explore how linguistic boundaries hinder or facilitate the cultural representation that this study seeks to examine in the four novels.According to Pradervand (1989:129, 147, 197, 213), culture and identity are fundamental to the development process of a people.The kind of sustainable development that this study traces in the two groups of literature recognises that the economic development of a people has to take into account social or cultural development.
Sustainable development requires both cultural and any economic development of a people because, according to Parry-Davies (2007), sustainable development should hinge on care and respect for people, the planet and economic prosperity.The three pillars of people, planet and prosperity should be kept in balance simultaneously to ensure that the development does not suffer an imbalance.Care and respect for people implies cherishing and conserving such a people's social and cultural identity.In order to be congruent with sustainable cultural development, the 'care for people, the planet and economic prosperity', that Parry-Davies (2007) refers to, implies letting African cultures be, even in today's globalising world culture.
If the writings of Africans, both in African languages and in English, are to remain meaningful in current debates about literature as a means to sustainable cultural development, analyses of such writings should, as Munck (2008Munck ( :1228) ) remarks about any worthy discussion today, 'be embedded within the broader debates around the political economy of globalization and its implications for development'.The implications of globalisation for the cultural development of any population group within South Africa is that 'South Africa exists within the context in which the West dominates and the impetus of West-led globalization is to be at the leading edge of modern capitalism ' Hall (1991:31).As Africans and the West belong to separate, distinct cultural clusters (Mphahlele 2002:135-136), alongside their development Africans have to counter what writers like Pradervand (1989:73, 75) describe as the 'suppression of indigenous knowledge systems' that has resulted in 'stereotypes about Africa'.The way in which the culture of the Northern Sotho is represented in the four novels will be tested against the assertion of 'indigenous knowledge systems' that, according to Pradervand (1989), is necessary to rectify 'stereotypes about Africa'.Pradervand (1989:64-65) sees culture as 'a certain way of relating to time, objects, money, history, and the environment' whereby the quality of the relationships that African people develop among themselves and with themselves form 'a hierarchy of values' in which 'being' is central, as opposed to 'doing or having'.Another dimension of culture, according to Geertz (1973:250) determinant of social interaction' because it is 'a system of symbols by which man confers significance upon his own experience' that 'to some measurable extent, gives shape, direction, particularity and point to an ongoing flow of activity'.The study will specifically trace the existence of cultural symbols embedded in oral literary devices, like the narrative formula of African travel folktales and the linguistic use of proverbs and idiomatic expressions.
If the cultural development of Africans in today's globalising world is to be sustainable, the cultural symbols reflected in the artistic practice of African artists should assert an African consciousness that is resilient and adaptable through the ages.Geertz's (1973:250) description of culture as 'a system of symbols' implies that the cultural symbols in the fiction of any group of African writers like Madiba (1953Madiba ( , 1963)), Mpe (2000) and Moele (2009) should contribute to sustaining African consciousness and lifestyle through the content of their fiction in such a way that an Africanness efficiently remains available for future African literary practitioners and societies to use in ongoing meaning making.
In a way that is akin to the protagonists of many Moele extends the assertion of African thinking and lifestyle to the urban milieu of Pretoria, where it contends with foreign cultural influences that merely reshape it without stamping it out.Unlike the fiction of Madiba in which untainted African culture is portrayed as invincible, the African characters of Moele appropriate the urban setting and deploy their African cultural repertoire to forge a novel African culture in which the traditional and the modern co-exist.The benefit is that African ways of life and notions like marriage, polygamy, infidelity and friendship are redefined to account for the more complex space and time of the post-apartheid era.That is why Moele's dialogue projects the theme of reconceptualisation of African institutions, for example when the character Ntsako vacillates between traditionalism and modernism in the words: We can't stop being what we are … We can't stop being men.Our forefathers enjoyed their women freely, but we can't.We are in danger.But, unlike our forefathers, we have our god-condom (Moele 2009:94).
Origins in the Northern Sotho folktale
The central theme of the fears and consequences of travelling to unfamiliar territory underlying the Jim-goes-to-town motif of the four novels under consideration, has its origins in Northern Sotho folktales.The common Northern Sotho cultural immersion of the three novelists from eras elicits justifiable expectations that in crafting their modern literature they not only appropriate linguistic constructs like proverbs and idiomatic expressions from oral literature, but go further to adopt the travel motif that recurs in a huge number of Northern Sotho folktales.
In the two Northern Sotho folktales entitled 'Mokgadi le ledimo lejabatho' and 'Mohlare wa Mokadiathola', there are common phrases that accentuate the stock theme of travel and its attendant mystery.The cannibal who steals the daughter Mokgadi from her parents in the pretext of wanting to marry her (Makopo 1994:24) is said to wela tsela/hit the road at dawn with the young woman in the former folktale, while in the latter the polygamist patriarch Mokadiathola, too, is said to tšea leeto/undergo a journey (Makopo 1994:27).In the former tale the atmosphere created by the significant phrases nyalana le monna yo a sa mo tsebego/married to a strange man (p.24), Mahlo a Mmaphupi a be a hwibitše ka go lla/Mmaphuti's eyes were red with sobbing (p.25), a itiwa ke letswalo/was scared (p.25) and letile gona mo thoko ga tsela/wait here by the roadside (p.25) is that of apprehension and insecurity associated with exploration of unknown spaces.
The latter folktale similary evokes a loathing of the road through the cumulative effect of phrases like o tla dišwa ke mang/who will guard the forbidden tree (p.27) and re tla ya e le mantšiboa/we shall take the cover of darkness (p.28).
Respectively, fear of plumbing new spaces is justified when the lady Mmaphuti sees the cannibal lick blood from her thistle pierced soles (p.25) and the head of the family of Mokadiathola's entire household steals the forbidden fruit and die while he is visiting an unknown, distant village (p.28).
With these features of Northern Sotho folktales highlighted, it becomes clear that folktales of this category are the prototype upon which Jim-goes-to-town tales like those of Madiba (1953Madiba ( , 1963)), Mpe (2000) and Moele (2009)
Conclusion
The surface level use of Northern Sotho idioms in Madiba's two novels, and the linguistically mediated use of the idioms in the case of Mpe's and Moele's works, signify a continuity of outlook across the language medium and time.The fact that social issues handled by Madiba in his novels of 1953 and 1963 differ in texture from those handled later in the fiction of Mpe and Moele does not mean that the African characters portrayed by the three writers do not use the same cultural filter to inflect reality as they make meaning out of it.This is shown by the consistent presence of language forms which are repositories of a common cultural perspective.It is significant that the continuity of the discourses of apartheid era indigenous African language novels is present in post-apartheid novels that not only are of great merit per se but, according to Milazzo (2016:130), function within a recognised canon of novels of this category produced 'by black writers' that have 'won literary prizes or garnered international attention'.
The fact that Madiba writes in Northern Sotho while Mpe and Moele write in English does not affect the Northern Sotho cultural lens by which the characters experience reality.
In the case of Mpe and Moele, one has to detect the Northern Sotho language and idiom beneath the surface of the narration that is through the English language.In the same way as Barra (1960:i) observes of the function of Kikuyu proverbs and other oral literary devices in understanding the thinking of the Kikuyu, cognisance of these cultural mediations in the writings of Madiba, Mpe and Moele are 'the key for understanding the point of view and psychology' of the Northern Sotho-speaking people, represented in the characterisation of the novels under scrutiny.While Madiba's novels explicitly exploit the Jim-goes-totown motif, the later novels of Mpe and Moele continue to handle the theme, yet in more subtle and nuanced ways.The reconstruction of the Jim-goes-to-town motif in effective ways calls to mind a similar feat by the writers of 'Siyagruva novels' (see Kaschula 2007).Like the protagnonists in Madiba's Tsiri and Nkotsana, Mpe's and Moele's characters move away from their rural upbringing to confront the insecurity of facing the monster of unknown space, in much the same way the Northern Sotho folktales have their protagonists venture out into unchartered territories and survive monstrous encounters by means of cultural resource which they have available.
The difference between the examples of oral literature cited above and the written literature under consideration, is that the former is set in a milieu that may be described as free from globalising effects, while the latter progressively grapples with intensifying threats of globalisation.However, Mpe and Moele prove equal, in their crafting of the novels discussed above, to the challenge of addressing social issues within a more complex frame of globalising tendencies.
For this reason, it is understandable that, unlike earlier writers such as Madiba, later writers such as Mpe and Moele have to handle a bigger set of even more intransigent social issues than just the clash between tradition and Christianity, or the evils of the urban space as opposed to the morality of the rural landscape.The progressive intensity of globalisation that forms a continuum from the Sotho-speaking world depicted by Madiba in the Northern Sotho novels Nkotsana and Tsiri, to that of Mpe and Moele in the post-apartheid period should not be mistaken as implying that Madiba's novels are banal.On the contrary, the value of his work is demonstrated by Madiba's skilful use of proverbs to signify communal thinking (shown earlier in this study), as well as by his handling of theme and language in no less a profound manner than is the case with Mpe and Moele.
Prowess is seen in his naming of the urban area to which the character Nkotsana escapes (Madiba 1963).It is called Bokgalaka (Madiba 1963:33), a word which means the place where the dead go in a Northern Sotho idiomatic expression, symbolising the character Nkotsana's moral/ spiritual death from the point of view of traditional African culture.The plot of the novel enhances the symbolism of the name giving when the main character engages in irregular economic activities (Madiba 1963:34) that eventually land him in a Zimbabwean prison.Nkotsana's return to his rural homestead of Moletši (Madiba 1963:39) symbolises moral regeneration, manifested materially in his perfection as a Christian married man and in the economic success of his farming and other ventures with childhood friend Maseroka in Makgabeng.
His demise comes about as a result of addiction to Western liquor, significantly.His childhood friend and kinsmen survive, as they only revel ritualistically in consuming organic traditional beer.
Although the modernised society that is represented in Madiba's fiction of the 1950s is vastly different from that of the post-apartheid novels of Mpe and Moele, my analysis has shown that the Africanist thinking contained in the oral literary devices that inform the common Jim-goes-to-town narratives is continued into the present day.In this way, the cultural development of African communities represented in the post-apartheid novels is rendered sustainable by virtue of mutating as the new environment dictates, yet remaining an African identity within its own consciousness and lifestyle.
The outlook is ingrained in the linguistic constructs analysed in this essay, the hotbed of which is the folktales discussed in this study.
Johannesburg can be seen as a symbol, replacing the figure of the cannibal or monster in Northern Sotho folktales.In the folktales, the character who strays from proper conduct ends up falling victim to the stratagems of a monster or cannibal, similarly to the way the character Tsiri is metaphorically consumed by Johannesburg.Of course the 'monster' also denotes the evils of racial oppression, without the Northern Sotho writers necessarily confronting the demoralising apartheid conditions in overt terms.They rather convey it through the corruption of black characters as soon as they come into contact with the urban environment.The African mythic monster remains the urban environment that destroys Nkotsana and is externalised behaviourally in Nkotsana's addiction to Western liquor (p.65).This tragic flaw naturally leads to a reversal of fortunes.Nkotsana's prosperous business enterprises founded in the traditional notion of health associated with livestock rearing, flourish until he dies after a car crash caused by intoxication with Western spirits (p.66).Welcome to ourHillbrow (2000), the characters Refentše, Refilwe and the other friends come from the rural village of Tiragalong where gossip travels through busybodies recounting myths in a manner reminiscent of the oral storyteller of earlier times.Judgement whether someone's conduct deserves approbation or censure is arrived at communally through the use of proverbs translated by the writer literally from Northern Sotho into English.beenfor stream of consciousness, the novel would have been amenable to the tight packaging of events into those that happened in rural Tiragalong before the characters travel to Hillbrow, and those incidents that can be neatly boxed as pertaining to the urban locale of Hillbrow, coinciding with the present juncture of the narration.Such a technique defies the strict division of setting into urban and rural, or past and present, thus allowing the characters' African consciousness to travel in interesting ways across space and time.
(Makopo 1994) folktales, like Mokgadi le ledimo lejabatho and Mohlare wa Mokadiathola(Makopo 1994), Tsiri migrates to an unfamiliar space that is Johannesburg (p.14), after escaping from school.After exploring Johannesburg and acquiring criminal ways, he returns to his home, where the decadence catches up with him and he ends up losing all the livestock and other riches bequeathed to him by his parents.He is then abandoned by everyone (p.28).This simple plot does not differ much from that ofMadiba's next novel, Nkotsana (1963), except that in Nkotsana the traditional African ethos of the Moletši village (p.27) and alien lifestyles in a westernised Zimbabwean city, populated by nationals who have returned with vile Johannesburg manners (p.34), are pitted against each other in a fairly more intriguing manner.while the characters are physically negotiating modern-day urban problems like HIV and/or AIDS xenophobia and rampant consumerism in urban Hillbrow through the African filter suggested by these oral narrative techniques.In this way, the indices of an African worldview are remain intact across the urban or rural or past or present divide.Had it not http://www.literator.org.za1971:143), or 'mahlale a ja monye'/shenanigans result in the perpetrator's pain [author's own translation] (Madiba1953:22; Rakoma 1971:160).When the Afrikaner government's laws abolish levy by white people on whose land black people of the former Northern Transvaal stay as vassals,Madiba (1963:12)vocalises the communal verdict of Nkotsana's fellow villagers by means of the proverb 'A tlala a epšha madiba, a hutelela madibana'/old wells dry up in order for younger ones to fill [author's own translation](Madiba 1963:12).By this Northern Sotho proverb the choral voice protests that old ways of exploitation merely give way to newer ones.Such a resilience in the African thinking of Northern Sotho speakers is evident also in Moele's novel The book of the dead.
are modelled. | 5,028.8 | 2016-09-16T00:00:00.000 | [
"History",
"Linguistics"
] |
Tin and Tin Compound Materials as Anodes in Lithium-Ion and Sodium-Ion Batteries: A Review
Tin and tin compounds are perceived as promising next-generation lithium (sodium)-ion batteries anodes because of their high theoretical capacity, low cost and proper working potentials. However, their practical applications are severely hampered by huge volume changes during Li+ (Na+) insertion and extraction processes, which could lead to a vast irreversible capacity loss and short cycle life. The significance of morphology design and synergic effects-through combining compatible compounds and/or metals together-on electrochemical properties are analyzed to circumvent these problems. In this review, recent progress and understanding of tin and tin compounds used in lithium (sodium)-ion batteries have been summarized and related approaches to optimize electrochemical performance are also pointed out. Superiorities and intrinsic flaws of the above-mentioned materials that can affect electrochemical performance are discussed, aiming to provide a comprehensive understanding of tin and tin compounds in lithium(sodium)-ion batteries.
INTRODUCTION
Since the commercialization of lithium-ion batteries (LIBs) by the Sony Corporation in 1991, LIBs are widely used in portable devices, electric vehicles and energy storage equipment for their benefits of having no memory effect, long cycle life and high energy density (Tarascon and Armand, 2010;Kim et al., 2012;Wang et al., 2019). With largely depleting lithium resources, the existing limited and unevenly distributed lithium reserves cannot meet the increasing demands of LIBs (there is an estimated 17 ppm in the earth's crust; Grosjean et al., 2012). Due to abundant sodium reserves (there is an estimated 23,000 ppm in the earth's crust), sodium-based batteries can be an attractive alternative. Traditional Na-S batteries require operating temperatures between 300 and 350 • C to allow sufficient Na + conductivity of NaAl 11 O 17 , but safety issues and energy loss from maintaining the operating temperature are inevitable (Wen et al., 2008;Xin et al., 2014;Kou et al., 2019). Motived by the similar chemical properties of sodium and lithium, researchers have shifted their attention to ambient temperature sodium-ion batteries (SIBs), but lots of problems need to be addressed for the practical application of SIBs (Yabuuchi et al., 2014;Li et al., 2018;Liu Y. et al., 2019). The main issue is the larger radius size of Na + (1.09 Å) compared with Li + (0.74 Å), which brings about sluggish reaction kinetics with low capacity, poor rate capability, and short cycling life (Chevrier and Ceder, 2011;Xu et al., 2013;Li et al., 2018). Extensive studies have been carried out to understand the requirements of commercial SIBs, which are great choices for low cost and large-scale energy storage equipment required for intermittent renewable energy and smart grids (Palomares et al., 2012;Pan et al., 2013). Comparitively, the energy density of LIBs cannot fully satisfy an increasingly growing need for electronic energy storage devices (Xiao et al., 2018;Fang et al., 2020). The present conventional anode in LIBs is graphite, which follows a intercalation/de-intercalation reaction pathway with a low theoretical capacity (378 mAh/g) and is electrochemically unfavorable for SIBs owing to the larger size of Na + (Qian et al., 2014). Therefore, not all successful experiences from LIBs are viable to be applied in SIBs. Usually, graphene and non-graphitic carbon (like hard carbon and carbon black) are conventional anodes in SIBs. Additionally, TiO 2 , Na 2 Ti 3 O 7 , Sn, SnO 2 , SnS 2 , Sb, and P, etc. are potential anode materials for Na + storage in SIB systems (Slater et al., 2013;Li et al., 2018;Guan et al., 2020). Thanks to a similar charging-discharging mechanism, tin-based anodes' alloying/dealloying reactions have attracted considerable attention because they are applicable to both LIBs and SIBs with a high theoretical capacity (Stevens and Dahn, 2000;Zhu et al., 2013). Environmental benignity, low costs, and lower operating potentials than graphite are also attractive features for tin and tin compounds, but they contain the following intrinsic defects (Fu et al., 2016). Tin and tin compounds as anodes in LIBs (SIBs) sustain colossal volume changes during Li + (Na + ) insertion and extraction processes, which leads to pulverization of the active materials as well as losing electrical contact with the collector (Zhang, 2011;. Moreover, a continuously regenerated solid electrolyte interphase (SEI) layer between the electrode and electrolyte interface will consume extra lithium (sodium) ions, causing large irreversible capacity loss and poor cycle stability (Beaulieu et al., 2001). Last but not least, the electronic conductivity of SnO 2 (0.1 S/m) and SnS 2 (1 S/m) is much inferior to Sn (9.1 × 10 6 S/m) (Thangaraju and Kaliannan, 2000;Saadeddin et al., 2006;Nie et al., 2020). To cope with these problems, many measures have been taken and summarized as follows.
Firstly, according to comprehensive investigations nano-scale tin and tin compounds can alleviate the inter stress brought on by volume changes, to some extent, and can shorten the transfer paths of lithium (sodium) ions and electrons. Additionally, more reactive sites on the interface between electrodes and electrolytes are generated (Uchiyama et al., 2008;Park and Park, 2015;Park et al., 2018). The second method is to incorporate tin and tin compounds with one or more stress-accommodating phases that have can assure electronic conductivity, such as carbonaceous materials, metals and some transitional metal compounds (Kepler et al., 1999;Takamura et al., 1999). In 2005, Sony commercialized the first tin-based amorphous anode with the trademark "Nexelion" and this anode is composed of Sn, Co and C, where Co and C are identified as conductive and stressreleasing phases. According to Sony, Nexelion has a capacity of 900 mAh, which is 28 % higher than conventional graphite (700 mAh) at 0.2 • C. Extensive investigations have been made to find a feasible and low-cost way to synthesize tin-and tin compound-based anodes with satisfactory physicochemical and electrochemical properties for both LIBs and SIBs at the same time. In this review, we focus on the recent progress of Sn, SnO 2 , and SnS 2 as anodes in LIBs and SIBs. This comprehensive review provides an in-depth account of the similarities and differences between Sn, SnO 2 , and SnS 2 as used in LIBs (SIBs) as well as clear directions for the structure design and fabrication procedures regarding anode material syntheses in LIBs and SIBs.
Sn-Based Composites
Sn has a high theoretical specific capacity of 993.4 mAh/g, according to the reversible reaction Sn+xLi + +xe − ↔Li x Sn (0≤x≤4.4) (Lee et al., 2003). However, huge volume changes and aggregation of Sn particles during the alloying/dealloying process are the main obstacles for practical applications (Beaulieu et al., 2001). Generally, carbonaceous materials and Sn-based intermetallics are believed to address these issues efficiently and largely improve the battery performance of Sn-based anode materials Ying and Han, 2017). Carbon materials, either acting as the support or coating, can effectively ease volume changes and aggregation of Sn particles and increase the overall conductivity, especially with graphene (Wen et al., 2016). Zhou et al. have reported a high-performance anode where tin nanoparticles are impregnated into nitrogen-doped graphene (Zhou et al., 2013a). The graphene coating can facilitate electron transport and prevent aggregation of tin particles. Add void spaces between graphene and tin nanoparticles avail the accommodation of volume changes. As a result, the final composite delivers a reversible capacity of 481 mAh/g at a current density of 100 mA/g. Some Sn-based intermetallics have also been considered as a promising choice, such as Sn-Cu, Sn-Co, Sn-Sb, Sn-Bi, Sn-Se, Sn-Fe and Sn-Ni etc (Yang et al., 1999;Yoon et al., 2009;Xue et al., 2010;Dang et al., 2015;Qin et al., 2017). Among all these types of intermetallics, Sony's Nexelion-consisting of Sn, Co, and C-is the first commercialized tin-based anode, but the composition is not fully revealed. Hence, it is important to further investigate the role and mechanism of cobalt in the Sn-Co intermetallic system. In principle, cobalt is considered an inactive component used to buffer the volume changes. However, according to the systematic study of Sn 1−x Co x (0<x<0.6) and [Sn 0.55 Co 0.45 ] 1−y C y (0<y<0.5) conducted by Dahn et al., the Sn 1−x Co x system is amorphous when 0.28<x<0.43 and an amorphous structure can hold part of the capacity in place of alloying anodes in LIBs. In addition, cobalt does not form intermetallic Co-carbides which avoids the exclusion of crystalline tin, improving the cycle stability of the composite (Tamura et al., 2004;Dahn et al., 2006;Todd et al., 2007;Li et al., 2011).
Sn-Cu alloy is another extensively explored anode in LIBs, especially in the stable Cu 6 Sn 5 intermetallic phase. According to the detailed in-situ X-ray study of Cu 6 Sn 5 by Larcher and his coworker, the two reverse phase transitions of Cu 6 Sn 5 reacting with Li + are listed as follows (Larcher et al., 2000): Li 2 CuSn↔Li 4.4 Sn + Cu As the Cu content in the Cu-Sn alloy increases, the final obtained product will significantly improve in cyclability, because Cu is used as an inactive buffering matrix to relieve the volume expansion. However it also results in a relatively lower discharge capacity, for example, the theoretical discharge specific capacity of Cu 6 Sn 5 in LIBs is 584 mAh/g (Trahey et al., 2009). Core/shell Cu 6 Sn 5 @SnO 2 -C anode materials are prepared by boiling Sn and Cu powders in a sucrose solution with air, as reported by Hu's group, in which Cu 6 Sn 5 as an inert foundation replaces the electrochemically inactive Cu, SiC and Ni (Hu et al., 2015). As a consequence, the composite exhibits a high discharge specific capacity of 619 mAh/g at 1.0 • C after 500 cycles, and SEM images before and after the first cycle show that the maximum volume change ratio decreases to 12.7%. On the other hand, some Sn-based intermetallics with electrochemically active metals, like Sb, Bi, and Ge, have shown higher initial capacities and better electrochemical properties than the individual active materials, which is due to the different potentials vs. Li + /Li of these active metals. The temporarily separated charge-discharge process of these active materials guarantees that Sn and the electrochemically active metals can operate as volume-releasing phases for each other alternately (Trifonova et al., 2002;Zhang, 2011). He and his co-workers have reported a colloidal synthesis of monodisperse SnSb nanocrystals that deliver high specific capacities of 700 and 600 mAh/g at 0.5 and 4.0 • C after 100 cycles, respectively (He et al., 2015).
Graphene with its excellent electrical conductivity, flexibility, and high specific surface area can be an ideal buffering matrix for tin-based anodes (Li and Kaner, 2008). In 2015, Luo et al. synthesized a novel anode where tin nanoparticles were encapsulated into graphene backboned carbon foam (Luo B. et al., 2016). Graphene and the outermost carbon coating serve as a physical boundary to prevent the aggregation of welldistributed tin nanoparticles and alleviate the huge volume changes of tin particles. The unique structure is prepared by uniformly growing SnO 2 on the surface of graphene oxide and coating with porous carbon through a hydrothermal processes, finally calcinating in a reducing atmosphere. The resulting composite shows excellent cycle stability and exceptional rate performance in LIBs as well as in SIBs. A reversible specific capacity of 506 mAh g −1 can be achieved at a current density of 400 mAh/g and retained at 270 mAh/g, and even at 3,200 mA/g after 500 cycles (Figure 1). A summary of anode materials, synthetic methods, and electrochemical performance in tin-based anode composites is shown in Table 1 for comparison.
SnO 2 -Based Composites
Tin oxide materials were first discovered and applied in LIBs with a high specific capacity by Idato et al. from Fuji Photo Film in 1997 (Idota et al., 1997). From then on, SnO 2 -based anodes in LIBs have drawn considerable attention because of their high theoretical capacity, resource availability, environmental benignity, and low operating potentials (0.3 and 0.5 V vs. Li + /Li in charge and discharge processes; Li R. et al., 2019). The chemical reactions of SnO 2 with lithium electrodes involve the following two steps (Courtney and Dahn, 1997;Chen and Lou, 2013; Zhou et al., 2013b): The theoretical specific capacity for bulk SnO 2 electrodes is 780 mAh/g, which includes conversion reactions and further alloying/dealloying reactions. It is worth noting that the conversion reactions of bulk SnO 2 to Sn are irreversible but can be partly reversible for nanosized SnO 2 and the theoretical specific capacity can be up to 1,484 mAh/g (Kim et al., 2005;Zhang et al., 2009). Like Sn, the as-formed Sn from SnO 2 suffers from huge volume changes (250%) in alloying/dealloying processes and what's worse, the inner stress originating from volume changes causes pulverization of the SnO 2 electrodes. The conversion reaction and pulverization of the SnO 2 electrode brings about a severe capacity decrease in the SnO 2 . Another issue that needs to be mentioned is that the Sn particles from conversion reactions tend to agglomerate into Sn clusters that will weaken the electrochemical activity Deng et al., 2016). These flaws are the main limitations for the commercialization of SnO 2 -based anodes in LIBs.
To deal with the defects of SnO 2 -based electrodes, the adopted strategies are summarized as follows. The first strategy is to convert bulk SnO 2 particles into nanosized particles and simultaneously design nanostructures such as nanospheres, nanotubes, and nanofilms . The nanostructures can accommodate volume changes and shorten the diffusion length for electrons and lithium ions, but the accompanying negative effect for nanostructure materials is that the high surface energy will lead to the agglomeration of nanoparticles, which is electrochemically unfavorable (Chen and Lou, 2013).
Additionally, structure design alone cannot compensate for the whole volume change whilst producing the desired electrochemical performance. Hence, another strategy is proposed, which is to combine the designed architecture with carbonaceous materials including carbon nanotubes, amorphous carbon, hard carbon, and graphene (Read et al., 2001;Yang et al., 2013;Zhou et al., 2016). Carbonaceous materials not only prevent nano SnO 2 and as-formed Sn grains from agglomeration by creating a physical barrier, but they also improve the overall electronic conductivity of the SnO 2 -based composite.
When it comes to size control of SnO 2 in LIBs, it is not found that as the SnO 2 particles get smaller, the better the electrochemical performance becomes. As the size of SnO 2 particles decreases, the SEI layer becomes larger, which hinders SnO 2 from reacting with lithium ions . According to Ahn et al., the optimum size of colloidal synthesis of SnO 2 particles is ∼11 nm during Li + insertion/extraction processes (Ahn et al., 2004). A series of sizes of SnO 2 hollow spheres as investigated by Kim et al. demonstrated that SnO 2 hollow spheres with a size of 25 nm showed the best electrochemical performance (750 mAh/g after 50 cycles at a current density of 100 mA/g; Kim et al., 2013). Moreover, SnO 2 nanoparticles synthesized via the hydrothermal method with a size of 3 nm deliver the best reversible capacity (740 mAh/g after 60 cycles at 1,800 mA/g) compared to the ones at 4 and 8 nm (Kim et al., 2005). As a consequence, the optimum size for SnO 2 nanoparticles varies for different fabrication processes.
Recently, Jiang et al. have shown that well-designed cob-like SnO 2 nanoparticles coated with polydopamine and prepared by a hydrothermal processes exhibit an excellent rate capability and a long cycle life at around 1,400 mAh/g at a current density of 160 mA/g after 300 cycles . Bushlike hydroxypropyl cellulose-graft-poly(acrylic acid) (HPC-g-PAA) and Na 2 SnO 3 ·3H 2 O were used as the template and SnO 2 precursor, respectively. SnO 2 particles with an average size of 5 nm were uniformly grown on the graft of HPC-g-PAA template, and gaps of 3-5 nm among SnO 2 particles could be observed, which allowed it to accommodate for volume changes of SnO 2 particles in the electrode. Moreover, the final carbonized polydopamine coating was shown to help form stable SEI layers, which is helpful to enhance the cycle stability (Figure 2).
Beyond the use of carbon, transition metal compounds are also regarded as an effective component to be introduced into SnO 2 electrodes with syngeneic effects of combined materials. TiO 2 , for example, is a very stable LIB anode material because of its outstanding electrochemical stability with only a slight volume change (3-4%) even in a high current density . However, TiO 2 is restricted by a low theoretical capacity (178 mAh/g), so TiO 2 is often used as a supporting backbone or a protective layer for unstable active materials like SnO 2 (Liu H. et al., 2015). Tian et al. have proposed a well-designed nanostructure where SnO 2 particles are encapsulated in TiO 2 hollow nanowires (Tian et al., 2014). The composite employs SnO 2 embedded carbon nanowires as a template after being coated with TiO 2 and calcinated in air. Void spaces between SnO 2 particles and TiO 2 shells have been demonstrated through TEM analysis. The voids offer space to accommodate volume changes of SnO 2 nanoparticles during the charge/discharge process. With this unique yolk-shell structure and the role of TiO 2 in the composite, the final SnO 2 @TiO 2 composite exhibits a great cycle stability (445 mAh/g at a current density of 800 mA/g after 500 cycles). A summary of anode materials, synthetic methods, and electrochemical performances upon some SnO 2based anodes are pointed out in Table 2. Momma et al. and Brousse et al. have revealed that tin sulfides could also be used as novel anode materials in LIBs (Brousse et al., 1998;Momma et al., 2001). SnS 2 materials possess superior physicochemical properties with a theoretical specific capacity of 645 mAh/g and a unique layered hexagonal CdI 2 -type crystal structure that is composed of tin cations sandwiched between two layers of close-packed sulfur anions in octahedral coordination, in which adjacent sulfur layers are linked with weak Van der Waals interactions and the interlayer intervals are about 0.59 nm (Morales et al., 1992;Lefebvre et al., 1997;Song et al., 2013;Deng et al., 2014;Li R. et al., 2019). Layer voids in this unique configuration are beneficial for the Li + insertion process and can partially accommodate the volume change (Chen et al., 2017). However, integral volume changes and poor electronic conductivity of SnS 2 are inevitable, which needs to be improved and one set of adopted electrochemical reactions have been put forward, which are the following (Momma et al., 2001;Kim et al., 2007):
SnS 2 -Based Composites
Sn + xLi + + xe↔LixSn(0≤x≤4.4) It can be obviously observed from the above equations that the reaction mechanism of SnS 2 with lithium is very similar to the lithiation and delithiation of SnO 2 . In the first discharge cycle, metallic tin and amorphous Li 2 S are formed during the irreversible conversion of SnS 2 , where active Sn can be coated with the inactive Li 2 S, mitigating the volume changes of the electrode to some extent . With further charge and discharge processes, alloying/dealloying reactions of tin with lithium ions are reversible, but the capacity reduces rapidly due to the irreversible conversion and severe pulverization of SnS 2 electrodes. Analogously, morphology design and the introduction of a conductive phase that accommodates volume changes, like amorphous carbon and graphene, can largely alleviate the volume changes of SnS 2 in charge and discharge processes (Zhuo et al., 2012). Since the microstructure of layered SnS 2 materials has some resemblance to 2D graphene, the combination of them is more compatible than other dissimilar materials like SnO 2 , Sn, and Si (Bin et al., 2019). Few-layer SnS 2 /graphene hybrid materials synthesized using L-cysteine as a ligand in the solution-phase method have been reported by Chang et al. which can deliver a reversible specific capacity of 920 mAh/g at a current density of 100 mA/g . Additionally, graphene can be functionalized by doping with nitrogen, fluorine, or sulfur elements, and the doped graphene generates more defects and active sites which significantly enhances the electrochemical activity and conductivity (Guo et al., 2011). Zheng et al. have reported a large-scale and facile synthetic route for SnS 2 nanoparticles coated with S-doped graphene (SnS 2 /S-rGO). The electrochemical stability of SnS 2 /S-rGO particles is much better than that of the undoped SnS 2 /rGO, in which the SnS 2 /S-rGO can possess a discharge specific capacity of 947 mAh/g whereas the SnS 2 /rGO is about 700 mAh/g after 200 cycles at 1A/g . This result can be mainly ascribed to the stronger interaction of S-doped graphene with SnS 2 particles. Wu et al. have presented a well-designed stable H-TiO 2 @SnS 2 @PPy composite by growing SnS 2 sheets on hydrogen treated TiO 2 (H-TiO 2 ) nanowires and coating with carbonized polypyrrole (PPy), in which H-TiO 2 provides some advantages over untreated TiO 2 . The key reason is that H-TiO 2 structurally possesses more defects than the untreated TiO 2 , which provides increased conductivity and stronger chemical interactions with SnS 2 (Ti-S) . Furthermore, the outermost carbonized PPy layer can accommodate the volume change to some degree as well as boosting the electronic conductivity. With the synergistic effects of the mentioned materials, the final H-TiO 2 @SnS 2 @PPy composite can deliver an outstanding electrochemical stability with a high discharge specific capacity of 508.7 mAh/g at 2.0 A/g after 2,000 cycles (Figure 3). A summary of anode materials, synthetic methods, and electrochemical performance of SnS 2 -based composites in LIBs have been displayed in Table 3.
TIN AND TIN COMPOUNDS IN SIBs
The revival of sodium-ion batteries (SIBs) owes mainly to the low cost and abundance of sodium on earth. Although the intercalation mechanism of sodium and lithium are similar when used as electrodes in secondary alkali metal batteries, the larger radius size of Na + (1.09 Å) compared to Li + (0.74 Å) makes it challenging to find a suitable Na + host with both excellent cycle stability and a relatively high capacity (Luo W. et al., 2016;. Graphite is the most used anode in commercial LIBs but cannot insert Na + effectively, which is due to the mismatching of graphite's interlayer interval (0.334 nm) with the larger radius of Na + (Chevrier and Ceder, 2011). Moreover, Si is a very promising anode material for LIBs as it has a theoretical discharge specific capacity of 3,579 mAh/g, and some Si-based materials have been commercialized, but it cannot react with Na + in the same manner as LIBs. This is because Na-induced lattice disturbance are remarkable in Si materials as they become endowed with small interstitial space and high stiffness (Chou et al., 2015;Fang et al., 2019). Interestingly, Sn, SnO 2 , and SnS 2 can be applied in SIBs with relatively high capacity, low cost, and proper low charge/discharge potentials vs. Na/Na + , due to the minor Na-induced lattice disturbance in Sn-based materials (Guo et al., 2011;Zhu et al., 2013;. However, these active materials still go through huge volume changes, and the volume change is even severer in SIBs, which leads to serious pulverization of these brittle active materials ending up with rapid capacity decay and poor cycle stability (Ellis et al., 2012). The coping strategies of Sn, SnO 2 , and SnS 2 in SIBs are analogous to ones in LIBs, which are the nanostructure design of these active materials and the process of simultaneously introducing a second phase that buffers the volume change (Nayak et al., 2018). Major improvements for Sn, SnO 2 , and SnS 2 in SIBs have been separately detailed in the following sections and the anode materials, synthetic methods and electrochemical performance of FIGURE 3 | SEM images of H-TiO 2 @SnS 2 (A) and H-TiO 2 @SnS 2 @PPy (B), cycling performance (C) of SnS 2 @PPy, H-TiO 2 @SnS 2 @PPy, and N-TiO 2 @SnS 2 @PPy at 2.0 A/g. Reproduced from Wu et al. (2019) with permission from Copyright (2019) WILEY-VCH. Sn, SnO 2, SnS 2 -based anode composites in SIBs are summarized in Table 4.
Sn-Based Composites
The theoretical capacity of Sn as anode materials in SIBs (Na 15 Sn 4 ) is about 847 mAh/g, but volume changes of Sn electrodes during charge-discharge processes are as high as 525 %, which is much higher than Sn in LIBs (Qian et al., 2014). As reported by Qian et al., the capacity of pure Sn electrodes in SIBs falls to zero in only five cycles, which can be explained by the pulverization of active materials during Na + insertion/extraction processes (Ellis et al., 2013). Sn-based intermetallic alloy anodes have been demonstrated to be a reasonable solution to address the short cycle life of Sn (Li J. et al., 2019). A Sn-Cu alloy is a stable active/inactive alloy with a relatively high capacity in LIBs where the addition of Cu significantly increases the stability of the alloy. As mentioned in the LIBs section, the Cu 6 Sn 5 alloy is more stable than other Sn-Cu intermetallics, but the application of Cu 6 Sn 5 in SIBs is hampered by the short diffusion depth owing to the larger size of Na + . Regarding this, Lin et al. have reported using a Sn 0.9 Cu 0.1 alloy in SIBs (Lin et al., 2013). In spite of a low initial discharge specific capacity of 250 mAh/g, the capacity gradually increased to 440 mAh/g in 20 cycles without capacity loss after 100 cycles. Sn-P intermetal is an emerging SIB anode material with balanced properties (Luo W. et al., 2016). Although the theoretical specific capacity of Sn 4 P 3 (1,132 mAh/g) is significantly inferior to the pure P (2,560 mAh/g), electronic conductivity and theoretical volumetric capacity are much better than pure P in SIBs (Kim et al., 2014;Lan et al., 2017). Liu et al. have synthesized uniform yolk-shell Sn 4 P 3 @C nanoparticles for SIBs where Sn 4 P 3 nanoparticles are encapsulated in hollow carbon spheres rendering some void space for the volume change of Sn 4 P 3 whilst maintaining an intact microstructure (Liu J. et al., 2015). The carbon shell helps to form a stable SEI layer and strengthen the overall electronic conductivity of the composite. An initial discharge specific capacity of 790 mAh/g for yolk-shell Sn 4 P 3 @C nanospheres was determined and retained a high reversible specific capacity of 515 mAh/g after 50 cycles at 100 mA/g (Figure 4).
SnO 2 -Based Composites
Sodiation/desodiation reactions of the SnO 2 electrode are very similar to the lithiation/delithiation process, which include the conversion of SnO 2 and reversible alloying/dealloying reactions contributing to the total theoretical specific capacity of 1,378 mAh/g (Su et al., 2013). SnO 2 is one of the most extensively investigated anode materials in LIBs and nowadays, some of the SnO 2 -based composites have reached the theoretical capacity of SnO 2 with an excellent cycle life.
Herein, successful strategies in LIBs to address volume changes are advised for employment in SIBs as well (Chen and Lou, 2013).
Huang et al. have reported a facile in situ synthesis of 3D porous carbon encapsulated SnO 2 nanoparticles (SnO 2 -PC) that exhibits a great cycle stability with a discharge specific capacity of 208.1 mAh/g at 100 mA/g after 250 cycles and SnO 2 -PC with a SnO 2 weight percentage of 74.47 % demonstrated an extraordinary rate capability with a discharge specific capacity of 100 mAh/g at 1,600 mA/g after 1,000 cycles (Huang et al., 2016). The greatly improved electrochemical performance of the asprepared SnO 2 -PC composite owes to the porous carbon matrix that can alleviate volume changes of SnO 2 in the sodiation/desodiation process and improve the electronic conductivity of the composite.
Heterostructure has the advantage of high-speed electron transfer because of the interface effect. The heterojunction of nanocrystals with different band-gaps has been proven to enhance surface reaction kinetics and to provide increased charge transport. Zheng et al. have employed SnS in a C@SnO 2 @graphene composite in SIBs. The C@SnS/SnO 2 @graphene composite exhibits a high rate capability and long cycle life with a high capacity, which can be ascribed to the heterostructure of SnS/SnO 2 which further improves the electronic conductivity and FIGURE 4 | SEM (A) and TEM images (B) of yolk-shell Sn 4 P 3 @C. Cycling performance (C) of yolk-shell Sn 4 P 3 @C at 100 mA/g. Reproduced from Liu J. et al. (2015) with permission from Copyright (2015) Royal Society of Chemistry. diffusion of Na + in the electrode (Zheng et al., 2016). C@SnS/SnO 2 @graphene achieves a reversible discharge specific capacity of 713 mAh/g at 30 mA/g after 70 cycles, which is higher than C@SnS@graphene (around 600 mAh/g) and C@SnO 2 @graphene (around 400 mAh/g). By increasing the current density to 810 and 2,430 mA/g, the discharge specific capacity can be retained at 520 and 430 mAh/g, respectively ( Figure 5).
SnS 2 -Based Composites in SIBs
As mentioned, SnS 2 has a special layered structure where tin cations are sandwiched between two layers of sulfur anions. The spacing between two adjoining two layers (d 002 = 5.90 Å) is larger than the radius of Na + (d 002 = 1.09 Å), which allows the intercalation and diffusion of Na + throughout the electrode effectively (Zheng et al., 2016). However, a pure SnS 2 electrode contends with poor conductivity and severe pulverization. It has been demonstrated from previous studies that combining SnS 2 with conductive materials will notably strengthen the electrochemical performance (Ren et al., 2017;. The unique 2D layer structure of SnS 2 means it is highly compatible with graphene and can provide an increase in electronic conductivity. In 2014, Liu et al. discovered that exfoliated SnS 2 restacked on graphene showed a remarkable electrochemical performance with a discharge specific capacity of 650 mAh/g at 200 mA/g after 100 cycles . The excellent performance can be ascribed to the ultrasmall exfoliated-SnS 2 layers being utilized fully when used as the electrode. Jiang et al. have reported a sandwich-like SnS 2 /graphene/SnS 2 composite with expanded interlayers produced by a one-step hydrothermal synthesis, where both sides of the reduced graphene oxide sheets is covalently decorated with ultrathin SnS 2 nanosheets . The enlarged interlayer distance of SnS 2 is about 8.03 Å, which assists the insertion/extraction of Li + /Na + with rapid transport kinetics. As a result, SnS 2 /graphene/SnS 2 composites have excellent electrochemical properties both in LIBs (see also in the LIBs section) and SIBs. To be specific for SIBs, the reversible discharge specific capacities of 1,295 mAh/g and 765 mAh/g are delivered at a current density of 0.1 and 10 A/g, respectively (Figure 6). Additionally, according to the structural characterizations of SnS 2 /graphene/SnS 2 electrodes after 200 cycles, morphology changes and significant particle agglomeration cannot be clearly detected. Some reasons for the superiority of the SnS 2 /graphene/SnS 2 composite are that the graphene sheet is sandwiched between SnS 2 layers with enhanced conductivity and it has a strong structural integrity.
SUMMARY AND OUTLOOK
Sn, SnO 2 , and SnS 2 have been extensively studied as substitutes for graphite in LIBs and for potential application in SIBs. Either in LIBs or SIBs, the ultimate problem that needs to be addressed is the huge volume change of Sn with Li + or Na + during the alloying/dealloying processes. This problem has been largely addressed by introducing one or more metals and/or compounds into the system and at least one additive which can act as an inactive buffering matrix. Also, the use of reasonable nanostructure design can tactfully mitigate the volume change and facilitate the diffusion of Li + (Na + ) and electrons. Due to these efforts, some of these tin-based anode materials have reached their maximum theoretical capacity. So far, the real practical uses of tin-based anodes is still very scarce in both LIBs and SIBs, which is mainly due to the tedious synthetic procedures, high costs and low yields. Recently, much work has focused on large-scale synthetic methods. We believe that a cost-effective and facile fabrication process which takes morphology into consideration can promote the application of tin-based anodes in commercial LIBs and large-scale energy storage equipment in SIBs.
AUTHOR CONTRIBUTIONS
HM and WX contributed conception and design of the study. CM, RL, and LY organized the database. HM wrote the first draft of the manuscript. WX revised the whole manuscript.
FUNDING
This work was financially supported by the National Natural Science Foundation of China (Nos. 51874046 and 51404038). | 7,575.4 | 2020-03-19T00:00:00.000 | [
"Materials Science"
] |
Oscillation and Asymptotic Properties of Differential Equations of Third-Order
: The main purpose of this study is aimed at developing new criteria of the iterative nature to test the asymptotic and oscillation of nonlinear neutral delay differential equations of third order with noncanonical operator ( a ( where ι ≥ ι 0 and w ( ι ) : = x ( ι ) + p ( ι ) x ( ι − τ ) . New oscillation results are established by using the generalized Riccati technique under the assumption of (cid:82) → . Our new results complement the related contributions to the subject. An example is given to prove the significance of new theorem.
Introduction
The objective of this paper is to provide oscillation theorems for the third order equation as follows: where a(ι), b(ι), p(ι), q(ι) ∈ C([ι 0 , +∞)), a(ι), b(ι) > 0, a (ι) ≥ 0, q(ι) ≥ 0, β ≥ 1 and 0 ≤ p(ι) ≤ p 0 ≤ 1. The main results are obtained under the following assumptions: We intend that for a solution of (1), we mean a function x(ι) ∈ C([T x , ∞)), T x ≥ ι 0 , which has the property w ∈ C 1 ([T x , ∞)), b w ∈ C 1 ([T x , ∞)), a ((b w ) ) β ∈ C 1 ([T x , ∞)) and satisfies (1) on [T x , ∞). We only consider those solutions x of (1) which satisfy sup{|x(ι)| : ι ≥ T} > 0 for all T ≥ T x . We start with the assumption that Equation (1) does possess a proper solution. A proper solution of Equation (1) is called oscillatory if it has a sequence of large zeros lending to ∞; otherwise, we call it non-oscillatory. Neutral/delay differential equations of the third order are used in a variety of problems in economics, biology, and physics, including lossless transmission lines, vibrating masses attached to an elastic bar, and as the Euler equation in some variational problems; see Hale [1]. As a result, there is an ongoing interest in obtaining several sufficient conditions for the oscillation or non-oscillation of the solutions of different kinds of differential equations; see as examples of instant results on this topic.
However, to the best of our knowledge, only a few papers have studied the oscillation of nonlinear neutral delay differential equations of third order with distributed deviating arguments; see, for example, [2][3][4][5]. Recently, Haifei Xiang [6] and Haixia Wang et. al [7] studied the oscillatory behavior of Equation (1) under the following assumption: Motivated by this above observation, in this paper, we extend the results under the following assumption: Motivated by these reasons mentioned above, in this paper, we extend the results using generalized Riccati transformation and the integral averaging technique. We establish criteria for Equation (1) to be oscillatory or converge to zero asymptotically with the assumption of (2). As is customary, all observed functional inequalities are assumed to support eventually; that is, they are satisfied for all ι that are large enough.
Main Results
For our further reference, let us denote the following: and and Then, every solution x(ι) of (1) is either oscillatory or tends to 0.
We will present an example to illustrate the main results.
A Concluding Remark
We established new oscillation theorems for (1) in this paper. The main outcomes are proved via the means of the integral averaging condition, and the generalized Riccati technique under the assumptions of ι ι 0 a −1/β (s)ds < ι ι 0 1 b(s) ds = ∞ as ι → ∞. Examples are given to prove the significance of the new results. The main results in this paper are presented in an essentially new form and of a high degree of generality. For future consideration, it will be of great importance to study the oscillation of (1) when −∞ < p(ι) ≤ −1 and |p(ι)| < ∞. | 924 | 2021-08-18T00:00:00.000 | [
"Mathematics"
] |
Factors Affecting the Occurrence of Suspected Contact Dermatitis in the Traditional Fishery Processing Area (PHPT) of Muara Angke
Contact dermatitis is considered trivial for some people. There are many environmental factors could be the main factors causing contact dermatitis on salted-fish processor in Pengolahan Hasil Perikanan (PHPT) or Fishery Processing Area of Muara Angke such as water, temperature, fish, and humidity. These were due to the geographical location of the coastal determine the environmental factors which related to the contact dermatitis suspected on salted-fish processor in the PHPT Muara Angke area. The study was analytic observational with cross sectional design, with subjects were 112 samples. Results showed that 53,6% of salted-fish processor were suspected contact dermatitis. The suspected contact dermatitis increased by poor length of contact (OR= 9.42, 95% CI= 2.91 to 30.53, p=0,000), poor frequency of contact (OR= 4.70, 95% CI= 1.10 to 20.20, p=0,000), and bad temperature (OR= 3.74, 95% CI= 1.34 to 10.45, p=0,003). The length of contact can makes the lack of permeability of the skin, so the material of irritant can infiltrate properly. To prevent contact dermatitis the distribution of clean water must be comprehensive and do as the standards. The building of the processing room has to protect the worker from the heat temperature. Personal hygiene and personal protective must be increase to protect the worker from disease and to keep the quality of the product.
Introduction
One of the factors that affects health is the environment. Environmental health is all physical, chemical, and biological factors outside the body that affect behavior including assessment and control that have the potential to affect health [45].
Many environmental exposures can affect the skin, thereby reducing the mechanism of skin regulation and skin repair that cause dermatological diseases [26]. Skin is a part of the body that often receives damages from work, one of which is the work of processing fish. The work of the majority of Muara Angke residents, one of many regions in North Jakarta Indonesia, is fisherman or fish trader due to the geographical location on the coast. One disease that can attack fish processors is a skin disorder such as dermatitis. Dermatitis is an inflammatory reaction that occurs on the skin in response to exogenous and endogenous influences. Contact dermatitis is a dermatitis caused by a substance or substance that attaches to the skin [10]. Symptoms of contact dermatitis can be itchiness, redness, flaking of the skin, to the appearance of vesicles. In Jakarta, dermatitis is found in 100 people per 1000 population. The results of the 2007 RISKESDAS also suggested that there were 80.3 cases of dermatitis per 1000 residents in the North Jakarta area. Although North Jakarta is not the region with the most dermatitis in Jakarta, but in publications carried out by Floating Hospital Dr. Lie, from 631 patients in Muara Angke on March 16, 2014 it was noted that dermatitis was the second most common disease after ARI (Acute Respiratory Infection). In the initial survey of researchers on November 18, 2017, it was found that 3 out of 5 salted fish processors experienced symptoms of contact dermatitis such as itchiness and redness. Contact dermatitis can occur due to repeated contact with weak irritants, one of which is water [10], while water is one of the basic elements and is needed for human life [36]. The coastal area is an area that often experiences difficulties with clean water. The 2017 Geological Agency of the Ministry of Energy and Mineral Resources stated that 80% of groundwater in the Jakarta Groundwater Basin (CAT) area did not meet the Minister of Health's standard No.492 of 2010 concerning Requirements for Qualification of Drinking Water. North Jakarta is the worst area where in general the CAT contains high levels of Fe (iron) and the content of Na (Sodium), Cl (Chloride), TDS (Total Dissolve Solid) and DHL (Electrical Conductivity) due to the influence from water intrusion. These elements can be potential irritants / allergens that heavy metals (metals) are the most common materials causing allergic contact dermatitis [24]. The type of dug well water source (groundwater) affected the incidence of dermatitis in Kedungrandu Village, Banyumas [18].
It is not impossible that this can happen in Muara Angke where groundwater quality data is indeed bad. Heavy metals can also be found in marine fish as the main ingredient in processed salted fish. The Jakarta Bay is one of the world's bays that has marine pollution, which resulted in some of the marine products being polluted including the fish. [25].There was contamination of lead (Pb) and cadmium (Cd) in a number of fishes in Jakarta Bay [25], while it was proven that several types of fishes contain elements of mercury (Hg) [13], [32], although from these studies it is said that it is still below the threshold according to Minister of Health Regulation 492 of 2010 but if contacted repeatedly it is not impossible to cause contact dermatitis in salted fish processors.
Coastal areas are low-lying areas that results in the air temperature being hotter than other plains. One of the factors that influence the onset of contact dermatitis is temperature [10]. According to the Indonesian Meteorology and Climatology Agency (BMKG) the temperature in North Jakarta reaches 24-33˚C. An average temperature of Muara Angke is 27.7˚C which is quite hot. It was proven that the temperature was related to the incidence of contact dermatitis in tofu makers [12]. Besides temperature, humidity is also one of factors that influence the emergence of contact dermatitis [10]. The Tanjung Priok Maritime Meteorological Station for the North Jakarta region in 2013 recorded an average air temperature of 28.7˚C with a maximum temperature of 35.4˚C and a minimum temperature of 23˚C. Average humidity is 75% with a maximum humidity of 97% and a minimum humidity of 42%. The high temperature and humidity can affect the symptoms of contact dermatitis.
Based on the above and seeing that the majority of fish processors are salted fish processors, the researchers wanted to examine environmental factors related to the incidence of suspected contact dermatitis in the processing of salted fish in the Muara Angke Traditional Fisheries Processing Area (PHPT) North Jakarta in 2018.
Research Design
The research conducted was observational analytic. Observations of salted fish processors in the PHPT area were carried out using cross sectional data collection techniques, where the dependent variable and independent data are observed at the same time period. The researcher will ask the sample using a modified questionnaire from various validated sources.
Population And Samples
The population of this study was the salted fish processor in the Muara Angke PHPT area which had agreed to become a research sample and entered into the inclusion criteria. The number of samples used in this study were 112 samples. The processing houses to be studied are 56, so that each 3 processing house is taken by two respondents. The inclusion criteria for this study were the salted fish processors in the PHPT Muara Angke area in which they were willing to be sampled, salted fish processors who had at least become processors for -/ + 1 year, salted fish processors working without using personal protective equipment rubber gloves and shoes, and salted fish processors with ages 18 to 64 years (productive age according to the Central Bureau of Statistics)
Data Collection Hygrometer
The type of data used in this study is the primary data, which was taken through questionnaires that have been tested for validity and reliability. Questionnaire to determine whether the respondent is suspected of contact dermatitis. The questionnaire will be examined by a doctor at the Pluit Health Center as a related health facility to determine the diagnosis of salted fish processing. Also a direct assessment of temperature and humidity with a thermo.
Result And Discussion
Respondent Characteristics Male respondents are 79 (70.5%) more than female respondents. The highest age of respondents was the age range of 31-40 years at 47 (42.0%). Although that male skin is much thicker than a woman's skin which causes women to be more susceptible to contact dermatitis but this is due to more overall male workers in the Muara Angke Traditional Fisheries Processing Area [28]. In the analysis of the characteristics of the respondent's age the most were ages with a range of 31-40 years which amounted to 47 with a percentage of 42%. Based on researchers' observations this can be due to younger age workers who prefer other jobs such as online motorcycle or online taxi, etc. Given that work as a fish processor is included as heavy labor, is also the cause of fish processors being above the age of 40 years less.
3.2.
Univariate Analysis a. The results of the primary data that have been examined by doctors of the Pluit Health Center show that as many as 60 salted fish processors were suspected of contact dermatitis with a percentage of 53.6% while as many as 52 salted fish processors were not suspected contact dermatitis with a percentage of 46.4%. This is in accordance with the publication of the 2007 RISKESDAS which stated that there were 80.3 cases of dermatitis per 1000 residents in the North Jakarta area. Although North Jakarta is not the region with the most dermatitis in Jakarta, but in publications conducted by the Floating Hospital Dr. Lie, 631 patients in Muara Angke on March 16, 2014 it was noted that dermatitis was the second most common disease after ARI. Also added to the number of potential environmental factors of contact dermatitis that researchers observed in the processing of salted fish in the Muara Angke Traditional Fisheries Processing Area can be one of the causes of the many processes that are suspected of contact dermatitis. Contact dermatitis is dermatitis caused by a substance that attaches to the skin. Two types of contact dermatitis are known, namely irritant contact dermatitis and allergic contact dermatitis. Both can be acute or chronic [10]. b. In the study it was found that the source of water used for processing salted fish is mostly derived from groundwater. From the results of the analysis, 46 (82.1%) processing houses used groundwater sources while only 10 (17.9%) processing houses used other water sources. This was in accordance with the second survey that researchers conducted on April 20, 2018 that most fish processors still use a lot of ground water as their source of water due to lack of costs and the ability to use PAM Jaya water flow, and PAM Jaya has not been able to meet the needs of clean and evenly distributed water [36]. c. There were 30 processors (26.8%) with length of contact of more than 5 hours, followed by 30 processors (26.8%) with length of contact 5-10 hours and as many as 52 processors (46.4%) with a contact duration of more than 10 hours in a day d. There were 16 processors (14.3%) with contact frequency <5 times, 41 (36.6%) processors with contact frequency 5-10 times, and 55 processors (49.1%) with contact frequency more than 10 times a day. The large number of respondents with long contact duration and frequent contact frequency due to a long and frequent treatment process in a day. This makes the two variables potentially a factor associated with suspected contact dermatitis. e. The temperature and humidity of the processing room has an inversely proportional state. At room temperature analysis 36 (64.3%) the processing house with temperature is not optimal. The average temperature is 30,014˚C with a minimum temperature of 25,0˚C and a maximum temperature of 33,2˚C. While only 14 (25.0%) processing houses with humidity are not optimal with an average humidity of 74.25% as well as a minimum humidity of 60.0% and maximum humidity of 97.0%. This is due to the condition of processing houses that cannot withstand the temperature of hot air from outside caused by building materials that still use plywood boards as the main material for processing houses. Meanwhile the dominant temperature of the room's hot air causes the humidity of the room is quite optimal even though there are some processing houses that have moisture below the value due to the heat of the home processing temperature. These two variables can also be the main factors causing contact dermatitis in salted fish processors.
Bivariate Analysis
Bivariate Chi-square analysis showed that there was no significant association between groundwater sources and the incidence of suspected contact dermatitis with p value = 0.396 (p> 0.05). It can be due to the different frequency of contact of each fish processor to groundwater sources so that there are several salted fish processors which, although using ground water as a source of water, do not show symptoms of contact dermatitis. As stated that contact dermatitis can occur due to repeated contact with weak irritants, one of which is water [10]. The results of the bivariate Chi-Square analysis also showed a p value of 0.677 (p> 0.05) which indicated that there was no association between the humidity of the processing house with suspected contact dermatitis in salted fish processing in the PHPT Muara Angke Region. The Chi-square analysis performed showed a significant relationship between length of contact and the incidence of suspected contact dermatitis with a p value = 0,000 (p <0.05). This result is in accordance where contact time is one of the factors causing contact dermatitis [10]. This result is also found a significant relationship between the duration of contact with the incidence of contact dermatitis [46]. This might mean that there was a long contact with the substance in the form of heavy metals that exist in marine fish which, although below the threshold, but with prolonged contact will make it take longer -even if in a small amount for a long period of time it will enter into the skin and cause inflammation so that the length of contact with the suspected incidence of contact dermatitis has a significant relationship. In table 2. showed the value of p value of 0.000 (p <0.05) which indicates that there is an association between the frequency of contact with suspected contact dermatitis in salted fish processor. Frequencies will often induce sensitization to the skin so that if the worker is sensitized even if only a small amount of substance is exposed, it can cause contact dermatitis [10]. Other studies also show that the frequency of contact is related to contact dermatitis [12]. Just like contact time, the frequent frequency of marine fish suspected of being contaminated with heavy metals can cause contact dermatitis in salted fish processors even though the amount of the ingredients is small. High frequency contact can emerge the sensitization phase to the substance on the skin, causing allergic contact dermatitis. In this temperature variable the results of the bivariate Chi-Square analysis showed a p value of 0.003 (p <0.05) which indicated that there was a relationship between the processing house temperature and suspected contact dermatitis in salted fish processors. Only 20 salted fish processing houses have optimal temperatures with a percentage of 35.7%. The average temperature is 30,014˚C where about healthy industrial site requirements the optimum temperature ranges from 18-30 ˚C [21]. This result is in accordance where temperature is also affecting contact dermatitis [10]. Temperatures that are not optimal can facilitate the entry of substances from the environment into the skin [14]. The temperature also became one of the variables that had a relationship with the incidence of contact dermatitis in the custodians of Kaasan Ciputat [12]. The less optimal room temperature is probably caused by processing house buildings that are only made of plywood and wood so that it is less able to maintain the temperature of the room from coastal attacks of hot air.
The results of bivariate Chi-Square analysis showed a p value of 0.677 (p> 0.05) which indicated that there was no relationship between processing house moisture and suspected contact dermatitis in salted fish processors. This is due to the fact that most of the processing house moisture is in the optimal humidity range, which is 65% -90% [20]. Only 14 processing houses with humidity were not optimal with a percentage of 25.0%. With an average humidity of 74.25% which was still within the optimal humidity range. This optimal humidity was likely to be affected by hot weather which causes moisture to be maintained. . The contact length Odds Ratio values was the highest with 9,42 which means that salted fish processors with a contact time> 10 hours 9,42 times were more at risk of being suspected of contact dermatitis than salted fish processors with a contact duration of 5-10 hours. Based on the above, it can be said that length of contact is the variable that most influences the occurrence of suspected contact dermatitis. This result is in accordance with the theory where contact time was one of the factors causing contact dermatitis [10]. The presence of sea water pollution makes some types of marine fish contaminated with heavy metals [43]. There was contamination of Lead (Pb) and Cadmium (Cd) metals in a number of fish in Jakarta Bay Several types of fish contain elements of Mercury (Hg) [13] [32]. Even though these studies were said to be still below the threshold according to Minister of Health Regulation 492 of 2010, the substance in the form of heavy metals that exist in marine fish even though it was below the threshold, but with prolonged contact, it can create a substance which, although in small amounts, will enter the skin and cause inflammation so the contact duration of suspected contact dermatitis had a significant relationship. The sensitizing phase can occur along with the length of contact with the heavy metal, causing allergic contact dermatitis [14].
4.Conclusion
Based on this study it can be concluded that the suspected contact dermatitis increased by poor length of contact, poor frequency of contact and bad temperature in traditional salted fish processors at the Muara Angke PHPT area. The nature of marine fish that has sharp scales can also make physical trauma to fish processors so that heavy metal substances present in fish can enter and make inflammation of the skin for a long time so contact dermatitis occurs. | 4,229.4 | 2020-07-07T00:00:00.000 | [
"Engineering"
] |
Facultative methanotrophs are abundant at terrestrial natural gas seeps
Background Natural gas contains methane and the gaseous alkanes ethane, propane and butane, which collectively influence atmospheric chemistry and cause global warming. Methane-oxidising bacteria, methanotrophs, are crucial in mitigating emissions of methane as they oxidise most of the methane produced in soils and the subsurface before it reaches the atmosphere. Methanotrophs are usually obligate, i.e. grow only on methane and not on longer chain alkanes. Bacteria that grow on the other gaseous alkanes in natural gas such as propane have also been characterised, but they do not grow on methane. Recently, it was shown that the facultative methanotroph Methylocella silvestris grew on ethane and propane, other components of natural gas, in addition to methane. Therefore, we hypothesised that Methylocella may be prevalent at natural gas seeps and might play a major role in consuming all components of this potent greenhouse gas mixture before it is released to the atmosphere. Results Environments known to be exposed to biogenic methane emissions or thermogenic natural gas seeps were surveyed for methanotrophs. 16S rRNA gene amplicon sequencing revealed that Methylocella were the most abundant methanotrophs in natural gas seep environments. New Methylocella-specific molecular tools targeting mmoX (encoding the soluble methane monooxygenase) by PCR and Illumina amplicon sequencing were designed and used to investigate various sites. Functional gene-based assays confirmed that Methylocella were present in all of the natural gas seep sites tested here. This might be due to its ability to use methane and other short chain alkane components of natural gas. We also observed the abundance of Methylocella in other environments exposed to biogenic methane, suggesting that Methylocella has been overlooked in the past as previous ecological studies of methanotrophs often used pmoA (encoding the alpha subunit of particulate methane monooxygenase) as a marker gene. Conclusion New biomolecular tools designed in this study have expanded our ability to detect, and our knowledge of the environmental distribution of Methylocella, a unique facultative methanotroph. This study has revealed that Methylocella are particularly abundant at natural gas seeps and may play a significant role in biogeochemical cycling of gaseous hydrocarbons. Electronic supplementary material The online version of this article (10.1186/s40168-018-0500-x) contains supplementary material, which is available to authorized users.
Background
Methane is an integral component of the global carbon (C) cycle and one of the most significant contributors to climate change since it has a global warming potential approximately 34 times greater than carbon dioxide [1]. Atmospheric concentrations of methane have been steadily rising since the Industrial Revolution, currently around 1.8 ppm by volume [2]. Approximately 70% of the total 500 to 600 million tonnes methane emitted [2] is new methane, i.e. produced by methanogens during microbial degradation of organic matter, largely under anaerobic conditions. This biological process is particularly prevalent in wetlands, landfills, rice paddies, the rumen of cattle and the hindgut of termites. The remaining 30% of the methane released into the atmosphere arises from the thermogenic decomposition of fossil organic material to geological methane and other gases collectively known as natural gas [2]. Natural gas consists usually of geological methane and substantial amounts of the short chain alkanes ethane, propane and butane [3], and from subsurface reservoirs it reaches the surface of the Earth through natural seepage or mining and extraction activities.
Globally, geological methane from natural gas seeps is the second largest natural source, after wetlands, and apart from methane, it also contributes up to 3-6 million tonnes of climate-active ethane and propane per year [4]. Seepage of natural gas occurs in a wide range of environments, e.g. hydrocarbon-prone sedimentary basins, both as visible features including dry gas seeps and mud volcanoes, or in the marine realm as hydrothermal vents or shallow marine methane seeps, but also as invisible microseepage [3,[5][6][7][8]. Volcanic and geothermal systems, hot and cold springs or alkaline soda lakes, may also release non-negligible amounts of methane [9][10][11]. Spectacular releases of natural gas are observed at the Eternal Flame Falls in Chestnut Ridge Park, New York, where seep gas contains methane plus 35% ethane and propane [12]. Gas releases caused by human activity range from incidents such as the Deepwater Horizon disaster of 2010 (where 170,000 tonnes of natural gas escaped to the marine environment) to operational releases including leaking gas pipelines and coal mining activities [13]. Unintentional releases of natural gas are widespread and likely to increase, especially with the exploitation of unconventional resources including shale gas extraction, with associated concerns of environmental pollution and climate change [14][15][16][17].
Although a vast amount of methane escapes to the atmosphere, much more would escape if it were not for the activity of microbes that consume methane. Over half of the methane produced by methanogens in wetlands has been reported to be consumed by aerobic methanotrophs [18][19][20]. These methane-oxidising bacteria are a remarkable group of microbes that use methane as their sole source of carbon and energy. Aerobic methanotrophs are mainly Gram-negative bacteria of the classes Alphaproteobacteria and Gammaproteobacteria. They are usually obligate methanotrophs, unable to grow on other alkanes and multi-carbon compounds, except for only a few strains, which can grow on acetate and ethanol [21,22]. During the metabolism of methane by aerobic bacteria, the first step is the oxidation of methane to methanol, which is catalysed by one of two enzymes: a membrane-bound, copper-containing particulate methane monooxygenase (pMMO) or a diiron centre containing soluble methane monooxygenase (sMMO) [23][24][25][26]. Both conventional enrichment experiments and cultivation-independent studies indicate that obligate methanotrophs are widespread in the environment, especially in areas rich in methane [27][28][29][30].
Specialised microbes growing on other gaseous alkanes such as ethane or propane (propanotrophs) have also been characterised, including metabolically versatile Actinobacteria (Rhodococcus and Mycobacterium) [31][32][33], Gammaproteobacteria (Psuedomonas) [34] or Betaproteobacteria (Thauera) [35,36] that grow on many multi-carbon compounds. Most propanotrophs contain a propane monooxygenase enzyme (PrMO) with similarities to sMMO but do not grow on methane [37,38]. An exciting development in the study of biological methane oxidation was the isolation of facultative methanotrophic strains of the genus Methylocella from acidic peat, tundra and forest soils [39][40][41]. These unusual methanotrophs grow on methane as well as some multi-carbon compounds including acetate, pyruvate, succinate and gluconate [42,43]. Methylocella belong to alphaproteobacterial family Beijerinckiaceae containing generalist organotrophs (e.g. Beijerinckia indica), facultative methanotrophs (e.g. Methylocella silvestris) and obligate methanotrophs (e.g. Methylocapsa acidiphilia) [44]. Examination of the Methylocella silvestris BL2 genome revealed that unlike most methanotrophs, Methylocella did not contain genes for pMMO but oxidised methane using the sMMO enzyme only [45]. Surprisingly, genes encoding a PrMO were also identified in the Methylocella silvestris BL2 genome. Crombie and Murrell [46] reported for the first time that Methylocella silvestris BL2 derives growth benefits from oxidising methane and propane simultaneously using two distinct enzymes, sMMO and PrMO. This discovery overturned the dogma that degradation of methane and other alkane components of natural gas requires different groups of microbes.
The unique metabolic capabilities of Methylocella have profound implications for the biological consumption of natural gas in the environment. Methylocella, being able to use most components of natural gas for growth, may have a competitive edge over less versatile obligate methanotrophs and propanotrophs in environments rich in natural gas. As little is known about the distribution of Methylocella in the environment, the purposes of this study were to improve molecular methods for detection of Methylocella in environmental samples and to test the hypothesis that Methylocella-like facultative methanotrophs are prevalent in thermogenic, natural gas seep environments.
Results and discussion
Methanotrophs present at biogenic methane and natural gas seep environments Since Methylocella species are the only methanotrophs known to use methane and the other components of natural gas such as ethane and propane simultaneously [46], we hypothesised that Methylocella may be abundant in environments exposed to thermogenic natural gas seeps. For centuries, natural gas seeps have been reported in New York state, part of the Appalachian Basin in the USA [47][48][49], exemplified by towns such as Gasport (Niagara County), named in 1826. Many of these seeps emit natural gas, which can be ignited (Additional file 1: Figure S1). The best-known example of such seep sites is the "Eternal Flame" in Chestnut Ridge National Park [12]. We explored many such documented and undocumented thermogenic gas seeps in this region for sample collection and found that methane and also considerable amounts of ethane and propane were present in gas collected directly from the seep sites (Additional file 1: Table S1).
Libraries of 16S rRNA genes were generated from DNA extracted from samples from diverse environments known to be exposed to biogenic methane or thermogenic natural gas seeps (Fig. 1). Illumina Mi-Seq yielded 617,613 good quality sequence reads in total from 15 samples, averaging 41,178 sequences per sample (Additional file 2: Table S2). Sequence analysis showed that out of 20 phyla at an abundance of higher than 1% in one or more of the samples, Proteobacteria (alpha, beta and gamma), Actinobacteria, Acidobacteria, Bacteroidetes, Chloroflexi, Firmicutes, Planctomycetes and Verrucomicrobia contributed substantially to the bacterial communities ( and Beijerinckiaceae (Pipe Creek). When analysed at the genus level, 16S rRNA gene sequences were resolved into 1062 operational taxonomic units (OTUs), of which 129 OTUs were found at an abundance of higher than 1% in at least one sample (Additional file 2: Table S2).
Detailed analysis of 16S rRNA gene amplicons revealed that of the methanotrophs, the genera Methylobacter, Methylocella, Methylococcus, Methylocystis, Methylosinus and possibly Verrucomicrobia dominated all samples (Fig. 2). Methanotrophs accounted for 0.62-17.90% of the total bacterial population present in all 15 environmental samples (Fig. 2). Methylocystis, Methylosinus, Methylocella and Verrucomicrobia dominated in samples from sites of biogenic methane emissions (Lakenheath Fen Nature Reserve and Moor House Nature Reserve) (Fig. 2). Methylococcus, Verrucomicrobia and Methylocella were abundant in Andreiasu Everlasting Fire (Fig. 2), a Romanian mud volcano site reported to have largely thermogenic natural gas emissions [5,6] but with the potential of biogenic methane emissions as revealed by the presence of methanogenic archaea in nearby mud volcanoes [50]. Methylocystis, Methylobacter and Methylocella were the dominant methanotrophs in microbial mat samples from Movile Cave, a very unusual, dark chemoautotrophic habitat also known to contain methane from both biogenic and thermogenic activities [51][52][53]. Like other environmental samples tested in this study, Verrucomicrobia were also found in Movile Cave samples, but we are not certain if the Verrucomicrobia detected by 16S rRNA in these environmental samples were methanotrophic or non-methano trophic. Many other methanotrophic genera such as Clonothrix, Methylohalobius, Methylomagnum, Methylomarinovum, Methyloparacoccus, Methyloprofundus, Methylosarcina and Methyloterricola were not found in any of the tested samples. Interestingly, the facultative methanotroph, Methylocella, was the most abundant methanotroph in samples from natural gas seep environments (with the exception of Gasport samples) and accounted for 25-64% of total methanotrophs in samples from thermogenic natural gas seeps of Ellicott Creek, Pipe Creek, Eternal Flame Falls and Eighteen Mile Creek (Fig. 2). Methylocella appeared to be the indicator methanotrophic genus in most natural gas seep sites (Additional file 1: Figure S2). Methylocella abundance and the proportion of ethane and propane showed a positive correlation (Spearman's rank correlation coefficient = 0.80, P value = 0.03). Metabolic versatility and the capability of utilising ethane and propane along with methane may confer an advantage over obligate methanotrophs, allowing Methylocella to colonise environments exposed to propane and ethane as well as methane. Methylocella silvestris BL2 exhibits higher growth rates and carbon conversion rates when grown under a mixture of propane and methane as compared to growth on either of these gases alone [46]. The presence of ethane and propane alongside methane at certain sites (Additional file 1: Table S1) and the abundance of Methylocella in those environments tested here supports our hypothesis that Methylocella may have a competitive advantage over obligate methanotrophs in natural gas seep sites.
Distribution and abundance of Methylocella in different environments
Since 16S rRNA gene taxonomy might not distinguish between methanotrophic and non-methanotrophic members of the Beijerinckiaceae family (Additional file 1: Table S3) [44], we developed a Methylocella-specific PCR assay targeting mmoX (encoding the sMMO active site subunit) to study the distribution of Methylocella at various sites. The use of probes targeting a bacterial functional gene rather than the 16S rRNA gene enables a much more sensitive evaluation of microbial diversity in complex environments as it limits the investigation to the functional group being studied [54]. PCR conditions for Methylocella-specific mmoX were optimised and validated with DNA from pure cultures of known methanotrophs (Additional file 1: Figure S3); including a newly isolated strain Methylocella silvestris TVC [55]. DNA extracted from various environmental samples was PCR-screened for Methylocella-specific mmoX genes (Table 1). Of the 31 samples originating from diverse locations where there are biogenic methane and/or thermogenic natural gas emissions, Methylocella-specific mmoX gene PCR products were detected in 25 samples (Table 1), from both types of environments, i.e. biogenic methane emitting and/or thermogenic natural gas emitting. Methylocella-specific mmoX was detected in samples from biogenic methane-emitting environments with slightly acidic to moderately acidic pH (e.g. Lakenheath Fen Nature Reserve and Moor House Nature Reserve) and was detected in all the samples from natural gas-emitting sites regardless of pH (Table 1). There are a few environments, e.g. Movile Cave, which have previously been reported to be negative for Methylocella [56], but we now detect Methylocella-specific mmoX from wall scrapings in this environment (Table 1), in agreement with a recent metagenomics study [57]. This suggests that our newly designed PCR assay for mmoX showed better sensitivity and specificity for Methylocella as compared to the previously reported assay [56]. Another PCR assay to detect mmoX of Methylocella was described earlier, but the authors were unable to show any Methylocella-specific mmoX amplified from environmental samples [58]. Specificities of the new primers and an optimised protocol to detect Methylocella-specific mmoX genes in DNA from environmental samples reported here were verified by constructing mmoX-amplicon clone libraries and Illumina amplicon sequencing.
Biogenic methane emitting environments
Thermogenic natural gas emitting environments Thermogenic with Biogenic potential Fig. 1 Relative abundance (%) of dominant bacterial classes in different environments as revealed by 16S rRNA gene sequencing. Amplicon sequencing was performed on DNA samples from environments exposed to biogenic methane and/or natural gas emissions Previously, only a few cultivation-dependent studies [39][40][41][59][60][61] and cultivation-independent studies (for example [56,[62][63][64][65][66][67]) have detected Methylocella in a relatively small number of environments. Methylocella had been reported in many studies to be abundant in acidic soils [68], and the known Methylocella species had been isolated only from acidic soil environments including peat bogs, forest and tundra soils [39][40][41]. Their abundance in acidic environments may be due to the ability of Methylocella to use readily available acetate [42], a major intermediate of carbon turnover in these soils [42,69]. Rahman et al. [56] reported for the first time that Methylocella are not limited to acidic environments as they detected Methylocella-specific mmoX in the alkaline environments of Lonar Lake (pH 10). Here, we also confirmed that the distribution of Methylocella is not limited to acidic environments as mmoX of Methylocella was detected in all environmental samples from thermogenic natural gas-emitting sites of acidic and basic pH (Table 1), possibly because of the metabolic flexibility and ability of Methylocella to utilise methane and propane in the environments where these gases co-occur. Our results show that Methylocella thrive best in environments with thermogenic natural gas emissions under various pH conditions (Table 1, Fig. 2).
Our results show that Methylocella not only dominated natural gas seep sites but they are also abundant in other environments (Fig. 2) confirming previous observations [56]
Fig. 2
Relative abundance (%) of methanotrophic bacteria in environmental samples as revealed by 16S rRNA gene sequencing. Amplicon sequencing was performed on DNA samples from environments exposed to biogenic methane and/or natural gas emissions. The proportion (%) of the combined methanotrophic population in each environment is shown above each bar, based on the abundance of 16S rRNA gene sequences of known methanotrophs (data filtered from Additional file 2: Table S2) [45]. The abundance of Methylocella in the different environments tested in this study reveals that facultative methanotrophs may have been overlooked in many cultivation-independent studies that targeted only pmoA. The use of both pmoA and mmoX, as genetic markers for ecological studies, is therefore important to avoid underestimating the diversity and abundance of methanotrophs in the environment. More methanotrophs that only contain sMMO and lack pMMO are being discovered [70,71]. Therefore, there is a need to re-examine functional gene primers targeting mmoX to detect all methanotrophs containing only sMMO. In addition to the 16S rRNA amplicon sequencing data, Methylocella abundance was also estimated by a newly developed qPCR assay targeting the Methylocella-specific mmoX. The abundance of Methylocella detected in selected environmental samples varied from 4.59 (± 0.19) × 10 6 (Moor House Nature Reserve, UK) to 2.55 (± 0.06) × 10 8 cells g −1 sample (Pipe Creek main seep, New York, USA) (Fig. 3). The abundance of Methylocella-specific mmoX was an order of magnitude higher in Pipe Creek main seep samples compared to other tested samples (Additional file 1: Figure S4). In contrast, the abundance of pmoA-containing methanotrophs in these environmental samples varied from 3.68 (± 0.12) × 10 7 (Movile Cave Microbial Mat) to 1.61 (± 0.30) × 10 8 pmoA copies g −1 sample (Lakenheath Fen Nature Reserve) (Fig. 3). Remarkably, in the samples from the Pipe Creek main seep, the Methylocella population alone constituted 5-12% of the total bacteria or 60-85% of the total methanotroph population (as estimated by 16S rRNA gene amplicon sequencing and Methylocella-specific mmoX qPCR respectively) (Figs. 2 and 3). In comparison to peat soils previously described as favourable habitats for Methylocella [56,68], the Methylocella population was an order of magnitude higher in the natural gas seep site of Pipe Creek.
Phylogenetic analysis of mmoX from Methylocella
Sequence analysis of the clone libraries generated from the Methylocella-specific mmoX PCR products obtained with different environmental DNA samples showed that most sequences are similar to known Methylocella-specific mmoX sequences in the NCBI database. These similarities ranged from 80 to 100% suggesting the possibility of novel diversity in Methylocella-specific mmoX sequences. Detailed analyses of the composition and diversity of Methylocella using Illumina Mi-Seq sequencing of Methylocella-specific mmoX PCR amplicons from environments exposed to thermogenic natural gas seeps and/or biogenic methane emissions were also performed. Methylocella-specific mmoX amplicon sequencing yielded 849,221 quality-filtered sequences in total for 15 samples averaging 56,615 per sample. Following sequence analysis using SwarmV2 [72], 34 OTUs with a relative abundance of higher than 1% were recovered from all samples (Additional file 3: Table S4). Phylogenetic analysis based on the DNA nucleotide sequences of the library clones and OTUs recovered from amplicon sequencing show that mmoX sequences clustered in several distinct clades (Fig. 4). OTUs Table S4). Interestingly, a few Methylocella-specific mmoX clones and OTUs originating from Lakenheath Fen Nature Reserve, Ellicott Creek and Eighteen Mile Creek did not cluster with other known mmoX sequences (cluster IV and V in Fig. 4). BLAST analyses of the clones (e.g. clone AM1-6 Ellicott Creek 1, AM2-2 Ellicott Creek 2, AM2-3 Ellicott Creek 2) from this cluster further revealed their best hit to the mmoX from Methylocella silvestris BL2 but with only 81% nucleotide identity, suggesting that these environments harbour novel strains, possibly related to Methylocella. In some environments where Methylocella was not abundant, we also detected some sequences (OTUs 6, 7, 8, 25, 34 and 80) more closely related to mmoX from other methanotrophs. These false positives made up approximately 10% (less than 5% in clone libraries) of the total sequences reads (Fig. 4). However, this non-Methylocella mmoX OTU appeared to be a dominant taxon (72%) based on mmoX amplicon sequencing in Eighteen Mile Creek sample (Fig. 4, Additional file 3: Table S4). Therefore, a clone library or amplicon sequencing analysis should be performed to validate the results of Methylocella-specific mmoX PCR or qPCR. Phylogenetic clustering of the sequences from clone libraries and amplicon sequencing from the same samples was remarkably congruent (Fig. 4, Additional file 3: Table S4). Comparison of the different environments revealed that Pipe Creek natural gas seep was the most diverse in terms of Methylocella-specific mmoX (Additional file 3: Table S4). Although it was not possible to link Methylocella-specific mmoX diversity with either biogenic methane-emitting environments or thermogenic natural gas-emitting environments, this phylogenetic analysis suggests the possibility of novel diversity in Methylocella-specific mmoX sequences and has suggested several target sites for future isolation of new Methylocella strains.
Conclusions
New biomolecular tools designed in this study have expanded our knowledge of the environmental distribution of the facultative methanotroph Methylocella. Methylocella-like facultative methanotrophs are particularly abundant at natural gas seeps and may therefore play a significant role in biogeochemical cycling of these gaseous alkanes. This study is timely since the release of natural gas into the environment globally will increase considerably with the exploitation of unconventional sources of oil and gas. A detailed mechanistic understanding of how Methylocella-like facultative methanotrophs mitigate these fugitive gases can now be undertaken using the tools and knowledge obtained in this study. In situ estimates of the activity of Methylocella oxidising methane and other alkanes simultaneously at natural gas seeps are now required to
Methylocella
Other bacteria pmoA containing methanotrophs Methylocella Fig. 3 Abundance of Methylocella in relation to total bacteria (a) and pmoA-containing methanotrophs (b). Bacterial populations were enumerated by qPCR of 16S rRNA (for total bacteria), Methylocellaspecific mmoX (for Methylocella) and pmoA (for pmoA-containing methanotrophs) genes on environmental DNA samples. Methylocella cell numbers equate to Methylocella-specific mmoX gene copies, whereas bacteria and pmoA-containing methanotrophs were assumed to contain two 16S rRNA or pmoA gene copies per cell. Bacteria other than Methylocella were enumerated by subtracting Methylocella from total bacterial cell numbers. Error bars represent the propagated errors based on standard deviation of triplicate samples determine their impact on the cycling of these atmospheric trace gases.
Chemicals and reagents
All chemicals and reagents (purity > 99%) were obtained from Sigma-Aldrich unless otherwise stated. Buffers, culture media and solutions were prepared in ultra-pure water, and sterilisation was done by autoclaving (15 min, 121°C, 1 bar) or by filtration (0.2 μm).
Bacterial strains and growth conditions
Methylocella strains were grown in 20 ml diluted nitrate mineral salt (DNMS) medium, and other methanotrophs were grown using nitrate mineral salt (NMS) medium, in 120 ml serum vials, with methane (20% v/v in headspace) as the only source of C and energy, as described previously [73,74]. The growth of liquid cultures was monitored by measuring the optical density at 540 nm.
Sample collection and characterisation
To study the distribution of Methylocella-like methanotrophs, samples (soil or sediment and water) were taken from diverse environments with known emissions of biogenic methane and/or thermogenic natural gas (see Table 1 and Additional file 1: Table S1 for details). Several natural gas seeps have been reported in New York state, USA [12], and Romania [5,6]. Five locations in New York state known to emit thermogenic natural gas were sampled in June 2017 (Table 1 and Additional file 1: Table S1). Gas bubbles from the natural gas seeps for Fig. 4 Phylogenetic tree of Methylocella-specific mmoX clones and operational taxonomical units (OTUs) retrieved by amplicon sequencing from various environmental DNA samples. Methylocella-specific mmoX clones and OTUs are grouped either around Methylocella tundrae T4 (red circles), Methylocella silvestris BL2 (green circles), Methylocella palustris K (blue circles) or distantly (black circles) from any known Methylocella strains (solid symbols). Environments where a particular OTU is abundant are shown in brackets. Partial mmoX sequences of representative clones and OTUs (abundance higher than 1%) and mmoX sequences from characterised methanotrophic bacterial strains were aligned using Mega 7.0. The optimal tree with the sum of branch length = 2.98 is shown where the evolutionary history was inferred using the neighbour-joining method taking into account a total of 323 nucleotide positions in the final dataset. The percentage (greater than 50%) of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches. Scale bar represents 0.05 substitutions per site which the concentrations of methane, ethane and propane have not been reported were also sampled for subsequent assays using gas chromatography (Table 1). Seeps from Romania mainly releasing thermogenic methane [5,6] or potentially biogenic methane [50] were also sampled (Table 1 and Additional file 1: Table S1). The investigated features appear as mud volcanoes (Paclele Mari, Paclele Mici, Beciu) or dry seeps generating everlasting fires (Andreiasu). Samples were also taken from wetland environments known for biogenic methane emissions as a result of microbial methanogenic activity, e.g. Lakenheath Fen Nature Reserve (Norfolk, UK), Moor House Nature Reserve (Pennine Hills, UK), Church Farm soil (Bawburgh, Norfolk, UK) and Stiffkey and Warham salt marshes (Norfolk, UK). Samples from Movile Cave (Romania), an unusual habitat known to have both methanogenic and thermogenic natural gas emissions, were also obtained [53]. Two to five sub-samples were taken from each sub-site in sterile 50 ml plastic tubes, which were pooled together before DNA extraction in the lab. The pH of samples was measured in the lab using a pH meter (Jenway) using 1:5 (w/ w) soil water suspensions in the case of soil or sediment samples or directly in the case of water samples.
Measurement of gaseous hydrocarbons by gas chromatography
Alkane (C1-C3) concentrations in the gas samples at seep sites were quantified using a gas chromatograph (GC). From bubbles, 5 ml gas was taken into a syringe and injected into 30 ml pre-sealed serum vial. These vials were analysed in the lab using an Agilent 7820A GC equipped with a Porapak Q column (Supelco) coupled to a flame ionisation detector (FID) to measure methane, ethane and propane concentrations as previously described [46].
Extraction of DNA and PCR amplification of 16S rRNA and mmoX genes DNA was extracted from pure cultures of methanotrophic strains using standard methods [73]. DNA was extracted from soils, sediments or slurries using the FAST DNA spin kit for soil (MP Biomedicals), following the manufacturer's instructions. Qubit (Invitrogen), NanoDrop (ThermoFisher Scientific) and gel electrophoresis methods were used to check quantity and quality of DNA samples. All primers used in this study are listed in Table S5 (Additional file 1). Extracted DNA was used as the template for PCR to amplify 16S rRNA and mmoX genes. Initially, the absence of PCR inhibitors, such as humic acids, was confirmed by amplifying bacterial 16S rRNA genes from template DNA, extracted from all the samples, using universal primers 27F and 1492R [75].
Reactions were carried out in a 20-μl volume consisting of 10 μl PCRBIO Taq mix red (2×) (PCRBIO), 0.8 μl of each of forward and reverse primers (10 μM) and 0.8 μl of template DNAs (1 to 10 ng). The cycling conditions for PCR amplification of 16S rRNA genes were 95°C for 3 min, followed by 30 cycles of 94°C for 20 s, 55°C for 20 s and 72°C for 40 s, with a final extension at 72°C for 5 min.
A new semi-nested PCR protocol was optimised targeting Methylocella-specific mmoX using a newly designed forward primer (mmoXLF2) and a previously designed reverse primer (mmoXLR) (Additional file 1: Table S5). For the amplification of mmoX genes specifically from Methylocella by conventional PCR, a first round of PCR was adopted using primers mmoXLF and mmoXLR, while a second round of PCR was performed with primers mmoXLF2 and mmoXLR. PCR reactions were carried out in a 20-μl volume containing 10 μl PCRBIO Taq mix red (2×) (PCRBIO), 0.8 μl of each of forward and reverse primers (10 μM) and 0.8 μl of template DNA (5 to 20 ng) or 0.8 μl first round PCR product. PCR cycling conditions for both PCR assays consisted of a touchdown programme, i.e. 95°C for 3 min, followed by 10 cycles of 94°C for 20 s, 70 to 61°C (decreasing 1 each cycle) for 20 s, 72°C for 20 s and then 25 cycles of 94°C for 20 s, 60°C for 20 s and 72°C for 20 s, with a final extension at 72°C for 5 min. For assays targeting specifically mmoX from Methylocella, PCR conditions were optimised with DNA from pure cultures of Methylocella silvestris, Methylocella palustris, Methylocella tundrae and from Methylosinus trichosporium OB3b and Methylococcus capsulatus Bath as negative controls (Additional file 1: Figure S3). Specificity of the primers to detect mmoX of Methylocella in environmental DNA was verified by clone library analysis using the pGEMT easy (Promega) cloning kit according to the manufacturer's instructions (Table 1) before carrying out Illumina amplicon sequence analyses (described below). Ninety-three clones (from 17 representative samples) were sequenced and analysed. All sequences obtained were mmoX sequences (Table 1), of which only 5% were mmoX sequences related to methanotrophs other than Methylocella, while all other sequences obtained appeared to be mmoX sequences related to Methylocella, with 80-100% nucleotide identity to mmoX from Methylocella. Moreover, no false-positive mmoX sequences were detected in clone libraries from environments such as Moor House Nature Reserve and Andreiasu Everlasting Fire, where other mmoX-containing methanotrophs (Methylocystis, Methylococcus) were abundant (Fig. 2).
Quantitative real-time PCR
Quantification of Methylocella and other methanotrophs was estimated by qPCR assays targeting Methylocella-specific mmoX (using mmoXLF2 and mmoXLR primer pair yielding an amplicon size of 389 bp) and pmoA (using A189F and Mb661R primer pair yielding an amplicon size of 472 bp) (see primer sequences in Additional file 1: Table S5). Quantification of 16S rRNA genes was also performed by qPCR using 519F and 907R primers (yielding an amplicon size of 388 bp). All qPCR assays were performed using StepOne Plus real-time PCR system (Applied Biosystems). Reactions were carried out in a 96-well qPCR plate (Applied Biosystems), in a total reaction volume of 20 μl, containing 10 μl of 2× SensiFAST SYBR Hi-ROX reagent (Bioline), 0.8 μl of each of forward and reverse primers (10 μM) and 0.8 μl of template DNAs or standards. Conditions for Methylocella-specific mmoX qPCR reactions consisted of an initial denaturation step at 95°C for 3 min, followed by 40 cycles of 95°C for 20 s, 65°C for 30 s and 72°C for 30 s. Specificity of amplification was determined from dissociation curves obtained by increasing 1°C per 30 s from 65 to 90°C and after gel electrophoresis and clone library construction from qPCR products (data not shown). Conditions for 16S rRNA gene and pmoA qPCR reactions consisted of an initial denaturation step at 95°C for 3 min, followed by 40 cycles of 95°C for 20 s, 55°C for 30 s and 72°C for 30 s. The gene copy numbers of Methylocella-specific mmoX and methanotrophic pmoA genes per microgram of template DNA were determined using calibration curves obtained from qPCR of tenfold dilution series of DNA standards (Additional file 1: Figure S5). The detection limit of the qPCR assay was ten copies of mmoX of Methylocella per 20 μl PCR reaction (Additional file 1: Figure S5). The qPCR assay was validated by spiking Warham salt marsh soil (Norfolk, UK) with known numbers of Methylocella silvestris BL2 and Methylocella palustris K cells (ranging from 10 3 to 10 6 cells g −1 soil) and by detecting the mmoX copies from the spiked soil (Additional file 1: Figure S6). Assuming two copies of 16S rRNA gene per cell, a single copy of mmoX gene per Methylocella cell [45] and two copies of pmoA per cell [30] for other methanotrophs, the abundance of methanotrophs in different samples was estimated. Controls to check for any inhibition of the qPCR assay were also performed by carrying out a qPCR assay targeting the mmoX of Methylocella, using tenfold serial dilutions of environmental DNA samples and also by doing another Methylocella-specific mmoX qPCR assay where templates were environmental DNA samples spiked with known amounts of Methylocella silvestris BL2 genomic DNA. Both inhibition control experiments did not show any inhibition of amplification during PCR reactions (data not shown).
Illumina Mi-Seq sequencing of PCR amplicons
Illumina Mi-Seq sequencing of PCR amplicons obtained from environmental DNA samples and control samples with genomic DNA of Methylocella silvestris BL2 was performed for both 16S rRNA genes and Methylocella-specific mmoX genes. For 16S rRNA genes, universal primers 341F and 785R primers [76] targeting the V3 and V4 regions were used. PCR reactions were carried out in 25 μl containing 12.5 μl 2× PCRBIO Ultra Polymerase (PCR BIO), 1 μl of each of forward and reverse primers (10 μM) and 1 μl of template DNA. The cycling conditions were 95°C for 3 min, followed by 25 cycles of 94°C for 20 s, 55°C for 20 s and 72°C for 30 s, with a final extension at 72°C for 5 min. Duplicate PCR reactions for each sample were pooled before purifying using a NucleoSpin Gel and PCR Clean-up Kit (Macherey-Nagel). Gel electrophoresis and a NanoDrop machine were used to assess the quantity and quality of the purified PCR products, and concentrations of all samples were adjusted to 15-20 ng per microliter. Similarly, samples were prepared for Methylocella-specific mmoX amplicon sequencing using the primers and PCR assay described above for the detection of Methylocella in the environment. Purified PCR products were used to prepare DNA libraries following the Illumina TruSeq DNA library protocol and sequenced (2 × 300 bp paired-end reads) at MR DNA (Shallowater, TX, USA) using the Illumina MiSeq platform.
16S rRNA sequence data were processed using MR DNA proprietary analysis pipeline (www.mrdnalab.com). Sequences were depleted of barcodes and primers then short sequences < 200 bp were removed, sequences with ambiguous base calls removed, and sequences with homopolymer runs exceeding 6 bp removed. Sequences were then denoised, and 16S rRNA gene OTUs were defined with clustering at 3% divergence (97% similarity) followed by removal of singleton sequences and chimeras [77][78][79][80][81][82]. Final OTUs were taxonomically classified using BLASTn against a curated database derived from GreenGenes [83], RDPII (http://rdp.cme.msu.edu) and NCBI (www.ncbi.nlm.nih.gov), and compiled into each taxonomic level. Abundance data of 16S rRNA gene OTUs related to Methylocella retrieved from the environments with known concentrations of ethane and propane (Additional file 1: Table S1) were used to calculate Spearman's correlation coefficient at the statistical significance level of 0.05. | 7,917 | 2018-06-28T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Single Core Hardware Module to Implement Partial Encryption of Compressed Image
: Problem statement: Real-time secure image and video communication is challenging due to the processing time and computational requirement for encryption and decryption. In order to cope with these concerns, innovative image compression and encryption techniques are required. Approach: In this research, we have introduced partial encryption technique on compressed images and implemented the algorithm on Altera FLEX10K FPGA device that allows for efficient hardware implementation. The compression algorithm decomposes images into several different parts. We have used a secured encryption algorithm to encrypt only the crucial parts, which are considerably smaller than the original image, which result in significant reduction in processing time and computational requirement for encryption and decryption. The breadth-first traversal linear lossless quadtree decomposition method is used for the partial compression and RSA is used for the encryption. Results: Functional simulations were commenced to verify the functionality of the individual modules and the system on four different images. We have validated the advantage of the proposed approach through comparison, verification and analysis. The design has utilized 2928 units of LC with a system frequency of 13.42MHz. Conclusion: In this research, the FPGA prototyping of a partial encryption of compressed images using lossless quadtree compression and RSA encryption has been successfully implemented with minimum logic cells. It is found that the compression process is faster than the decompression process in linear quadtree approach. Moreover, the RSA simulations show that the encryption process is faster than the decryption process for all four images tested.
INTRODUCTION
The rapid growth of image and video communication nowadays is powered by ever-faster systems demanding greater speed and security. Realtime secure image and video communication is challenging due to the processing time and computational requirement for encryption and decryption. In order to cope with these concerns, innovative image compression and encryption techniques are required.
Although a vast number of compression and encryption algorithms exist, they have been traditionally developed independently of each other. A partial encryption scheme for images that takes advantage of the image compression algorithm has been proposed by Liu et al. (2011); Cheng and Li (1996; and Cheng (1998). The scheme makes use of a compression algorithm that decomposes an image into several different parts. A secure encryption algorithm is then used to encrypt only the crucial parts, which are considerably smaller than the original image. This will result in significant reduction in processing time and computational requirement for encryption and decryption.
Other researchers have also proposed partial encryption, or combined compression and encryption methods (Liu et al., 2011;Ahmed, 2010;Akter et al., 2008a;2008b;Reaz et al., 2006a;2007a;Tho et al., 2004). Dang and Chau (2000) has proposed the joint image compression and encryption scheme using Discrete Wavelet Transform (DWT) and Data Encryption Standard (DES). Jakobsson et al. (1999) developed a "Scramble All, Encrypt Small" technique that encrypts only a small block of an arbitrarily long message. However, the former is less efficient than a partial encryption scheme and utilizes an encryption algorithm (DES) that is no longer secure. The latter requires an ideal hash function that is hard to realize and may not be suitable for images as it was designed for data encryption. In another work, Prasad and Kurupati (2010) proposed a combination of Arnold scrambling and DWT for secured image compression. But Arnold scrambling alone is not sufficient enough to provide significant security with the implementation of RSA (Wei et al., 2009).
Traditionally, image compression and encryption algorithms have been restricted to the software realm and developed separately. Although the advantages of software are ease of update, flexibility and portability, hardware implementation is faster and more physically secure, especially when secret key storage security are concerned.
The Field-Programmable Gate Arrays (FPGA) offers a potential alternative to speed up the hardware realization (Marufuzzaman et al., 2010;Reaz et al., 2007b). From the perspective of computer-aided design, FPGA comes with the merits of lower cost, higher density and shorter design cycle (Choong et al., 2005). It comprises a wide variety of building blocks. Each block consists of programmable look-up table and storage registers, where interconnections among these blocks are programmed through the hardware description language (Reaz et al., 2004a;Reaz et al., 2003). This programmability and simplicity of FPGA made it favorable for prototyping digital system. FPGA allows the users to easily and inexpensively realize their own logic networks in hardware. FPGA also allows modifying the algorithm easily and the design time frame for the hardware becomes shorter by using FPGA (Choong et al., 2006;Ibrahimy et al., 2006).
This study aims to investigate the hardware feasibility and performance of a novel partial encryption scheme for compressed images using FPGA by means of using a standard hardware description language VHDL. The use of VHDL for modeling is especially appealing since it provides a formal description of the system and allows the use of specific description styles to cover the different abstraction levels (architectural, register transfer and logic level) employed in the design (Pang et al., 2006;Reaz et al., 2006b). In the computation of method, the problem is first divided into small pieces; each can be seen as a submodule in VHDL. Following the software verification of each submodule, the synthesis is then activated. It performs the translations of hardware description language code into an equivalent netlist of digital cells. The synthesis helps integrate the design work and provides a higher feasibility to explore a far wider range of architectural alternative 2004b).
The FPGA implementation combines compression and a secure encryption algorithm that encrypts only crucial parts of the compressed image. The algorithms chosen for implementation are the lossless quadtree compression and the RSA algorithm. The hardware implementation was done using Altera FLEX10KE device.
MATERIALS AND METHODS
The partial encryption scheme depends on a compression algorithm that decomposes the input image into a number of different logical parts. The output consists of parts that provide significant amount of information about the original image, referred to as the important parts. The remaining parts have little meaning without the important parts, hence known as the unimportant parts. In this partial encryption approach, only the important part needs to be encrypted by a secure encryption algorithm. When the important part is considerably smaller than the total output of the compression, the encryption and decryption time can be reduced significantly.
Quadtree compression: The quadtree decomposition method converts an image into a quadtree structure with intensity values attached to the leaf nodes of the tree. The quadtree structure reveals the outline of objects in the original image (Cheng and Li, 2000). Since the quadtree indicates the location and size of each homogeneous block in the image while the intensity values do not reveal much information, partial encryption is possible by encrypting only the quadtree structure. Here, the quadtree structure is the important part whereas the intensity values form the unimportant part. In the case of lossless compression on a b-bit image, the total size of the leaf values is b(3k +1) bits, where k is the number of internal nodes, which is equivalent to multiplying the size of each leaf value with the number of leaf nodes in the quadtree. An approximate upper bound on the relative quadtree size, which is the ratio of the size of the quadtree and the total size of the compressed image, is given in Eq. 1: 3b 4 3b 4 k Where size of quadtree = number of nodes = 4k +1 size of compressed image = size of quadtree + size of leaf values = 4k +1+ b(3k +1).
For 8-bit images, b = 8, the size of the quadtree relative to the lossless quadtree compression output is at most 14.3%. The approximation is valid for large value of k, which is typically at least 1000 for 256×256 images and greater for larger images. For lossy compression, this calculation is not applicable because variable number of bits is used to represent leaf values. Results collected from experiments performed by Cheng (1998) on test images show that for typical images, the relative quadtree size is between 13 and 27%. Therefore, only 13-27% of the output of lossy quadtree algorithm is encrypted for typical images.
The lossless quadtree compression algorithm with Leaf ordering II has been used in this research, as it is computationally simpler and secure.
Linear lossless quadtree: Representing quadtree in a tree structure requires the use of pointers. However, the amount of space required for pointers from a node to its children is not trivial. Samet (1985) suggested that each node in a quadtree is stored as a record containing six fields. The first five fields contain pointers to the node's parent and its four children labeled as NW, NE, SW and SE; whereas the sixth field describes the intensity value (color) of the image block that the node represents. The pointers would occupy nearly 90% of the memory space required to store the quadtree (Dang and Chau, 2000). As a result, several pointerless quadtree representations have been proposed by researchers such as Lin (1996) and Gargantini (1982).
This research is based on the breadth-first traversal of linear quadtree proposed by Chan and Chang (2001) and Chang et al. (2008). It consists of two lists, i.e., a tree list and a color list. The tree list stores the quadtree structure, where '0' denotes a leaf node and '1' denotes an internal node. The color list simply stores the pixel values of the image in a sequence defined by the tree structure.
RSA Encryption:
Since the encrypted part of the proposed partial encryption scheme is preferably small, public key algorithms has been applied directly to it.
In RSA a plaintext block M is encrypted to a cipher-text block C by: e C M mod n = And the plaintext block is recovered by: RSA encryption and decryption are mutual inverses and commutative, due to symmetry in modular arithmetic. Also, (2-3) show that both encryption and decryption are based on the same operation, which is modular exponentiation. Therefore, hardware implementation of RSA allows the encryption and decryption to share the same architecture, which helps reduce the hardware size.
VHDL modeling: The VHDL model for the proposed work consists of four sub-modules. The overall implementation is known as the PARTIAL_ENCRYPT chip and it consists of the functional sub-modules Compression, Encryption, Decryption and Decompression.
Linear quadtree compression/decompression: Linear quadtree compression and decompression are implemented in two separate blocks, QT_ENCODER and QT_DECODER respectively. The combination of these two functional blocks is named QT_CODEC. The linear quadtree codec connects both QT_ENCODER and QT_DECODER in parallel and to the memory block RAM256X8. There are four input control signals, i.e., CLK, RESET, GO and E_D. The architectures of QT_CODEC, QT_ENCODER and QT_DECODER are implemented using Moore state machines with asynchronous reset. The reset signal (RESET) is used to set the state machine to its initial idle state, while a high GO signal switches it from idle state to the next state. A low E_D signal activates the QT_ENCODER while a high E_D activates the QT_DECODER. The READY signal is high when the compression operation is completed.
For compression, the input image is scanned in an order, where each quadrant is scanned in the NW, NE, SW and SE directions. The input image is stored in the RAM from addresses 00 to 3F (hex) in raster scan order, i.e., from left to right and from top to bottom. For a pixel in an 8×8 image indexed by row I and column J where I, J = 0, 1, 2, …7, its corresponding address in the RAM is expressed by: Address = (8× I ) + J For 8×8 input images, the sequence of RAM addresses in the appropriate scan order is: Where, K5, K4, K3, K2, K1 and K0 are the individual bit (0 or 1) values of a 6-bit counter that counts from 0 to (111111) 2 .
The output of the compression is a tree list that describes the quadtree structure ('0' for leave node and '1' for internal node) and a color list that contains the intensity values of the quadtree. On the other hand, the linear quadtree decompression performed by the QT_DECODER block is just the reverse process of the compression. In linear quadtree compression, if a 2× 2 block of an image is homogeneous, it is reduced to one block containing the pixel value; otherwise it is reduced to an 'I' block. The intensity values in an 'I' block are stored in a list. This continues recursively until the 8×8 image is reduced to only one block. The tree list and color list are stored at RAM addresses beginning with 40(hex) and 80(hex) respectively.
RSA encryption implementation: RSA Module
consists of 3 sub-modules. They are RSA_LOAD, RSA_CORE and RSA_OUTPUT. RSA_CORE performs encryption and decryption, RSA_LOAD serially captures incoming message to be encrypted or decrypted and RSA_OUTPUT serially outputs the decrypted/encrypted message. A RAM with 2048 bits in size was design to provide the storage element for the RSA encryption modules.
Arithmetic Logic Unit accepts 32 bits data as input and produce 32 bits output. The input data is stored temporary in a larger register (34-bits). Arithmetic operations are performed on the temporary register. The working result is then moved to the output port when the operation is done. The design uses four large registers (34 bits) to hold the working results and 2 small registers (5 bits) to hold the loop variables (i, j). The extra 2 bits in the 4 registers are used in order to prevent overflowing during addition operations.
After consideration on the trade-off between security and speed, the size of parameters and signals of the RSA_CORE module for the VHDL model are chosen as follows: • M is the 32-bit plaintext for encryption, or the 32bit cipher-text for decryption • E and N_C are the 32-bit public key (e, n) used for encryption, or the 32-bit private key (d, n) used for decryption • CLK is the clock input signal • RST sets the state machine implemented in RSA_CORE architecture to the initial idle state • GO switches the state machine from idle state to the next state • C is the 32-bit cipher-text produced by encryption, or the 32-bit plaintext recovered by decryption • DONE is high when the encryption or decryption operation is completed, otherwise it is always low Top level design: The overall design incorporates the RSA_CORE module into the linear quadtree codec. The top level entity is named as PARTIAL_ENCRYPT where a low E_D signal activates the QT_ENCODER block to perform linear quadtree compression on the input image stored in the RAM256X8 block. When compression is completed, the RSA_CORE is activated to encrypt the tree list stored at RAM addresses 80 to 82 (hex). The encrypted tree list is then stored at addresses 88 to 8B. On the other hand, a high E_D signal starts the decryption operation of RSA_CORE on the encrypted tree list to recover the tree list. Then decompression is performed by the QT_DECODER to reconstruct the original image. Four test images are used as inputs to verify the correctness of the design using functional simulation. All of the test images are grayscale with dimensions 8×8. For clarity, each image is arranged in a 8×8 table, in which the cells correspond to the pixel intensity values (grayscale level or color). The size of each pixel is 8 bits and its value is expressed in 2 hexadecimal digits.
Theoretical results for test image 1 and its quadtree:
The output of linear lossless quadtree compression is a tree list that contains the quadtree nodes and a color list that contains the pixel values of the image. In the tree list, binary '0' denotes a leaf node and '1' denotes an internal node. The results of linear lossless quadtree compression are: Tree list = 1001110000012 = 9C1 Color list = 00 FF 00 FF 00 00 FF 00 00 FF FF 00 FF 00 FF FF FF 00 00 Size of image = 64×8 bites = 512 bits Size of tree list = 12 bits Size of color list = 152 bits Compression ratio = Size of image / (Size of tree list + Size of color list): Fig. 3.
DISCUSSION
During the formulation of theoretical results for test image 1, we have omitted the root node and bottommost leaf nodes in the tree list in order to achieve better compression ratio, as the decompression algorithm does not need them. Since decompression is simply the reverse process of compression, its results can be deduced from those of the compression.
Functional simulation of the linear quadtree codec (QT_CODEC) is performed on the four test images with 20ns simulation clock period (50 MHz). The time interval between high GO signal and high READY signal is divided by the simulation clock period to calculate the processing time for compression or decompression. Form the results of functional simulation for linear quadtree compression and decompression as shown in Table 1, it is observed that the processing time is longer with smaller compression ratio and decompression is faster than compression.
In the functional simulation for partial encryption, the time interval between high GO signal and high READY signal is divided by the simulation clock period to calculate the processing time for combined compression and partial encryption or partial decryption and decompression and the results are compared in Table 2. It is concluded that the partial decryption and decompression is much slower because the decryption time of the RSA_CORE module is twice longer than the encryption time. Throughout the synthesis results, there are a few points worth to be discussed. Firstly, from the synthesis results, the RSA Core module utilized around 20% of the chosen FLEX 10KE device. Nevertheless, the clock frequency report showed the critical frequency is only 34.7MHz. This has given the limitation of the frequency of the RSA Module, even though the serial to parallel and parallel to serial converters could achieve 133.9MHz and 89.6MHz respectively. For the RSA Core module, though the 34.7MHz is acceptable, it is not fast enough compare to today's FPGA technology. However, the critical frequency can possibly be increased further by optimizing the circuit through place and route the internal probes. The synthesis of the whole RSA encryption, which included the RAM implementation, has taken up 554 units of logic cell (LC). This is about 35% utilization of the chosen device. Lastly the top level design, which is the PARTIAL ENCRYPT entity, was synthesized. A total of 2928 units of LC were used and it is about 58% utilization of the device (Altera EPF10K100EQC208-1). The frequency achieved was 13.42 MHz.
CONCLUSION
In this research project, the FPGA prototyping of a partial encryption of compressed images algorithm that allows for efficient hardware implementation had been implemented. The lossless quadtree compression and RSA encryption algorithms are chosen for implementation due to their computational simplicity in hardware. It is found from the simulation results that in linear quadtree approach the compression process is faster than the decompression process. Moreover, the RSA simulations show that the encryption process is faster than the decryption process for all four images tested. | 4,463.2 | 2011-06-13T00:00:00.000 | [
"Computer Science"
] |
Electrochemical Performance of Photovoltaic Cells Using HDA Capped-SnS Nanocrystal from bis (N-1,4-Phenyl-N-Morpho-Dithiocarbamato) Sn(II) Complexes
Great consideration is placed on the choice of capping agents’ base on the proposed application, in order to cater to the particular surface, size, geometry, and functional group. Change in any of the above can influence the characteristics properties of the nanomaterials. The adoption of hexadecylamine (HDA) as a capping agent in single source precursor approach offers better quantum dots (QDs) sensitizer materials with good quantum efficiency photoluminescence and desirable particles size. Structural, morphological, and electrochemical instruments were used to evaluate the characterization and efficiency of the sensitizers. The cyclic voltammetry (CV) results display both reduction and oxidation peaks for both materials. XRD for SnS/HDA and SnS photosensitizers displays eleven peaks within the values of 27.02° to 66.05° for SnS/HDA and 26.03° to 66.04° for SnS in correlation to the orthorhombic structure. Current density–voltage (I–V) results for SnS/HDA exhibited a better performance compared to SnS sensitizers. Bode plot results indicate electrons lifetime (τ) for SnS/HDA photosensitizer have superiority to the SnS photosensitizer. The results connote that SnS/HDA exhibited a better performance compared to SnS sensitizers due to the presence of HDA capping agent.
Introduction
Quantum dots sensitized solar cells (QDSSCs) are emerging innovations for photovoltaic cells as a replacement for the ideal dye-sensitized solar cells (DSSCs). The maximum efficiency of 13.4% obtained from inorganic sensitizer [1] has led many scientists to the research on fabricating better photosensitizes. That can portray a good generation of multiple excitons [2,3], panchromatic characteristics [4], and photostability [5] that will deal with the shortfall of molecular dyes in the traditional DSSCs. The fabrication of quantum dots (QDs) sensitizer materials to enhance this kind of solar cells has gained ground due to their diversity [6,7], tunable band-gap, cost-friendliness, and easy fabrication. QDs size can be controlled to obtain optimum band gap, as well as incorporate other semiconductor QDs materials as co-sensitized to enhance their photospectral properties and increase the conversion efficiency of QDSSCs (as seen in Figure 1) [8][9][10]. Moreover, the challenges involving metal-oxide surface and the absorbers poor contact due to the inabilities of photoinduced charge separation to directly penetrate each other served as a major restriction [11]. Other limitation like electrons diffusion length and the electrode nanoporous oxide geometry [12], hinder the surface area optimum adsorption capacity of molecule dye. To increase the cell spectral properties by adopting various photosensitizers will lead to poor optical density [13]. This implies that the amount of solar radiation absorbed by the cells are linked directly to the nanoporous surface area of the electrodes. To solve this shortfall, nanoporous oxide can be separated from absorbers by injection of electrons to improve better overlap with the solar spectrum and higher optical density without destroying the electron absorption performance [14][15][16].
This can be achieved with synthetic process, which is divided into nucleation and growth. The depth knowledge about both steps has resulted in new nanosynthesis route. Giving a better uniform surface morphology, size and monodispersity materials by enabling optimum control on the synthesis process. Factors, such as solvent, reducing agent, and capping agent, are of very great importance in monodisperse nanoparticles synthesis. The use of capping agents in stabilization and colloidal synthesis of nanoparticles is known to control materials size, surface passivation and particle morphology. The adoption of energy-saving and less or non-toxic capping agents will promote green synthetic route of nanoparticle fabrication of large scale commercialization [17][18][19]. Great consideration is placed on the choice of capping agents' base on the proposed application, in other to cater for the particular surface, size, geometry and functional group. Change in any of the above mention can influence the characteristics properties of the materials. Capping agents, such as trioctylphophine oxide (TOPO), trioctylphophine (TOP), and hexadecylamine (HDA), etc., offer excellent stability with organic solvents for nanoparticles [20,21]. The injection of HDA capping agent could offer desirable particles size and better QDs sensitizer materials with good quantum efficiency. Therefore, enhancing the assembly patterns of the fabricated cells [22]. In the present study, the main objective is exploring the beneficial effects of both HDA capped and uncapped materials on surface treatments. Therefore, leading to improving electrochemical performance of quantum dots sensitizers' absorber in photovoltaic cells.
Material
All materials were purchased and used without modification. The complete test kits containing fluorine-doped tin oxide (FTO) as glass substrate of TiO2, platinum FTO, HI-30 electrolyte iodide, masks, gaskets, chenodeoxycholic acid (CDC) and hot seal were purchased from Solaronix Other limitation like electrons diffusion length and the electrode nanoporous oxide geometry [12], hinder the surface area optimum adsorption capacity of molecule dye. To increase the cell spectral properties by adopting various photosensitizers will lead to poor optical density [13]. This implies that the amount of solar radiation absorbed by the cells are linked directly to the nanoporous surface area of the electrodes. To solve this shortfall, nanoporous oxide can be separated from absorbers by injection of electrons to improve better overlap with the solar spectrum and higher optical density without destroying the electron absorption performance [14][15][16].
This can be achieved with synthetic process, which is divided into nucleation and growth. The depth knowledge about both steps has resulted in new nanosynthesis route. Giving a better uniform surface morphology, size and monodispersity materials by enabling optimum control on the synthesis process. Factors, such as solvent, reducing agent, and capping agent, are of very great importance in monodisperse nanoparticles synthesis. The use of capping agents in stabilization and colloidal synthesis of nanoparticles is known to control materials size, surface passivation and particle morphology. The adoption of energy-saving and less or non-toxic capping agents will promote green synthetic route of nanoparticle fabrication of large scale commercialization [17][18][19]. Great consideration is placed on the choice of capping agents' base on the proposed application, in other to cater for the particular surface, size, geometry and functional group. Change in any of the above mention can influence the characteristics properties of the materials. Capping agents, such as trioctylphophine oxide (TOPO), trioctylphophine (TOP), and hexadecylamine (HDA), etc., offer excellent stability with organic solvents for nanoparticles [20,21]. The injection of HDA capping agent could offer desirable particles size and better QDs sensitizer materials with good quantum efficiency. Therefore, enhancing the assembly patterns of the fabricated cells [22]. In the present study, the main objective is exploring the beneficial effects of both HDA capped and uncapped materials on surface treatments. Therefore, leading to improving electrochemical performance of quantum dots sensitizers' absorber in photovoltaic cells.
Synthesis of SnS Nanoparticles with HDA Capping Agent
Nanoparticles were fabricated according to the literature method [23] 0.20 g of bis (N-1,4-Phenyl-N-Morhpo-dithiocarbamato) tin(II) complex (as seen in Figure 2), was added to 4 mL oleic acid (OA) and were injected into hot HDA of 3 g at 360 • C for surface passivation and particle morphology. 20-30 • C initial temperature was attained for the mixture. The reaction was stabilized at 360 • C and the process lasted for 1 h. The process was allowed to drop to 70 • C signifying the completion of the process and about 50 mL methanol were used the remover of excess OA and HDA. Centrifugation was used to separate the flocculent precipitate and re-dispersed with toluene. Low air pressure was used to remove solvent giving rise to metals sulfides of SnS/HDA nanoparticles. Nanomaterials 2020, 10, x FOR PEER REVIEW 3 of 10 Company (Aubonne, Switzerland). Water, Oleic acid (OA), methanol, HDA, SnS/had, and SnS nanoparticles from bis (N-1,4-Phenyl-N-Morhpo-dithiocarbamato) Sn(II) complexes.
Synthesis of SnS Nanoparticles with HDA Capping Agent
Nanoparticles were fabricated according to the literature method [23] 0.20 g of bis (N-1,4-Phenyl-N-Morhpo-dithiocarbamato) tin(II) complex (as seen in Figure 2), was added to 4 mL oleic acid (OA) and were injected into hot HDA of 3 g at 360 °C for surface passivation and particle morphology. 20-30 °C initial temperature was attained for the mixture. The reaction was stabilized at 360 °C and the process lasted for 1 h. The process was allowed to drop to 70 °C signifying the completion of the process and about 50 mL methanol were used the remover of excess OA and HDA. Centrifugation was used to separate the flocculent precipitate and re-dispersed with toluene. Low air pressure was used to remove solvent giving rise to metals sulfides of SnS/HDA nanoparticles.
Synthesis of SnS Nanoparticles Without HDA Capping Agent
Synthesis of SnS nanocrystals was obtained through high-temperature thermal decomposition of bis (N-1,4-Phenyl-N-Morhpo-dithiocarbamato) tin(II) complex using Perkin Elmer TGA 4000 ThermoGravimetric Analyser (TGA) (San Jose, CA, USA). About 25 mg of the complex was loaded into an alumina pan and weight changes were recorded as a function of temperature for a 10 °C min −1 temperature gradient between 30-900 °C. A purge gas of flowing nitrogen at a rate of 20 mL min −1 was used. At temperatures between 360 and 900 °C, the complex end product was converted into residue, which was expected for the formation of SnS nanocrystals from the residue obtained from the TGA.
Fabrication and Assembling of Solar Cells
QDSSC were prepared with 2 × 2 cm 2 FTO-glass plates of platinum and TiO2 electrodes were purchase from Solaronix (Aubonne, Switzerland) with 6 × 6 mm p active areas of TiO2 screen coated. Dye loading for sensitization was done using 10 mL of warm water with MO-SnS/HDA and MO-SnS. Co-adsorbents were added (Co-adsorbent/dye) of chenodeoxycholic acid (CDC) were used. The mediating solution was the commercial HI-30 electrolyte solution (Solaronix), with content of iodide species at 0.05 M. The TiO2 thin film was soaked into a solution of photosensitizers for 24 h. The two substrates, one coated with TiO2 loaded with photosensitizers and the other with platinum, were held together using polyethylene and soldering iron. The syringe was used to inject the HI-30 electrolyte (iodide).
Synthesis of SnS Nanoparticles Without HDA Capping Agent
Synthesis of SnS nanocrystals was obtained through high-temperature thermal decomposition of bis (N-1,4-Phenyl-N-Morhpo-dithiocarbamato) tin(II) complex using Perkin Elmer TGA 4000 ThermoGravimetric Analyser (TGA) (San Jose, CA, USA). About 25 mg of the complex was loaded into an alumina pan and weight changes were recorded as a function of temperature for a 10 • C min −1 temperature gradient between 30-900 • C. A purge gas of flowing nitrogen at a rate of 20 mL min −1 was used. At temperatures between 360 and 900 • C, the complex end product was converted into residue, which was expected for the formation of SnS nanocrystals from the residue obtained from the TGA.
Fabrication and Assembling of Solar Cells
QDSSC were prepared with 2 × 2 cm 2 FTO-glass plates of platinum and TiO 2 electrodes were purchase from Solaronix (Aubonne, Switzerland) with 6 × 6 mm p active areas of TiO 2 screen coated. Dye loading for sensitization was done using 10 mL of warm water with MO-SnS/HDA and MO-SnS. Co-adsorbents were added (Co-adsorbent/dye) of chenodeoxycholic acid (CDC) were used. The mediating solution was the commercial HI-30 electrolyte solution (Solaronix), with content of iodide species at 0.05 M. The TiO 2 thin film was soaked into a solution of photosensitizers for 24 h. The two substrates, one coated with TiO 2 loaded with photosensitizers and the other with platinum, were held together using polyethylene and soldering iron. The syringe was used to inject the HI-30 electrolyte (iodide).
Where FF is fill factor, and V OC is an open circuit voltage; J SC is a short circuit current density.
The above equation was adopted where Pin is a light intensity of 100 mW cm −2 .
The final equation for overall conversion efficiency was derived from the following equation.
Physical Measurements
Electrochemical studies were carried out using Metrohm 85695 Autolab with Nova 1.10 software (Metrohm South Africa (Pty) Ltd., Sandton, South Africa). A platinum electrode was adopted as a counter electrode, with TiO 2 as the anode, while HI-30 iodode electrode was used as a reference electrode. Cyclic voltammetry (CV) was performed at scan rates from 0.05 to 0.35 V s −1 with an increment of 0.05 V s −1 . All the experiments were performed at room temperature. Electrochemical impedance spectroscopy (EIS) was carried out in the frequency range of 100 kHz to 100 mHz. Current density-voltage (I-V) parameters were collected through a Keithley 2401 source meter and a Thorax light power meter. Lumixo AM1.5 light simulator (RS Components (SA), Midrand, South Africa) was employed, and the lamp was fixed at 50 cm high to avoid illumination outside of the working area. To avoid cells degradation, temperature was kept below 60 • C and the light power density at 100 mW cm −2 (AM1.0). X-ray diffractometer (XRD) were employed to evaluate the structural pattern of the samples, and the diffraction structure was obtained between 10 and 90 • at the interval of 0.05 • . The surface roughness of the SnS/HDA and SnS FTO substrates were identified through the use of atomic force microscopy (AFM) (JPK NanoWizard II AFM, JPK Instruments, Berlin, Germany) at a scan rate of 0.8 Hz in contact mode. JEOL JEM 2100 High-Resolution Transmission Electron Microscope (JEOL Inc., Pleasanton, CA, USA) (HRTEM) operating at 200 KV with selected area electron diffraction (SAED) patterns was used.
Results and Discussion
SnS/HDA and SnS sensitizers were employed to investigate the optimized electrochemical performance of both materials. Figure 3 shows comparative CV curves of two the materials at fix applied amplitude of 50 mV s −1 . The measurement shows precise redox peaks at the voltage of 0.0 and 0.6 V, confirming the charge storage by redox reactions. The kinetic irreversibility of both displaced materials has asymmetry, i.e., reduction and oxidation of the CV curves [24,25]. SnS sensitizer exhibited a higher area than SnS/HDA, which is preferred for higher performance abilities.
SnS/HDA and SnS sensitizers were employed to investigate the optimized electrochemical performance of both materials. Figure 3 shows comparative CV curves of two the materials at fix applied amplitude of 50 mV s −1 . The measurement shows precise redox peaks at the voltage of 0.0 and 0.6 V, confirming the charge storage by redox reactions. The kinetic irreversibility of both displaced materials has asymmetry, i.e., reduction and oxidation of the CV curves [24,25]. SnS sensitizer exhibited a higher area than SnS/HDA, which is preferred for higher performance abilities. Based on the EIS results of SnS/HDA and SnS photosensitizer, shown in Figure 4, the charge transfer resistance (R ct ) of the SnS sensitizer film decreases compare to the SnS/HDA sensitizer with lower charge transfer resistivity. These will enhance fast electron transfer, lower charge recombination, and better conductivity compared to SnS/HDA photosensitizer. This connotes improved contact between the redox couple with SnS sensitizer and a better degree of electron growth [26,27]. By this, we concluded that SnS sensitizer offers preferable efficient electron transfer. Based on the EIS results of SnS/HDA and SnS photosensitizer, shown in Figure 4, the charge transfer resistance (Rct) of the SnS sensitizer film decreases compare to the SnS/HDA sensitizer with lower charge transfer resistivity. These will enhance fast electron transfer, lower charge recombination, and better conductivity compared to SnS/HDA photosensitizer. This connotes improved contact between the redox couple with SnS sensitizer and a better degree of electron growth [26,27]. By this, we concluded that SnS sensitizer offers preferable efficient electron transfer. The appropriate instrument to investigate structural phase unit cell dimension and lattice parameters of SnS/HDA and SnS photosensitizer films is X-ray diffraction (XRD) analysis. Figure 5 reveals the XRD configurations. Their peaks of 2θ range between 27-82° that correlated to orthorhombic SnS crystals (JCPDS 039-0354). Furthermore, both materials display no traces of other impurities as observed in the XRD patterns. The purity and quality of the crystalline are displayed by the strong and sharp diffraction peaks observed in both samples. The improvement in crystalline quality could be linked to nucleation control, which promotes the growth process of SnS [28]. In addition, there is no change in the preferential orientation along (201), (210), (111), (301), (311), (511), (610), and (512) for both samples, which confirms successful fabrication of SnS and SnS/HDA into the lattice of TiO2 [29]. The preferential orientation of SnS/HDA is linked to the nucleation control of the growth process due to excess HDA used during the synthesis [30]. The presence of lower intensities and minor phases are usually linked to defect chemistry, thermodynamics of solid, and thermal gradient. The appropriate instrument to investigate structural phase unit cell dimension and lattice parameters of SnS/HDA and SnS photosensitizer films is X-ray diffraction (XRD) analysis. Figure 5 reveals the XRD configurations. Their peaks of 2θ range between 27-82 • that correlated to orthorhombic SnS crystals (JCPDS 039-0354). Furthermore, both materials display no traces of other impurities as observed in the XRD patterns. The purity and quality of the crystalline are displayed by the strong and sharp diffraction peaks observed in both samples. The improvement in crystalline quality could be linked to nucleation control, which promotes the growth process of SnS [28]. In addition, there is no change in the preferential orientation along (201), (210), (111), (301), (311), (511), (610), and (512) for both samples, which confirms successful fabrication of SnS and SnS/HDA into the lattice of TiO 2 [29]. The preferential orientation of SnS/HDA is linked to the nucleation control of the growth process due to excess HDA used during the synthesis [30]. The presence of lower intensities and minor phases are usually linked to defect chemistry, thermodynamics of solid, and thermal gradient.
addition, there is no change in the preferential orientation along (201), (210), (111), (301), (311), (511), (610), and (512) for both samples, which confirms successful fabrication of SnS and SnS/HDA into the lattice of TiO2 [29]. The preferential orientation of SnS/HDA is linked to the nucleation control of the growth process due to excess HDA used during the synthesis [30]. The presence of lower intensities and minor phases are usually linked to defect chemistry, thermodynamics of solid, and thermal gradient. Table 1 and Figure 6 display I-V parameters of SnS/HDA and SnS photosensitizers quantum dot solar cells (QDSCs). TiO2 photoanode sensitized with the fabricated dye, and Pt counter electrode and redox couple HI-30 electrolyte was used. In order to evaluate the efficiency of the photosensitized materials, these materials were fabricated with the same test conditions. The SnS/HDA and SnS JSC is observed at 10.99 mA cm −2 and 1.802 mA cm −2 , producing conversion efficiency at 1.25% and 0.42%. Their photovoltaic characteristics of QDSSCs films based for VOC are 0.423 V and 0.375 V. The lower conversion efficiency of SnS compared to SnS/HDA could be linked to the low electrocatalytic activity, which reduce HI-30 iodide and low electrical conductivity. These hinder electron transport from recombining with holes because of redox couple [31]. Table 1 and Figure 6 display I-V parameters of SnS/HDA and SnS photosensitizers quantum dot solar cells (QDSCs). TiO 2 photoanode sensitized with the fabricated dye, and Pt counter electrode and redox couple HI-30 electrolyte was used. In order to evaluate the efficiency of the photosensitized materials, these materials were fabricated with the same test conditions. The SnS/HDA and SnS JSC is observed at 10.99 mA cm −2 and 1.802 mA cm −2 , producing conversion efficiency at 1.25% and 0.42%. Their photovoltaic characteristics of QDSSCs films based for V OC are 0.423 V and 0.375 V. The lower conversion efficiency of SnS compared to SnS/HDA could be linked to the low electrocatalytic activity, which reduce HI-30 iodide and low electrical conductivity. These hinder electron transport from recombining with holes because of redox couple [31]. The Bode plot, as seen in Figure 7, exhibits the phase angle shift and frequency response magnitude. According to Bode plot, the phase angle of SnS photosensitizer is higher compared to SnS/HDA photosensitizer. This depicts that SnS has a porous nature (confirmed by surface area analysis and AFM image) and lower charge transfer resistance. Furthermore, this enhances the injection and flow of electrolyte in the cell [32]. The Bode plot, as seen in Figure 7, exhibits the phase angle shift and frequency response magnitude. According to Bode plot, the phase angle of SnS photosensitizer is higher compared to SnS/HDA photosensitizer. This depicts that SnS has a porous nature (confirmed by surface area analysis and AFM image) and lower charge transfer resistance. Furthermore, this enhances the injection and flow of electrolyte in the cell [32].
The Bode plot, as seen in Figure 7, exhibits the phase angle shift and frequency response magnitude. According to Bode plot, the phase angle of SnS photosensitizer is higher compared to SnS/HDA photosensitizer. This depicts that SnS has a porous nature (confirmed by surface area analysis and AFM image) and lower charge transfer resistance. Furthermore, this enhances the injection and flow of electrolyte in the cell [32]. The topographical analysis was carried out using atomic force microscope (AFM). The evaluated particles size varied in the range of 0.11-1.18 um for SnS/HDA and 0.054-0.54 um for SnS films grown at 360 °C with size height of 16.8% and 8.4%, respectively, as seen in Figure 8. These images revealed little significant changes in the particles size of both samples fabricated at the same temperature. The ad-atoms move over the surface brings bonding togetherness with an enlarged form of sub-particles. This emanated to formation good compactness and larger particles size. When the film surface is smooth, it resulted to lower roughness due to high surface diffusion [33]. This can be attributed to better particle size and good power conversion efficiency in QDSSCs, which depends solely on the low roughness of the photoelectrodes. The topographical analysis was carried out using atomic force microscope (AFM). The evaluated particles size varied in the range of 0.11-1.18 um for SnS/HDA and 0.054-0.54 um for SnS films grown at 360 • C with size height of 16.8% and 8.4%, respectively, as seen in Figure 8. These images revealed little significant changes in the particles size of both samples fabricated at the same temperature. The ad-atoms move over the surface brings bonding togetherness with an enlarged form of sub-particles. This emanated to formation good compactness and larger particles size. When the film surface is smooth, it resulted to lower roughness due to high surface diffusion [33]. This can be attributed to better particle size and good power conversion efficiency in QDSSCs, which depends solely on the low roughness of the photoelectrodes. The measurements and morphology of the synthesized SnS/HDA and SnS were examined by HRTEM; Figure 9 reveals, that the products are spherical, clustered, and agglomerated, with crystallite sizes within the range 14.96-44.39 nm for SnS and 9.5-23.19 nm for SnS/HDA [34]. Their lattice fringes revealed polycrystalline nanoparticles comprising numerous crystal grains d-spacing of 3.549-3.623 nm, affirming their polycrystalline nature, which is in concurrence with the report from Reference [35]. SAED obtained reveals the spot pattern shaping of the diffraction rings, which indicates that both materials have high crystalline nature [36]. The measurements and morphology of the synthesized SnS/HDA and SnS were examined by HRTEM; Figure 9 reveals, that the products are spherical, clustered, and agglomerated, with crystallite sizes within the range 14.96-44.39 nm for SnS and 9.5-23.19 nm for SnS/HDA [34]. Their lattice fringes revealed polycrystalline nanoparticles comprising numerous crystal grains d-spacing of 3.549-3.623 nm, affirming their polycrystalline nature, which is in concurrence with the report from Reference [35]. Nanomaterials 2020, 10, 414 8 of 10 SAED obtained reveals the spot pattern shaping of the diffraction rings, which indicates that both materials have high crystalline nature [36].
The measurements and morphology of the synthesized SnS/HDA and SnS were examined by HRTEM; Figure 9 reveals, that the products are spherical, clustered, and agglomerated, with crystallite sizes within the range 14.96-44.39 nm for SnS and 9.5-23.19 nm for SnS/HDA [34]. Their lattice fringes revealed polycrystalline nanoparticles comprising numerous crystal grains d-spacing of 3.549-3.623 nm, affirming their polycrystalline nature, which is in concurrence with the report from Reference [35]. SAED obtained reveals the spot pattern shaping of the diffraction rings, which indicates that both materials have high crystalline nature [36].
Conclusions
In the CV curve of SnS/HDA and SnS sensitizers, the spectra displaced both reduction and oxidation peaks for both materials. The diffusion of SnS/HDA and SnS photosensitizer in HI-30 a b
Conclusions
In the CV curve of SnS/HDA and SnS sensitizers, the spectra displaced both reduction and oxidation peaks for both materials. The diffusion of SnS/HDA and SnS photosensitizer in HI-30 electrolyte connote Warburg's constant (W) as a result of the straight line. The SnS/HDA sensitizer displaced lower impedance compare to SnS sensitizer. XRD results for SnS/HDA and SnS photosensitizers displaced eleven peaks within the values of 27.02 • to 66.05 • for SnS/HDA and 26.03 • to 66.04 • for SnS in correlation to orthorhombic structure. The I-V efficiency obtained indicates that the SnS/HDA exhibited a better performance compared to SnS and Sn(II) sensitizers due to the presence of HDA capping agent. Bode plot results indicate that the electrons lifetime (τ) for SnS/HDA photosensitizer displaced superiority to the SnS photosensitizer, owning to their ability to enhanced electron lifetime and reduced electron recombination. The AFM results show the particle size distribution in SnS value at 357 nm with a smooth surface and good compactness on the substrate. However, the value for SnS/HDA is 122 nm displaced shape and size of non-symmetrical particles. The lattice fringes revealed polycrystalline nanoparticles comprising numerous crystal grains for both samples. | 5,905.2 | 2020-02-27T00:00:00.000 | [
"Materials Science"
] |
Candida biome of severe early childhood caries (S-ECC) and its cariogenic virulence traits
ABSTRACT The protected niche of deep-caries lesions is a distinctive ecosystem. We assessed the Candida biome and its cariogenic traits from dentin samples of 50 children with severe-early childhood caries (S-ECC). Asymptomatic, primary molars belonging to International Caries Detection and Assessment-ICDAS caries-code 5 and 6 were analyzed, and C. albicans (10-isolates), C. tropicalis (10), C. krusei (10), and C. glabrata (5) isolated from the lesions were then evaluated for their biofilm formation, acidogenicity, and the production of secreted hydrolases: hemolysins, phospholipase, proteinase and DNase. Candida were isolated from 14/43 ICDAS-5 lesions (32.5%) and 44/57 ICDAS-6 lesions (77.2%). Compared to, ICDAS-5, a significantly higher frequency of multi-species infestation was observed in ICDAS-6 lesions (p=0.001). All four candidal species (above) showed prolific biofilm growth, and an equal potency for tooth demineralization. A significant interspecies difference in the mean phospholipase, as well as proteinase activity was noted (p < 0.05), with C. albicans being the predominant hydrolase producer. Further, a positive correlation between phospholipase and proteinase activity of Candida-isolates was noted (r = 0.818, p < 0.001). Our data suggest that candidal mycobiota with their potent cariogenic traits may significantly contribute to the development and progression of S-ECC.
Introduction
Early childhood caries (ECC) is the most ubiquitous, plaque biofilm-mediated, aggressive form of dental caries affecting children the world over [1,2]. A hypervirulent variant of this intractable disease is called severe-early childhood caries (S-ECC) [3]. According to International Caries Detection and Assessment System (ICDAS), S-ECC could subcategorized as caries code-5, a distinct cavity with visible dentin, and caries code 6, an extensive caries lesion involving half/more than half of tooth. Code 5 and 6 are the most severe caries lesions as per the ICDAS classification system [4,5].
S-ECC is particularly rampant in the developing world due to the relatively wide and cheap availability and accessibility of sucrose substrates, and inadequate dental health care delivery systems [6]. Without appropriate intervention, S-ECC lesions may further progress, leading to extensive cavitation, reaching the pulp chamber, and compromising the longevity of the tooth, while simultaneously serving as a potent reservoir for systemically seeded infections [3,7,8].
The protected niche of deep cavitated lesions in S-ECC is a unique ecosystem. First, due to the relatively extreme depth, the carious lesion is poorly accessible to routine oral hygiene measures, as well as the salivary flushing mechanisms compounding the plaque biofilm accumulation. Second, the sugar pulses from sucrose-rich food frequently snacked by these children are retained over a prolonged period in such niches, providing a constant and a ready source of food supply for the resident microbiota. Both these conditions inevitably lead to a very low pH acidic locale, leading to the emergence, growth, and sustenance of a profuse biofilm, particularly rich in aciduric, and acidophilic microbiota [9].
It is generally recognized that the mutans-group streptococci are the prime movers of the caries process due to their acidogenic and aciduric nature [10]. However, several studies have now shown that the aciduric oral yeasts, mainly belonging to Candida species, frequently co-inhabit these lesions with mutansstreptococci, and significantly contribute to the caries process [11][12][13][14]. There is also a substantial body of data to indicate cross-kingdom synergistic, interactions between this fungus and cariogenic CONTACT Lakshman Perera Samaranayake<EMAIL_ADDRESS>M28-125, College of Dental Medicine, University of Sharjah, Sharjah 27272, UAE bacteria within such polymicrobial-biofilm habitats [15,16] making the eukaryote a candidate caries pathogen. Indeed, some have hypothesized that Candida species are secondary movers of the pathological process in deep carious lesions, primarily initiated by mutans-group streptococci [17]. In a recent, ultrastructural study Dige and Nyvad [18] have elegantly demonstrated the co-colonization of Candida species with streptococci in intact in vivo biofilms from carious lesions, and called for a detailed examination of diverse virulence attributes of the yeast species that modulate this ecosystem [18].
On closer examination, it is apparent that Candida species possess a slew of virulence traits that may contribute to dental decay [17,[19][20][21]. First, these fungi are richly endowed with metabolic machinery required for dietary carbohydrate metabolism and the production of short-chain carboxylic acids such as acetates, and lactates, and contribute to the generation of an acidic milieu, which in turn facilitates the demineralization of dentinal tissues. The seminal work of Samaranayake et al, in early eighties, clearly demonstrated the acidogenic potential as well as the survival potential of various Candida species under such extremely adverse, low pH conditions [22,23]. Second, the yeasts possess critical attributes essential for synergistic biofilm development with cariogenic flora [15,24], including their ability to adhere avidly to abiotic surfaces and develop profuse biofilms whilst serving as anchor organisms providing a skeletal framework for the biofilm [2]. Several studies have reported the ability of Candida albicans in particular to produce extracellular hydrolases such as hemolysins, phospholipases, acidic hydrolases and DNases [20,22,[25][26][27] which could contribute to the breakdown the organic structural components of the human dentin.
The oral mycobiome and the prevalence of it predominate constituent Candida, in childhood caries has been sparsely studied. In a recent preliminary study of 15 Australian children, Fechney et al. concluded that their oral mycobiome comprised at least 46 fungal species [14]. Further, they noted that the diversity of fungi was similar irrespective of the caries status of this small pediatric cohort, and caries influenced the abundance of specific fungi. However, as far as we are aware, no studies, to date have been performed to assess the prevalence of Candida species in S-ECC, and to evaluate their cariogenic traits that may contribute to the pathogenesis of the disease. Hence, the aim of this study was first, to characterize the prevalence of Candida species in S-ECC in a Middle East child cohort, with ICDAS caries code 5 and 6, and then to evaluate the biofilm formation, acidogenicity, and the production of four secreted hydrolases: hemolysin, phospholipases, proteinase, and DNase in a select group of 35 wild-type C. albicans (10 strains), C tropicalis [10], C. krusei [10] and C. glabrata [5] isolated from such lesions.
Study subjects
As per protocol approved by the Research Ethics Committee, University of Sharjah (REC-18-02-18-03), 50 children, aged-48-months to 72-months, attending a regular teaching clinic at the University Dental Hospital Sharjah, UAE, were invited to participate in the study. After obtaining informed consent from the parents of each child participant, a full dental examination was carried out for all healthy, cooperative participants. Children with more than five-decayed teeth and having at least two asymptomatic primary molars with occlusal or proximal carious lesions involved were selected by a calibrated examiner (KSF). The severity of cavitated lesions was determined by the examiner according to ICDAS classification; viz. code 5 being a distinct cavity with visible dentine involving less than half the tooth surface, and code 6, an extensive and distinct cavity with visible dentine affecting more than half of the surface.
The exclusion criteria were children on antibiotics over the last 4-weeks before sample collection, those wearing orthodontic appliance/s or with congenital tooth anomaly, or with any likelihood of pulp exposure during the caries excavation process. Further, dentine samples from endodontically treated teeth, or when gingival bleeding contaminated the cavity during the sample collection process were also excluded.
Caries diagnosis
Caries status was recorded using the WHO criterion of decayed, missing, and filled (dmft) tooth index. The severity of cavitated lesions was ascertained as either caries-code 5 or 6 according to ICDAS-caries criteria [4]. One trained pediatric dentist conducted the clinical examination and sample collection throughout the studies (KSF).
Sample collection
A total of 100 infected-dentine samples from 50 children (two samples each) were aseptically collected by a single trained collector (KSF) from both occlusal and proximal, symptom-free, caries active, deepdentin lesions belonging to ICDAS caries-code 5 and code 6.
Samples were collected using a sterile spoon excavator after cleaning and drying the cavities with a prophy brush, without using prophy paste. Each sample was split in two and one aliquot was placed in an Eppendorf centrifuge tube (1.5 ml) containing 300 µl of Phosphate buffered saline (PBS) for multiplex PCR, and the second aliquot in Brain Heart Infusion (BHI) Broth (Thermo Scientific Remel, USA), for culture, and immediately frozen at −20°C until further use.
In the laboratory, one aliquot in BHI broth was cultured aerobically on Sabouraud dextrose agar (SDA) at 37°C for 48 hr, and the resultant growth observed. All samples which yielded yeast growth were then sub-cultured on CHROMagar (HiCrome™ Candida Differential Agar, M1297A) for 24 hr. afterwards, pure cultures of different species were obtained by selecting colony forming units on the basis of their colonial appearance on CHROM agar. The different candidal species thus obtained from each sample was then sub cultured in Sabouraud dextrose broth for 24 hr. to evaluate the virulence attributes.
DNA isolation and multiplex PCR
The second aliquot in PBS, from yeast positive clinical samples was then subjected to multiplex PCR amplification method of Trost et al [28], with minor modifications The multiplex PCR was based on the amplification of two fragments from the ITS1 and ITS2 regions by the combination of two-yeast-specific and six-speciesspecific primers in a single PCR reaction [28] and our method permitted the identification of up to six clinically relevant yeasts of the Candida genus that were found in our clinical samples, namely C. albicans, C. glabrata, C. parapsilosis, C. tropicalis, C. krusei, and C. dubliniensis. (Table 1).
DNA extraction of the collected infected-dentine samples was performed using MasterPure™ Complete DNA and RNA Purification (Epicenter, USA), following the manufacturer's guidelines. The quality and the quantity of the extracted DNA were assessed using a Colibri Microvolume Spectrometer (Titertek-Berthold Detection Systems GmbH, Germany). DNA samples were considered pure if the A260/280 ratio were more than 1.8, and A260/230 values were in the range of (1-2.2).
PCR was performed under the following cycling conditions: 40 cycles of 15 secs at 94°C, then 30 secs at 55°C, and45 secs at 65°C, after a 10-minute initial period of DNA denaturation and enzyme activation at 94°C [29]. All PCR-reaction products were evaluated by electrophoresis in 2.0% (w/v) agarose gels run at 90 V for 60 mins. Identified poly-fungal samples were re-confirmed by quantitative PCR analysis using species-specific primers.
Candida isolates
We restricted our investigations of the virulence attributes to randomly chosen 35 isolates belonging to four predominant candidal species C. albicans (10 strains), C tropicalis [10], C. krusei [10], and C. glabrata [5]. The phenotypes of the isolates, as identified by characteristic growth on CHROMagar (HiCrome™ Candida Differential Agar, M1297A) was reconfirmed by PCR identification prior to the virulence assays.
Evaluation of biofilm formation
The method of Jin et al. [30], with modifications was used to develop candidal biofilms of the selected 35 yeast isolates belonging to four different candidal species, as follows. Flat-bottom 96-well microtiter plates (Corning, 3370 Polypropylene) were used for biofilm formation. Cell suspensions were further diluted to a final concentration of 10 3 cells/ml in the RPMI-1640 medium w/L-glutamine, 0.2% glucose, and 0.165 moles/l MOPS buffer w/o sodium bicarbonate (AT180, RPMI-1640, Himedia). RPMI 1640 contains 0.2%D-glucose, but for the biofilm assay, we supplemented with D-glucose up to 2%, as a final concentration.
The plates were then incubated at 37°C for 24-48 hrs. in a shaker incubator (Thermo Scientific 4430) at 90 rpm. After biofilm formation at 24 h and 48 h, the medium was carefully aspirated using multichannel pipette without disrupting the biofilms. The plates were washed thrice with sterile PBS (200 µl/ well) and were drained in an inverted position by blotting with a paper towel after the last wash, to remove any residual PBS.
The quantitation of biofilms was later performed by the XTT reduction assay. Before each test, a new XTT solution (Sigma-Aldrich) was prepared by reconstituting 4 mg of XTT in sterilized-filtered 10 ml PBS. This solution was added with menadione (Sigma-Aldrich) stock solution prepared in acetone. Using a multichannel pipette, 100 µl of XTT/menadione solution was added to each well containing pre-washed biofilm. The plates were then incubated for 2 hours at 37°C, after which 80 µl of the resultant colored supernatant from each well was then transferred to a new microtiter plate, and its absorbance was gauged at 490 nm using a spectrophotometer. Calcium-release assay for acidogenicity evaluation To evaluate the acidogenicity of the 35 clinical isolates of Candida species, in terms of degrading the mineralized components of the tooth structure, a calcium-release assay was performed, according to Nikawa et al. [31] and Szabo et al. [32], with some modifications. Acidogenicity was concomitantly evaluated by pH measurements of the incubating media. Briefly, dental root-discs obtained from two sound mesiodens and six sound premolars were sterilized using wet-heat under pressure and treated under UV radiation for 3 hours [33]. The teeth were sectioned and placed at the bottom of 12-culture multi-well plates (Corning® Costar® TC-Treated). The structural components of the sectioned teeth comprised mainly of dentine with a marginal layer of cementum.
The yeast suspensions belonging to four species were adjusted to an optical density (OD) of 1.0 at 530 nm (1x10 8 cells/ml), and 50 µl of each of the selected isolates were inoculated into each well containing a dental disc. 950 µl of SDB containing 50 mM glucose (adjusted to pH 7.0 using NaOH) was then added to each well followed by incubation for 48, 96, and 144 h at 37°C. At the above timepoints the pH level was assessed using a pH meter (Portable pH meter-H1991, Hanna, USA).
The release of calcium ion during degradation of mineralized tooth structure of the teeth was measured with calcium colorimetric assay kit (Abcam-Colorimetric, ab102505). The calcium ion concentration was determined by the chromogenic complex formed by calcium ion and o-cresolphthalein, which was proportional to the concentration of calcium ion. A total of 90 μL chromogenic reagent and 60 μL calcium assay buffer were added to each well of 96well plate containing 50 μL of standards, samples, and controls. After mixing, the reaction system was incubated at room temperature for 5-10 mins, protected against light, before absorbance measurement at OD575 nm. The mean Ca++ release was obtained from readings taken on three independent occasions.
Hemolysin assay
Hemolysin production of 35 Candida spp. was evaluated as described by Luo et al. [34] An inoculum size adjusted to 10 8 cells/ml was prepared for each isolate, and 10 µl of each yeast suspension were spot inoculated on blood agar and the plates incubated for 48 h at 37°C. The variable expression of hemolysins by Candida species was viewed with transmitted light and assessed semi-quantitatively by the presence of a distinct-translucent halo around the inoculum site, indicating positive hemolytic activity.
Phospholipase assay
Candida isolates were assayed for phospholipase activity on egg-yolk agar, according to the method described by Samaranayake et al. [35]. The egg-yolk medium comprising 13 g SDA, 0.11 g CaCl 2 , 11.7 g NaCl, and 10% egg-yolk emulsion was prepared. A10 µl of yeast cell suspensions, adjusted to 10 8 cells/ml were spot inoculated on an egg-yolk agar and left to dry at room temperature. A 5 µl of saline was overlaid on the plate, and after drying at room temperature, each culture was incubated at 37°C for 48 h. Measurement of the zone of phospholipase activity (Pz) was conducted according to the method explained by Price et al. (1982). The diameter of the precipitation zone around the colony was established as the ratio of the diameter of the colony to the diameter of the colony and the precipitation zone expressed in mm [36].
Proteinase assay
Proteinase enzyme activity of Candida spp. was performed in terms of bovine serum albumin (BSA) degradation as per the technique of Ruma-Haynes et al. with some modifications [37]. The BSA test medium consisted of 20 g of dextrose, 1 g K 2 HPO 4 , 0.5 g MgSO 4 , 0.2 g yeast extract, 15 g of agar, 2 g of bovine serum albumin. Briefly, an 18 h yeast cell suspension adjusted to 10 8 cells was prepared, and the 10 µl suspension was inoculated onto the BSA plate and incubated at 37°C for five days. The plates were flooded with 1.25% Amido black stain (MB165-Amido black 10B) in 90% methanol and 10% acetic acid and allowed to stand for 10 minutes for the staining. After de-staining using 15% acetic acid for 20 minutes, plates were washed twice with PBS and allowed to dry at room temperature. Proteinase activity (Pr z ) was determined as the ratio of the colony diameter to that of clear zone proteolysis expressed in mm.
DNase assay
For the DNase assay, in brief, an 18-hr yeast cell suspension adjusted to 10 8 cells were prepared, and the 10 µl suspension was inoculated onto DNase agar plates and incubated at 30°C for seven days as described by Sanchez and Colom [38]. The DNase results were expressed as either negative or positive depending on the absence or presence of a clear halo around the colony.
All assays were conducted on three separate occasions for each yeast-isolate tested.
Statistical analysis
Numerical data obtained were analyzed using t-tests. Chi-square, Fischer exact tests, and analysis of variance (ANOVA) to compare the results between Candida species. The degree of correlation between the severity of caries lesion and Candida spp. Pearson correlation analysis was used. All results were considered significant at p ≤ 0.05.
In terms of the species distribution of Candida species, C. krusei was the predominant species isolated from 25/58 (43.1%) samples, closely followed by C. albicans in 22/58 (38%) samples, Table 2. C. parapsilosis was isolated only once from an ICDAS-6 lesion.
Biofilms formed by four different Candida species were quantified at two different time points using XTT assay for measuring biofilm metabolic activity. No significant differences in inter-or intraspecies biofilm metabolic activity was observed either within or amongst the four Candida species, either at 24 or at 48 hr time points (Figure 2).
All four Candida species were uniformly instrumental in provoking a dramatic decline in pH of the SDB medium from pH 7 to below 4.0 over a 48 h period post-incubation, with no significant interspecies difference (p > 0.05; Figure 3(a-d)). After which, at 96 hr and 144 hr time points, a further marginal drop in pH to below 3.5 was noted in all Candida suspensions (p > 0.05; Figure 3(a-d)).
All four candidal species demonstrated an equal potency to demineralize the tooth disc substrate, as shown by the Ca++ release into the incubation medium (p > 0.05)a surrogate indicator of enamel/dentine demineralization (Figure 3(a-d)). As expected, the dissolution of mineralized tooth structure resulting in Ca++ release, was relatively low at 48 h at pH 4, but increased substantially when the pH dropped to (approx.) 3.5 over the remaining incubation period of upto144 h. Though there were no significant interspecies or intraspecies differences in Ca++ release at each time point, there was clearly a significant temporal increase in calcium release from the tooth discs by all four Candida spp. between different time points (144 h > 96 h> 48 h) of incubation (p < 0.05).
In terms of the hemolysin activity of the evaluated species, we noted that all 35 isolates of Candida belonging to the four different species were hemolysin producers. However, no significant inter or intraspecies differences in the hemolysin activity was observed (p > 0.05; Figure 4). However, we noted a significant interspecies difference in the mean phospholipase activity levels between C. albicans, C. krusei, C. tropicalis versus C. glabrata (p < 0.05; Figure 4). C. albicans demonstrated the greatest activity with seven of 10 isolates producing phospholipases (data not shown) while only one of five C. glabrata isolates was phospholipase positive, and the other two species had intermediate levels of activity ( Figure 4). No intra-species differences in phospholipase activity could be discerned (p > 0.05).
Similarly, C. albicans isolates exhibited highest levels of proteinase activity compared to the other tested species, with a significant mean difference in activity between C. albicans and C. glabrata isolates (p < 0.001; Figure 4).
When we correlated the phospholipase and proteinase activity of a total of 29 Candida isolates belonging to the different species, which produced these hydrolases, a highly significant positive correlation between the production of the two extracellular enzymes was noted (r = 0.818; p < 0.001; Figure 5).
Finally, except for a single clinical strain of C. albicans, which was DNase positive, none of the other 34 isolates belonging to four different candidal species showed extracellular DNase activity (data not shown).
In conclusion, as regards to the secreted hydrolases, C. albicans demonstrated significantly higher protease activity (p < 0.001) as well as phospholipase activity (p < 0.05) relative to the other three non-albicans Candida species, while the DNase and hemolysin activity of all four species appeared to be similar (p > 0.05).
Discussion
Etiopathogenesis of dental caries is intricate and complex. Bacterial and fungal polymicrobial Figure 1. Distribution of Candida species according to ICDAS caries lesion severity code 5 (distinct cavity with visible dentin) and caries code 6 (extensive caries lesion involving half/more than half of tooth).
communities in plaque biofilms are now recognized as the prime mover of the caries process [2,39]. A combination of specific-ligand receptor interactions mediating adhesion to biotic/abiotic surfaces, and microbiota-matrix communications via quorum sensing mechanisms, contribute to such biofilm initiation. If these non-specific interactions are arrested during the initial stages of adherence to dental tissues, then biofilm formation can be reversed [17,40]. However, the cariogenic process, once established, particularly in deep cavitated dentinal lesions of S-ECC, is relentless owing to the multiplicity of contributory factors. These include, i) a stagnant niche with accumulated organic debris, ii) a diversely rich bacterial/fungal microbiome, iii) virtual absence of salivary flow and consequent lack of flushing action, and the innate salivary antimicrobial mechanisms, and finally, iv) the predominantly acidic pH milieu in deep carious trenches.
Here, we present novel insight into the fungal microbiota in S-ECC and their virulence attributes.
To the best of our understanding, this is the first comprehensive report highlighting fungal existence as mono, dual, and mixed-C. albicans and nonalbicans Candida species, in deep dentinal lesions, as well as their major pathogenic attributes that contribute to enamel demineralization and dentin collagenolysis. Our clinical data, from the largest cohort of S-ECC examined to date unequivocally demonstrate an unexpectedly high prevalence of a spectrum of yeasts in such extensive cavitated lesions ( Figure 1). To our surprise we isolated a multitude of common human pathogenic Candida species, either singly or in combination, in these caries-active deep-dentin cavities.
Given the abundance of the candidal flora in these lesions, which in itself was an interesting and a novel finding not reported elsewhere, we attempted to evaluate the difference, if any, between the yeast flora in ICDAS-5 and ICDAS 6 lesions. However, we were unable to detect any significant difference or correlation in the severity of the lesions and the prevalence of yeasts. But we noted a startling difference in the multi-species co-habitation, with approximately over one third (35.1%) of the deeper lesions co-colonized as opposed to only a small minority (4.7%) of the shallower cavities. We are unable to attribute a reason for this intriguing finding, but it is tempting to speculate that the deeper lesions with perhaps lower pH and Eh may be conducive to such multi-species cohabitation. Further work, however, need to be done to confirm or refute our contention. Due to the abundance of the yeast flora and the species variations, we then evaluated their key pathogenic attributes that could possibly contribute to the caries process. In particular, we examined the pathogenic qualities that may lead to enamel/dentine demineralization as well as the dissolution of the organic dentinal matrix components (i.e. collagenolysis). For this purpose, we randomly selected 35 out of a total of 58 yeast isolates belonging to four predominant human candidal pathogens, C. albicans, C. krusei, C tropicalis, and C glabrata and set out to examine their key, putatively cariogenic virulent attributes. A number of isolates belonging to each species was selected as it is well known that multiple isolates are essential to decipher a specific pathogenic feature of a given species, due to the intraspecies trait differences [41]. Many studies conducted with a single/ dual strain belonging to a single species have made this recommendation due to such intraspecies phenotypic variability [41,42].
As mentioned, Candida species have an immense capacity to form biofilms on both abiotic and biotic surfaces [39,43]. Yeast biofilm formation is a sequential process beginning with adherence and proliferation of blastospores on the substrate, accumulation of extracellular matrix, and finally, biofilm dispersal. Inter and intraspecies variations on the rate of biofilm formation have been reported amongst Candida species although we did not discern such variability between the four tested species [27,[44][45][46]. All four species were uniformly good biofilmformers over the observation period of 48 hrs. implying their similar capacity to colonize the dentinal niche irrespective of species differences. This is borne out by the fact that both C. albicans and C. krusei, for instance, were equally predominant colonizers of the deep dentinal plaque, despite the general observation that the former is the superior biofilm former, and the latter lacks the capacity for hyphal development [43].
However, our results on the similar biofilm forming ability of several candidal species could possibly be a reflection of the XTT assay (which evaluates the gross metabolic activity of the biofilm mass) we used, and further work is necessary to confirm or refute this observation. Additionally, biofilm assays in a simulated saliva medium with a dietary carbohydrate supplement such as sucrose should be performed to mimic and reproduce realistic in vivo conditions. Also, in vitro studies of mixed yeast cultures for biofilm formation and acid production, mirroring the in vivo caries niche, should provide additional insights into the cariogenic traits of yeasts.
Candida species are supremely adapted to survive and thrive in acidic settings [19,20,24,47]. Some laboratory models have shown the potential of C. albicans to rapidly dissolve hydroxyapatite in a low pH milieu [48,49]. Our in vitro data on decalcification of tooth samples clearly show that both non-albicans Candida and C. albicans species released Ca++ from hydroxyapatite in an acidic milieu. Moreover, the whole process was concomitantly accompanied by a drastic reduction in pH from 7.0 to 3.0 over a 24 hr incubation period, plateauing thereafter. It is tempting to speculate that similar mechanisms may operate in vivo in deep dentinal cavities of S-ECC, within entombed, stagnant, low pH eco-systems devoid of salivary defenses. Indeed, one of the more powerful virulence traits of Candida species is their ability to control the local habitat through their unique metabolic machinery. Being both aciduric and acidogenic, the yeasts, for instance, can metabolize lactate produced by neighboring organisms to short-chain carboxylic acids such as formats and acetates that drive the hard tissue dissolution leading to cariesa mechanism not too dissimilar to that of mutans-streptococci [50]. Additionally, Crabtree negative yeasts such as C. albicans, can switch between fermentative and respiratory pathways depending on oxygen availability [51] and hence adapt to nutrient and oxygen fluxes extant within the lesion, particularly at its depth [52]. Curiously, we also noted that all four Candida species from the S-ECC lesions were uniformly acidogenic and aciduric, a finding that needs further evaluation by comparing strains derived from carious lesions and other non-carious oral co-locales.
Compared to enamel, dentine is made up of a considerable amount (approx. 30%) of organic matter, mainly collagen [53]. Hence, a candidate cariogen needs to possess attributes necessary for proteolytic degradation (collagenolysis) particularly at the advancing front of the deep caries-lesions [52]. Ultrastructural observations on the dentin-demineralization process suggest a two-stage proteolytic process. An initial demineralization of the mineral dentin content [54] exposing the collagen scaffold, and a secondary stage of collagen destruction by the proteolytic enzymes. Additionally, the exposed collagen scaffold appears to serve as a skeleton for further biofilm formation and lesion perpetuation into the pulpal regions [52].
In addition to the ability of Candida species to adhere to dentin-collagen [17,[55][56][57], they also possess proteinases active in acid media. Wu and Samaranayake [58] have demonstrated in early studies, the intraspecies variation in proteinase production of Candida species in in vitro salivary cultures as well as in artificial media. They noted that C albicans is a superior proteinase producer than C. tropicalis and C. parapsilosis [58]. Our results tend to confirm the latter hierarchy of virulence in Candida spp., as we too noted that C. albicans was the predominant protease producer amongst the all four tested species. Similarly, it is well recognized that C. albicans is the foremost phospholipase producer in comparison to its counterparts [59], and this was borne out by the current data where 70% of the C. albicans strains produced this potent enzyme.
Another interesting finding was the positive association between the phospholipase and protease production amongst all the strains that produced both these enzymes. There is one study that has noted a positive association between these two hydrolases [60], while others have been unable to show such an association. This implies that caries-associated candidal flora may have a relatively strong armamentarium, which facilitates collagenolysis. In general, though, the inter-and intra-species phenotypic variations reported above amongst the caries-associated Candida species implies that our findings are not too dissimilar to those of others who compared the virulence attributes of oral, vaginal, urinary and various other body sites [35,61].
Taken together, our findings imply that candidal flora play a critical role in S-ECC, and a deeper carious niche is conducive to multi-species habitation in comparison to more superficial lesions. Furthermore, it appears that, in addition to their prodigious and well recognized acidogenic and aciduric potential, the collagenolytic virulence traits of candidal species could be an additional contributing factor for the now apparent major role they play in S-ECC. Further, similar work in different child cohorts, in various geographic locales, is essential to confirm our findings, most of which are reported here for the first time.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by the University of Sharjah-Project grant of the oral microbiome research group. | 7,015.4 | 2020-01-01T00:00:00.000 | [
"Biology"
] |
gSeaGen: a GENIE-based code for neutrino telescopes
The gSeaGen code is a GENIE based application to generate neutrino-induced events in an underwater neutrino detector. The gSeaGen code is able to generate events induced by all neutrino flavours, taking into account topological differences between track-type and shower-like events. The neutrino interaction is simulated taking into account the density and the composition of the media surrounding the detector. The main features of gSeaGen will be presented together with some examples of its application within ANTARES and KM3NeT.
Introduction
Monte Carlo simulations play an important role in the data analysis of neutrino telescopes. Simulations are used to design and optimise trigger, reconstruction and event selection algorithms. A reliable simulator of neutrino events at the detector is therefore required.
GENIE [1] is a neutrino event generator for the experimental neutrino physics community. It has a focus on low energies and is currently used by a large number of experiments working in the neutrino oscillation field. The goal of the project is the development of a "canonical" Monte Carlo simulating the physics of neutrino interactions whose validity extends to all nuclear targets and neutrino flavours from MeV to PeV scales. At the moment GENIE is validated up to 5 TeV [2].
The gSeaGen code is a neutrino event generator for neutrino telescopes. It simulates particles created in all-flavour neutrino interactions, which may produce detectable Cherenkov light. gSeaGen uses GENIE to simulate the neutrino interactions. The kinematics of the generated particles is used as input to the codes simulating the detector response [3]. The code is written in C++. The main features are described in the following.
The neutrino interaction volume
The gSeaGen simulation code depends only on the detector size. The detector is defined inside the code by the so-called can. It represents the detector horizon, i.e. the volume sensitive to light. Within this volume the Cherenkov light is generated in the next steps of simulation to study the detector response. The can is a cylinder exceeding the instrumented volume by three light absorption lengths, L a , bounded by the sea bed from which the light can not emerge (see Fig. 1).
The interaction volume is the volume where a neutrino interaction can produce detectable particles. In case of electron neutrinos and neutral current interactions of muon or tau neutrinos, resulting particles may be detected only if they are generated inside the light sensitive volume (shower-like events) and the interaction volume is defined as a cylinder coincident with the can and entirely made by seawater. If muon or tau neutrino charged current interactions are simulated, secondary muons may be detected also if the interaction vertex is outside the can (track-type events). In this case the interaction volume is a cylinder made by a layer of rock and a layer of seawater surrounding the can. Its size is dimensioned according to the muon maximum range in water and in rock, evaluated at the highest energy of simulated neutrinos. The maximum muon range is input by the user. In the simulations reported in this work, the maximum muon range was evaluated through a Monte Carlo simulation using the MUSIC code [4].
The target media
Four different target media are defined: SeaWater, Rock (used to define the interaction volume), Mantle and Core (entering in the calculation of the transmission probability through the Earth). The composition of all media is set by the user, providing the possibility to study the systematics due to medium compositions and also to simulate under-ice detectors. The density profile of inner layers of the Earth (Mantle and Core media) is described according to the Preliminary Reference Earth Model (PREM) [5].
Simulation algorithm
The energy range considered for event generation is binned in equal divisions of log-energy. For each bin, the interaction probability is scaled up to reduce the number of trials. The probability scale is the maximum interaction probability (i.e. probability at maximum energy and for the maximum possible path length) summed over initial states and it is calculated by GENIE [1].
If track-type events may be generated, the interaction volume and the number of events are also scaled. The neutrino energy is drawn according to a power-law energy spectrum and its direction is randomly extracted according to a flat distribution in the solid angle. The track vertex is drawn on a circular surface (outside the interaction volume and covering its projection onto a plane perpendicular to each neutrino's direction).
Once the neutrino is generated, its interaction is simulated using the GENIE event generation driver class GMCJDriver [1]. The neutrino-induced particles generated inside the can are stored in the output file. Muons generated outside the can are propagated with the MUSIC code [4] and stored if they reach the can surface.
Calculation of the event weight
During the simulation, the code assigns a weight W evt to each event in order to normalise the generation to a real neutrino flux. The event weight is the product of the real neutrino spectrum φ(E ν , cos θ ν ), at the generated neutrino energy E ν and direction θ ν , and the generation weight W gen : (1) The generation weight is defined as the inverse of the simulated neutrino spectrum: where I E and I θ are the energy and angular phase space factors, T gen is the simulated time interval, A gen is the generation area, N ν is the number of simulated neutrino types and N T ot is the total number of simulated neutrinos. P scale is the GENIE interaction probability scale (see Sec. 4). The neutrino transmission probability through the Earth is calculated as: where σ(E ν ) is the total cross section per nucleon (taking into account the different layer compositions); ρ l (θ ν ) is the amount of material encountered by a neutrino in its passage through the Earth. The latter is computed with the line integral ρ l (θ ν ) = L ρ Earth(r) dl, being L the neutrino path at the angle θ ν and ρ Earth(r) is the PREM Earth density profile [5].
Calculation of the systematic weights
The accuracy of the input simulation parameters is known and the uncertainties related to nu interactions can be propagated with GENIE. For each input physics quantity P, a systematic parameter x P is introduced. Tweaking this systematic parameter modifies the corresponding physics parameter P as follows: where δP is the estimated standard deviation of P. The calculation of the systematic errors in GENIE is based on an event reweighting strategy. A description of the full reweighting scheme is reported in [6].
The evaluation of the systematics has been implemented in gSeaGen, using the GENIE class GReWeight [6]. The implementation accepts single parameters or a list of them as input. In the latter case, the code treats all parameters at the same time and calculates the global systematic weight. If the calculation is activated, the systematic weights w sys are written in the output file. The modified distributions are obtained by multiplying the event weights by w sys .
The development of gSeaGen code started within ANTARES Collaboration [7], providing the possibility to use modern and maintained neutrino interaction codes/libraries. Currently, the code is used as a cross-check for GenHen [8], the standard generator code written in FORTRAN, limiting the comparison within the present GENIE validity range. gSeaGen can generate high energy events in ANTARES and KM3NeT-ARCA [9], when the GENIE extension at the PeV scale will be available. At present, gSeaGen is the reference code for simulation of the KM3NeT-ORCA detector [10].
As an example of application of gSeaGen, a flux of muon neutrinos and anti-neutrinos have been generated for the ANTARES detector can. The spectrum of generated events have been shaped according to the Bartol atmospheric muon flux [11]. The results are reported in Fig. 2 in terms of the energy spectrum and angular distributions of neutrinos producing detectable events (i.e. inside or reaching the detector can). Figure 2. Energy spectrum and angular distribution of atmospheric muon neutrinos and anti-neutrinos producing detectable events at the ANTARES detector can. Generated events are weighted according to the Bartol atmospheric muon flux [11]. Results from the standard neutrino event generator GenHen are shown for comparison [8]. | 1,919.4 | 2016-02-01T00:00:00.000 | [
"Physics"
] |
BASIC PROBLEMS WITH THE USE OF RESILIENT DENTURE LINING MATERIALS: LITERATURE REVIEW
Complicated cases in the treatment of totally edentulous patients, as well as in the fabrication of obturators and epitheses, require the use of resilient lining materials (RLMs). These materials have several disadvantages, which is why many dental practitioners do not recommend them. The tendency to breaking the bond between the hard acrylic resin and the resilient liner is a fact with unpleasant consequences, mainly found with the use of cold-curing siliconebased resilient liners. Methods to improve the bond between the hard denture base and the RLM have been the subject of numerous publications. The efforts of the authors are focused in two directions: mechanical or chemical treatment of the denture base or a combination of both. Other commonly discussed problems with the use of RLMs are the difficult and even impossible repair and the retention of oral fluids, fungi and other microorganisms due to the porosity of RLMs. Despite the wide variety of tools offered by different manufacturers, the issue of optimal and high-quality mechanical treatment of the polymerized resilient material has not yet been fully resolved.
. Lower denture, lined with a silicone-based resilient material (authors' clinical case) According to Grant et al. [1], the thickness of the resilient liner should be at least 3.0 mm, and, according to Basker, Davenport and Thomason [2], a thickness greater than 2.0 mm is not needed. In 2013, Lima et al. [5] also found that the stress in the underlying mucosa is the lowest when the thickness of the lining material is not more than 2.0 mm. The greater thickness of the liner reduces the thickness of the base of hard acrylic resin, which may lead to its fracturing, Fig. 3.
Fig. 3. Fractures of dentures lined with a siliconebased resilient material (authors' clinical cases)
Recent studies have indicated that the retention of microorganisms is the result of insufficient cleaning. According to Basker, Davenport and Thomason [2], a diet with fewer carbohydrates would reduce the supply of nutrients to the microorganisms, Fig. 2. According to the authors, cleaning with sodium hypochlorite is most effective, although it may lead to discoloration and corrosion of the metal elements (if any).
Fig. 2.
Colonization of Candida albicans on lower dentures, lined with a silicone-based resilient material after 6 months of use (authors' clinical cases) Methods for strengthening the base are provided in the literature. Basker, Davenport and Thomason [2] and others [6] propose the inclusion of a metal plate. A number of other authors [7 -10] recommend the use of fiberglass. Vergani et al. [11] and other authors [12] have reported the effect of post-polymerization microwave radiation on the flexural strength of Lucitone 550 hard heat-curing acrylic resin, which may increase the durability of the dentures.
Another major problem is the bond between the hard acrylic resin and the RLM.
Bond characteristics
The bond between the hard acrylic resin and the RLM can be of two types: mechanical and chemical, depending on the chemical composition of the materials. Acrylic-based RLMs chemically bond with the acrylic resin of the denture. In the case of silicone-based lining materials, the bond is predominantly micromechanical, and additional means -adhesives or polymeric solvents (primers), are used to improve it, Fig. 4a, b. When bonded by a polyurethane adhesive containing an A-silicone reactive component, a bond resistant to the tensile strength of 200 N/cm 2 is formed [13]. Adhesives contain a polymeric substance in a solvent. The polymeric substance may be a reactive molecule (organosilane) or a molecule such as PMMA in a solvent. The use of adhesive is appropriate for the indirect technique since the drying time is prolonged. In the direct technique, primers are more commonly used. The solvent contained in the primer dissolves the plastic surface and introduces the adhesive bonding polymer. After evaporation of the solvent, the hydrocarbon chains of the in the primer bind to the polymer of the matrix. The formed network in the superficial dissolved layer is a mechanical-chemical bond. On the other hand, the polymer contains reactive Si-H and vinyl groups that are ready for chemical bonding with the A-silicone. The primer technology is affordable, easy to implement, held for a short time, and provides tensile strength and shear strength of 190 -220 N/cm 2 [13]. The most commonly used primers are: ethyl acetate, dichloromethane, a mixture of methoxy and ethoxy silanes, methylene chloride, etc. The durability of the bond depends on the mechanical stresses, the coefficient of thermal expansion, the modulus of elasticity of the various materials, the solubility and the imbibition [14].
Bond breaking can occur at the border between the different materials, and then it is defined as adhesive. If the cause of the bond breaking is a tear of the resilient material, it is defined as cohesive, Fig. 5 and 6.
Bond improving methods
The improvement of the bond between the hard denture base and the RLM has been the subject of numerous publications. The efforts of the authors are focused in two directions: mechanical or chemical treatment of the denture base or a combination of both. In 2012, Philip, Ganapathy and Ariga [27] examined the tensile strength (after preliminary mechanical and chemical treatment) between a polyvinylsiloxane soft liner and an acrylic denture base and concluded that pretreatment with a sandblasting apparatus and a monomer significantly increased its value. They obtained the following results for the different treatments: acetone treatment -0.043 MPa, monomer treatment -0.054 MPa, surface sandblasting -0.0615 MPa, jet-abrasive treatment -0.0727 MPa, sandpaper treatment and a monomer -0.082 MPa and jet-abrasive treatment plus a monomer -0,111 MPa. After being stored in water, all values increased.
In the same year, Kazanji and Abid Al-Kadder [28] investigating Molloplast B (heat-curing silicone), GS Reline (cold-curing silicone), and Bony plus (cold-curing acrylicbased material), offered a 032-bur treatment to increase bond strength. Tensile strength was increased more for Molloplast B than for GS Reline and Bony plus. Arafa [29] also has reported that treatment with a monomer increases bond strength. According to him and other authors [30], treatment with a laser weakens the bond, and treatment with Al2O3 sand strengthens it. On the other hand, a number of other publications do not recommend jet-abrasive treatment, but only one with a monomer [31 -33]. Atsü [34], after testing several types of treatment (adhesive, sandblasting, silica coating, and combination) with Ufi Gel P, found that the strongest bond was when using an adhesive alone (1.35 MPa). The values for the jet-abrasive treatment and the silica coating were 0.28 MPa and 0.34 Mpa, respectively. The low bond strength can be explained by the stresses occurring in the border band and the reduced ability of the material to penetrate the PMMA irregularities. This was confirmed by other authors [35 -37]. Korkmaz et al. [19] applied several conditioning methods: The minimum binding strength acceptable for clinical use is 0.44 MPa [15 -17]. Popular methods of evaluating the bond strength are tests for tensile strength, shear strength [14] and peel bond strength [17 -20]. The peel bond strength is a test to measure the strength of the bond between two materials. Although tensile strength tests do not simulate clinically common forces, the method is considered indicative.
Wiêckiewicz et al. [21] investigated the adhesion strength by comparing tensile strength and shear strength. 46 MPa), and it was not recommended, especially for polyamide resins (Deflex). Akin et al. [38] confirmed the conclusion of Korkmaz et al. [19]. They treated acrylic denture bases with sandblasting and various lasers (Er: YAG, Nd: YAG and KTP lasers) and reported that treatment with an Er: YAG laser increases the bonding strength and sandblasting and Nd: YAG and KTP lasers reduce it. Tugut et al. [39] found that long pulse length Er: YAG (300 mJ, 3W) causes significant changes in texture and improves adhesion between the denture base and the resilient liner.
Gupta [15], after treating the denture base surface with acetone for 30 seconds, the monomer for 180 seconds and methylene for 15 seconds, has found that the flexural strength of the acrylic resin decreases. The flexural strength values were determined after simulating accelerated aging by thermal cycling (500 cycles at 5°C -55°C) and 30 minutestorage in water. For the control untreated base surfaces, the flexural strength values were 781.19 kg/cm 2 , when treated with monomer for 180 seconds -725.09 kg/cm 2 , when treated with methylene chloride and acetone, 715.78 kg/cm 2 and 711.81 kg/cm 2, respectively. Demir et al. [40] studied the effect of different reactive surface agents on the tensile strength between the RLMs and the denture base. Maleic anhydride (MA) is a reactive monomer that contains an unsaturated double bond and acid anhydride groups that allow chemical bonding with the resin. Treatment with 2% MA in butanone, added to the primer prior to adhesive application, has shown the highest tensile strength values of 2.53 Mpa, and the lowest values were obtained with 20% MA. Hatamleh et al. [8] have reported increased bond strength of Molloplast B to a fiber-reinforced StickTech acrylic base. According to Lassila et al. [9], denture base reinforcement with fiberglass should be carefully considered. Increasing bond strength is not valid for all materials. According to the authors, the bond strength of the cold-curing silicones studied by them is determined by the binding agent. Only the Softreliner Tough with the ethyl acetate primer was registered to increase the bond strength. For a primer polymer in 2-butanone (Ufi Gel SC) and polyacryl in dichloromethane (Vertex SoftSil 25), the bond strength even decreases, as with the Eversoft acrylic-based resin material. In 2017, Hristov [41] suggested a method for improving the bonding by using retentive pearls in the hard resin/resilient material border zone.
Studies in recent years have confirmed that the bond strength between acrylic resilient lining materials and polymethyl methacrylate denture base is of the highest quality. Bonding to acrylic plastic for CAD-CAM -IvoBase CAD is the weakest and the strongest when the resilient material is applied to non-polymerized polymethyl methacrylic resin.
The latter is only possible with acrylic-based resilient materials and with Molloplast B -a resilient silicone-based heatcuring material. A number of authors have confirmed that pretreatment of the denture base with oxygen plasma, monomer, acetone or isobutyl methacrylate (iBMA) leads to improved bond strength with the resilient material and thermocycling and sandblasting -to its deterioration [42].
Treatment of RLMs
After compression in the laboratory or functional molding in the patient's mouth, there is always a surplus of resilient material that has to be removed. Initially, this is done with a sharp scalpel. This is followed by machining with various cutting tools [13]. Despite the wide variety of tools offered by different manufacturers, the issue of optimal and high-quality machining of polymerized resilient material is not fully resolved. It is necessary to distinguish between shape correction and smoothing. In the resilient material/hard resin border zone, materials with clearly different properties come in contact. The roughness of the materials is undesirable because it creates conditions for bacterial retention and the development of fungal infections. All these features greatly increase the requirements for the treating tools. The various RLM manufacturers offer their own cutting tools in the commercial kits, Fig. 7 [13].
The special milling cutters of Komet, Germany are GSQ or FSQ. When working with them, low pressure and speed of 15,000-30,000 r/min are used. For Mucopren soft® products of Kettenbach, Germany and Elite® soft relining of Zhermack, Italy, double-cut hard alloy cutters are available. The elastic material UfiGel C® of Voco, Germany is treated with non-woven abrasive discs with various sizes of the abrasive particles under the name Lisko (Erkodent, Germany). The discs are designed for pre-polishing and are of four types -rough, medium, fine and super fine. In the first three varieties, the abrasive particles are coated with synthetic resin and in the finest -with latex. For its products, Molloplast B®, Mollosil® and Mollosil plus®, Detax, Germany offers a toolkit containing cutters, caps with different abrasive properties (abrasive particle size of 150 µm and 180 µm) and pre-polishing discs, Fig. 8.
The instrument Soft Wizard ® of NTI-Êahla, Germany, is a soft, elastic wheel with an optimum size of the abrasive particles, Fig. 9. Pesun, Hodges and Lai [43] examined the surfaces of RLMs after treatment and have reported that all tools adequately reduce the resilient liner volume. According to the authors, machining with diamond files leaves many scratches and should be followed by machining with carbide finishing burs or carbide milling cutters. The authors compared the surfaces of polished and unpolished material and noted that the final polishing with pumice and tin oxide paste gives the best results. The pumice erases scratches on the surface and provides a smooth surface for tin oxide polishing. It is confirmed that Molloplast B (Detax, GmbH & Co, KG, Germany) should be treated with stones of different abrasive properties prior to polishing. Immersion of soft liner dentures in ice water facilitates their treatment. Silicone-based RLMs are difficult to polish. To optimize their surface treatment, different varnishes for surface sealing are available on the market for dental products. These are incompletely filled, low-viscosity Asilicones that cannot chemically bind to the base material due to the lack of free binding sites in the polymerized silicone. The lacquer coating remains as a film on the silicone, which is peeled off after a few months and needs new application.
CONCLUSION
Despite the efforts of many authors to find a solution to improve the compromised stability of the full dentures [44][45][46][47] and the bond between the denture base and the resilient lining material in the case of cold-curing silicone-based resilient liners, this problem is still unsolved and remains a matter of scientific interest. | 3,192 | 2021-05-11T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Annals of the New York Academy of Sciences Laterality and the Evolution of the Prefronto-cerebellar System in Anthropoids
There is extensive evidence for an early vertebrate origin of lateralized motor behavior and of related asymmetries in underlying brain systems. We investigate human lateralized motor functioning in a broad comparative context of evolutionary neural reorganization. We quantify evolutionary trends in the fronto-cerebellar system (involved in motor learning) across 46 million years of divergent primate evolution by comparing rates of evolution of prefrontal cortex, frontal motor cortex, and posterior cerebellar hemispheres along individual branches of the primate tree of life. We provide a detailed evolutionary model of the neuroanatomical changes leading to modern human lateralized motor functioning, demonstrating an increased role for the fronto-cerebellar system in the apes dating to their evolutionary divergence from the monkeys (∼30 million years ago (Mya)), and a subsequent shift toward an increased role for prefrontal cortex over frontal motor cortex in the fronto-cerebellar system in the Homo-Pan ancestral lineage (∼10 Mya) and in the human ancestral lineage (∼6 Mya). We discuss these results in the context of cortico-cerebellar functions and their likely role in the evolution of human tool use and speech.
Introduction
Lateralization in human motor functioning is often considered as a principal factor explaining the exceptional capacity of humans to learn complex motor skills in a wide range of tasks. Lateralization in motor behavior and in its underlying neural systems is, however, not unique to humans, with extensive evidence demonstrating lateralization in primates, 1-3 nonprimate mammals, 4-6 birds, 7,8 fish, 9,10 reptiles, 11,12 and amphibians 13,14 (e.g., pawedness in toads, 15 footedness in birds, 16 and handedness in fish 17 ). Considering the evidence for an early vertebrate origin for lateralized motor behavior and its close links to neural structural asymmetries, 18,19 human lateralized motor functioning could be con- [The copyright line for this article was changed on July 18, 2014 after original online publication.] sidered in a broad primate evolutionary context of neural organizational patterns with possibly deep evolutionary roots. Here, we aim to elucidate aspects of the neural evolutionary origin for complex motor learning and its lateralization, in the context of millions of years of divergent primate evolution.
We focus on quantifying the evolution of a brain system fundamental to motor control (the frontocerebellar system) across 46 million years of divergent evolution in anthropoids. The brain is organized as a distributed system, with different anatomically and functionally connected areas interacting in coordination to produce complex behaviors. The acquisition and adaptation of complex manual motor sequences involves activation of a frontoparietal praxis network involved in hand manipulation skills, 20,21 as well as a frontocerebellar-basal ganglia network involved in novel motor sequence learning. [22][23][24][25] Within the frontocerebellar network, the lateral hemispheres of the cerebellum receive input exclusively from the cerebral cortex projecting to the frontal motor and prefrontal areas via the dentate nucleus. 26 The traditional (and empirically well-supported) theory of cerebellar function is that it encodes and continuously refines input-output relationships between motor commands and their consequences 27 in both feed-forward and inverse feedback models of ongoing movements during action execution. 28 In relation to its prefrontal projections, the cerebellar cortex simulates the way in which the outputs of prefrontal areas are processed, allowing it to issue feed-forward commands of correction signals back to the frontal lobe circuits. 29 This neural system crucially underlies the process of motor learning and allows the development of motor plans that are not coded for limb-specific movements, but for the goal of an action. 30 Functional neuroimaging studies of complex forms of motor learning confirm this interpretation of the role of the prefronto-cerebellar system in motor learning, by indicating that in the initial stages of learning, prefrontal processes control complex action execution, but that when the motor sequence is learned as a specialized automatic execution, cerebellar activation increases and prefrontal activation decreases. [31][32][33] Considering the function of this prefronto-cerebellar system in the context of human evolution, we can hypothesize that its elaboration under natural selection could explain humans' exceptional capacity to acquire, and continuously and dynamically adapt, complex forms of motor behavior.
Evidence for laterality in the fronto-cerebellar system primarily comes from human studies. The distributed cortical network involved in complex tool use is functionally left biased, 20 and both prefrontal cortex 34 and frontal motor areas 35,36 have been demonstrated to be structurally lateralized in relation to language processing and handedness (e.g., neural asymmetry in primary motor cortex corresponds to behavioral lateralization in hand preference). Cerebellar directional asymmetry of size has been observed for lobules III and IV (left < right) and VI (left > right), 37 possibly also related to handedness. 38 Laterality is thus an additional feature of the prefronto-cerebellar system in humans, and may underlie humans' exceptional capacities in tool use and language.
Despite comparative evidence suggesting increased prefrontal input to the cortico-cerebellar system 39,40 and of its lateralization in human evolution, there is only limited information on the evolutionary history of these patterns of brain system connectivity. The main reasons for this are that previous studies have compared humans with a maximum of three nonhuman primate species (without consideration of their phylogenetic relatedness), and that methods to infer detailed evolutionary pathways for all branches in a phylogenetic tree have only recently become available. 41,42 Reconstructing the detailed evolutionary history of the prefrontocerebellar system and of its structural lateralization is of crucial importance: this will both provide more detailed information on the selective pressures that have defined its adaptive role in primate behavior, and enable assessment of the deeper evolutionary history of its structural lateralization.
Our previous work on this aspect of brain system evolution 43 demonstrated a selective and correlated expansion of both frontal cortex and the cerebellar hemispheres at the dawn of the ape and great ape radiations. Here, we extend our previous work by differentiating between prefrontal (PF) and frontal motor areas (FM) within the frontal cortex and by delineating the part of the cerebellar hemispheres that has the closest functional association with prefrontal cortex, i.e., the posterior lobe of the cerebellar hemispheres (PCH). [44][45][46] We further differentiate between the left and right hemispheres in each case, allowing inference of the evolution of laterality in the fronto-cerebellar system. We have collected information for 16 extant primate species, and quantified evolutionary rates of hemispherespecific volumetric changes in the PF, FM, and PCH along individual branches of the primate phylogenetic tree. We aim to infer the evolutionary origin of a hypothesized shift from a predominantly frontal motor to a predominantly prefrontal involvement in the cortico-cerebellar system, and to examine the possible association of that shift with increased structural lateralization.
Brain data
We examined both hemispheres of 29 individuals from 16 anthropoid species (see Table 1). Data consist of serially sectioned brains from the Stephan, Zilles, and Zilles-Amunts collections 47 for frontal motor areas and prefrontal cortex were taken from our previous work, 48,49 where they were measured using a delineation protocol involving a bootstrap approach of estimating cumulative volumes at successive slice intervals along the anterio-posterior and posterior-anterior axes of the cytoarchitectonically defined frontal lobe. Data presented here indicate the cumulative volumes up to the 7th section interval of the anterior-posterior axis (PF) and posterior-anterior axis (FM) (Supporting Fig. 1). 50
Comparative cerebellar anatomy
The cerebellum occupies only 10-15% of total brain volume in primates, 51 but it contains roughly half of the brain's neurons. 52 Although cellular organization is very uniform compared to the cerebral cortex, 27 there is a clear differentiation between different parts of the cerebellum in terms of inputoutput relationships with other brain areas. 26 The macro-anatomical subdivisions of the cerebellum across mammals involve 10 lobules 53-55 (defined as I-X), which have been found to relate to distinct topographical cortical connectivity patterns. 45,46 In particular, lobules V, VI, VIIb, and VIIIa have reciprocal connections with frontal motor areas, whereas portions of lobule VI and the entirety of crus 1 and crus 2 (subdivisions of lobule VII that make up an average of 40% of total cerebellar gray matter in humans 37 ) have connections with the prefrontal cortex. 44,45 Delineation of the posterior cerebellar hemispheres We delineated all lobules posterior to the primary fissure. This measurement comprises lobules VI-X, thus including the prefrontal projecting lobules VI-VII that encompass the majority of cerebellar gray matter volume posterior to the primary fissure (up to 64% in humans 37 ). Our present delineation of the posterior cerebellar hemisphere differs from our previous delineation of the overall cerebellar hemispheres 43 in that it focuses more specifically on prefrontal projecting lobules VI-VII. All measurements are presented in Table 1 (for example delineations, see Supporting Fig. 2). Volumes were computed using the Cavalieri procedure. 56,57 Systematic samples from each brain were taken, the position of the first section was chosen randomly, and subsequent sections were chosen based on a regular sampling interval. Twenty or more sections per brain 58 were used and digitized with a flatbed scanner at 800 dpi.
Phylogenetic scaling
In any comparative analysis, raw data will consist of phylogenetically nonindependent data points (e.g., sampled by species); comparisons of such points need to be weighted for phylogenetic distance. Allometric analyses of comparative datasets therefore incorporate phylogenetic trees, to account for differences that are due to phylogenetic relatedness. We use phylogenetically reduced major-axis regressions with a likelihood-fitted lambda model to obtain residuals from regressions of brain structure size on the rest of brain size. 59 "Rest of brain" was here defined as total brain size minus size of PCH, FM, and PF. These residuals are used as measures of the relative size of a particular brain structure. 60 Relative sizes of brain structures were then scaled using phylogenetically generalized least squares regressions. 61 All analyses were performed in the R software environment. [62][63][64] Evolutionary rates and inferring evolutionary history We use an adaptive peak model of evolution to infer rates of change for individual branches along the tree of life. [41][42][43] This model allows for rates of change to be different for each branch in a phylogenetic tree in response to the wanderings of adaptive peaks through phenotype space. We use the adaptive peak model as formalized in the method of independent evolution 41,42 because it allows for the incorporation of more specific models such as Brownian Motion and Ornstein Uhlenbeck as special cases by collapsing its algorithms accordingly under relevant conditions. This formalization has further been shown to accurately estimate fossil values of brain and body size in primates, bats and carnivorans, 41,42 supporting its validity in estimating evolutionary trends for brain structure sizes in primates.
Reconstructing rates of the evolution of traits allow identification of branches in the evolutionary tree associated with episodes of selective and correlated trait coevolution, which can be compared with the general scaling patterns revealed by phylogenetic regressions. 41 Evolutionary trends on individual branches may align with or diverge from more general evolutionary patterns as different species follow different adaptive directions. Importantly, Figure 1. Results from a phylogenetically generalized least squares analysis of correlations in the relative size of the posterior cerebellar hemispheres, frontal motor areas, and prefrontal cortex. Hemisphere-specific regressions between frontal and cerebellar structures were performed contralaterally, because evidence suggests the importance of contralateral connections in the frontocerebellar system. 44 recognition of general scaling regularities simply identifies average trends across all species in a sample, and does not mean that changes in each branch of the phylogenetic tree necessarily exemplify them. The extent to which changes in particular branches align with the clade-general correlation pattern can be revealed by a comparison of evolutionary rates. 41 This approach complements the use of phylogenetic regressions by allowing a more detailed investigation of the evolutionary history of trait coevolution along particular evolutionary branches.
Comparative correlations between PCH, FM, and PF
Phylogenetically generalized least squares analysis with a maximum likelihood-fitted lambda model (Fig. 1)
Laterality in PCH, FM, and PF
We investigated structural lateralization in PCH, FM, and PF by scaling hemisphere-specific volumes of each of these structures to rest of brain size (defined as brain size minus PCH, FM, and PF). Lateralization was assessed by comparing scaling coefficients (intercepts and slopes of the regression) for the left and right hemispheres. In this approach, Table 2. Results from a phylogenetically generalized least squares analysis of scaling of hemisphere-specific brain structures/areas to rest of brain size. Rest of brain size is here defined as total brain size minus the size of PCH, FM, and PF significant results refers to comparisons of the 95% confidence intervals of the scaling coefficients from different analyses; if the scaling coefficient of one analysis lies outside the confidence interval of another analysis, it is considered to be significantly different at P < 0.05. Results are summarized in Table 2. For PCH and FM, there is no asymmetry in general scaling trends in primates. However, and consistent with previous analyses, 48 for PF there is significant hyperscaling of the left compared to the right hemisphere, together with a significantly lower intercept.
The evolutionary history of the fronto-cerebellar system We compared rates of evolution of PCH, FM, and PF relative to rates of evolution of rest of brain size, to investigate which branches align with or diverge from clade-general coevolutionary trends. We focus here on branches where our methods have reconstructed a disproportionate increase in PCH and either FM or PF, since this identifies branches in which the fronto-cerebellar system plays an unusually strong adaptive role. Disproportionate FM-PCH increase characterizes the ape (∼30 Mya) and great ape (∼20 Mya) ancestral branches, but the trend does not continue in branches leading specifically to Pan and to Homo (Fig. 2A). Disproportionate PF-PCH increase also characterizes the ape and great ape ancestral branches, but that trend continues in the Homo-Pan (∼10 Mya) ancestral branch and in the human (∼6 Mya) ancestral lineage (Fig. 2B).
To infer the evolutionary origin of a hypothesized shift from a predominantly frontal motor to a predominantly prefrontal involvement in the corticocerebellar system, we quantified the increase of PF relative to FM in relation to PCH. These results reveal directional selection for an increased PF contribution to the fronto-cerebellar system in branches leading from the Old World anthropoid ancestral lineage through to the human ancestral branch, with the most pronounced trends in the ape ancestral lineage and in the Homo-Pan ancestral lineage (Fig. 2C). Results further indicate that these trends are similar when assessing each contralateral cortico-cerebellar pattern (left PF/FM and right PCH versus right PF/FM and left PCH; Fig. 2 lists results across both hemispheres, and Supporting Figs. 3 and 4 give results for each contralateral pattern).
Discussion
The fronto-cerebellar brain system plays a crucial part in the automatization of learned motor sequences and the incremental acquisition of movements into well-executed behavior. 25,27,[65][66][67] Reconstructing the evolution of this brain system will advance understanding of the evolution of humans' exceptional motor capacities. To investigate the evolutionary history of humans' increased (lateralized) prefrontal input to this brain system, we delineated relevant brain structures for 29 individuals from 16 different primate species, and quantified evolutionary rates on separate branches of the primate phylogenetic tree. Our results indicate a significant contrast in the scaling of left versus right PF, but not of left versus right FM and PCH ( Table 2). The lack of general trends for volumetric asymmetry in PCH across these 16 anthropoid species is consistent with findings of within-species variation in chimpanzees demonstrating "no population-bias in the lateralization of the cerebellum." 68 When considering hemisphere-specific correlations within the frontocerebellar system, we find stronger evidence for a left FM-right PCH coupling than for the contralateral pattern (Fig. 1). This is likely to be principally related to a frontal motor praxis system involved in primate hand manipulation skills. PF-PCH coevolutionary coupling at the level of correlations between contralateral hemispheres is not found consistently across the whole primate sample (Fig. 1).
To investigate lineage-specific patterns of brain reorganization, 41 we quantified rates of evolution for each branch in the phylogeny. Although our sample included multiple individuals for several key species (including all the ape species represented) and are consistent with findings of the distribution of structural asymmetries in larger samples of specific species by other workers, 68 future work should look to expand sample sizes for each species to increase the robustness of lineage-specific inferences. In Figure 2 we highlight branches for which we have reconstructed a coordinated and disproportionate increase in the size of the fronto-cerebellar system. This phylogenetic mapping reveals strong selective investment in both FM-PCH and PF-PCH at the dawn of the ape (∼30 Mya) and great ape (∼20 Mya) radiations, differentiating them from monkeys ( Fig. 2A and 2B). A subsequent expansion of PF-PCH, but not FM-PCH, is indicated on the Homo-Pan ancestral branch (∼10 Mya) and in the human lineage (∼6 Mya). When comparing evolutionary rates for PF and FM, the ape ancestral branch, the Homo-Pan clade and the human ancestral lineage all appear to be characterized by shifts toward an increased role for PF in the fronto-cerebellar system (Fig. 2C).
These results, and the finding of left PF hyperscaling as a general trend in our sample, are consistent with observed structural brain asymmetries in humans and chimpanzees, absent in other nonhuman primates, for several relevant frontal and cerebellar areas, 3,[68][69][70][71][72][73] suggesting that at least part of the neural foundation for human complex motor behavior was present before the ancestral split with the lineage leading to chimpanzees. With increased selection for context-and goal-dependent, dynamic adjustment of learned motor plans (e.g., tool use), the prefrontal input to the cortico-cerebellar system may have become more pronounced and led to selection for increased lateralization. This suggestion is also supported by studies in chimpanzees demonstrating that individual variability in structural asymmetry of the PCH is related to the propensity to perform complex activities such as tool use and aimed throwing, and handedness for a tool-use task (termite fishing). 68 Our finding that humans and chimpanzees share a preadaptation for increased prefrontal involvement in the fronto-cerebellar system, which continued in the human lineage but stabilized in the chimpanzee lineage, may shed light on the evolutionary role of the fronto-cerebellar system in tool use and vocal articulatory control, and on the dif-ferences in tool use and vocalizing abilities between these two species.
Humans and chimpanzees share the capacity to perceive the affordances of objects as potential tools 74,75 and the ability to modify the kinetic energy produced in relation to the affordances of the task constraints. 76,77 In both species, however, these capacities involve an experience-based learning process where increased experience results in increased efficiency. In other words, the ability to move from initial action execution of complex motor sequences to specialized automatic execution through experience plays a crucial role in nut-cracking in both humans and chimpanzees. 78 Stone flaking, however, a bimanually coordinated task that became a habitual behavior within the hominin radiation, may require greater lateralization of hand function because it involves the two hands working at two different levels of resolution in a coordinated fashion to yield a common functional outcome (the hammering hand needs to be controlled in such a way as to transmit the appropriate amount of kinetic energy at impact with considerable accuracy at the point of percussion, whereas the postural hand has to rotate and adjust the position of a core to prepare for the following hammer strike, and stabilize the core against the shock of the blow 78 ). Stout et al. 79 have found increased frontal activation in stone flaking tasks, with site, lateralization and level of activation varying as a function of task complexity and task familiarity, but the extent of any similarities and differences with activation patterns in a nut-cracking task have not yet been studied in humans or in chimpanzees.
Humans are additionally distinguished from chimpanzees in possessing the capacity for articulate speech. Posterior cerebellar activation in language tasks has been found to be right lateralized and focused in lobule VI and crus 1, 80 which, as noted above, are prefrontal-projecting areas. Fluent speech requires the serial ordering of phonemes and syllables, and it has been shown that preparation and production of more complex syllables and syllable sequences recruit left hemisphere inferior frontal sulcus, posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, to supplement the more basic cortical and subcortical components of the speech production system. 81 Activation patterns in a verbal motoric rehearsal task suggest the existence of a frontal (BA44/46) and superior cerebellar (lobuleVI/crus 1) articulatory control system, 82 and there is increased PF activation with increased working memory loads in a speech motor control task. 83 Thus, it is plausible that the expansion of the prefrontal system and of prefrontalprojecting cerebellar lobules in humans 39 also relates to adaptations for articulate speech.
Acknowledgments
This work was supported by the UK Natural Environment Research Council, Grant number NE/H022937/1.
Conflicts of interest
The authors have no conflicts of interest.
Supporting Information
Additional Supporting Information may be found in the online version of this article. | 4,907 | 0001-01-01T00:00:00.000 | [
"Biology"
] |
Minimizing Energy and Computation in Long-Running Software
: Long-running software may operate on hardware platforms with limited energy resources such as batteries or photovoltaic, or on high-performance platforms that consume a large amount of energy. Since such systems may be subject to hardware failures, checkpointing is often used to assure the reliability of the application. Since checkpointing introduces additional computation time and energy consumption, we study how checkpoint intervals need to be selected so as to minimize a cost function that includes the execution time and the energy. Expressions for both the program’s energy consumption and execution time are derived as a function of the failure probability per instruction. A first principle based analysis yields the checkpoint interval that minimizes a linear combination of the average energy consumption and execution time of the program, in terms of the classical “Lambert function”. The sensitivity of the checkpoint to the importance attributed to energy consumption is also derived. The results are illustrated with numerical examples regarding programs of various lengths and showing the relation between the checkpoint interval that minimizes energy consumption and execution time, and the one that minimizes a weighted sum of the two. In addition, our results are applied to a popular software benchmark, and posted on a publicly accessible web site, together with the optimization software that we have developed.
Introduction
Long-running programs include database systems, operating systems, and platforms that support sensor systems. Such software needs to be very reliable, but should also be efficient in execution time and energy consumption. Thus its reliability [1] is often assured via checkpoints, to avoid that each failure leads to excessive overhead in execution time [2][3][4] and energy consumption [5].
Indeed, among the mechanisms that restore or preserve system consistency after failures [6], Checkpointing and Recovery (CR) is used widely to periodically save an up-todate copy of system or program state that is used to restart execution if a failure occurs. CR can also be found in high performance systems [7][8][9][10], operating systems including Linux [11][12][13], databases [14], and distributed systems [15][16][17][18].
Thus checkpoint intervals have been widely studied to maximize system availability and minimize program execution time for transaction-oriented systems [19][20][21], and imbedded multiple level checkpoints introduced in [22,23] were recently studied in [24]. CR [6,25] includes "Application-level Checkpoint and Restart" (ALCR) [26,27], that uses smaller memory space but requires significant programming skills to insert checkpoints in long-running loops [28,29]. Since longer inter-checkpoint intervals increase the required time and energy of system restart, and short intervals increase them due to frequent check-points, the checkpoint interval should be chosen to minimizeboth energy consumption and execution time [30,31].
In recent years the importance of energy savings in information technology and software has been often emphasized [32][33][34][35], and research has addressed the efficient allocation of energy in computer systems [36][37][38] including the use of of server or network node vacations to reduce energy consumption [39], techniques to select Cloud servers based on energy efficiency [40], and the use of renewable energy sources [41][42][43][44]. There has been less work on more detailed techniques such as checkpoints to reduce energy consumption [45,46], or on checkpoint optimization in modern software using ALCR [47,48]. In addition, commonly used tools such as ALCR do not offer assistance in selecting checkpoint intervals to optimize energy consumption or execution time, and a software tool was proposed recently to address this issue [49].
Thus in this paper we focus on analyzing the checkpoint intervals in a unified manner to effect savings in a weighted combination of execution time and energy, since energy consumption is of importance both in autonomously operating platforms and for software running in large scale Cloud data centers [50]. In the sequel, starting from first principles, we develop a mathematical model to estimate the average execution time as well as the energy consumption of a program with long loops that operates in the presence of failures, without and with ALCR. This allows us to compute the checkpoint interval that minimizes the program's energy consumption and average execution time, and its value that can minimize a cost function that is a weighted sum of both elements, expressed via the Lambert Function, with numerical examples that illustrate the results. In addiion, we also apply these results to a well known software benchmark.
The rest of the paper is structured as follows. In Section 2, the mathematical model that estimates the average execution time and energy consumption of a software program that operates in the presence of failures with and without checkpoints is presented. In Section 3, based on this mathematical model, the closed-form expression of the optimum checkpoint interval is derived. In Section 4, we illustrate our results through a set of numerical examples. In addition, Section 5 is devoted to show how our model can be used to select checkpoints for the popular Rodinia Benchmark of real-world open-source software written in C and C++ programming languages, which is widely used for software performance evaluation and energy optimization, and in particular the streamcluster (https: //github.com/yuhc/gpu-rodinia/tree/master/opencl/streamcluster) program. Finally, Section 6 concludes the paper and discusses directions for future work.
A Single Loop Program with Checkpoints
Consider a program P that executes y n instructions between its (n − 1)-th and n-th checkpoint, without counting all possible failures and failure recoveries. Now consider the instant t n > 0 when the program creates its n-th checkpoint, and let Y n denote the total number of instructions that the program has executed by time t n since it started, where Y n does not include all the repeated instructions that were executed due to checkpoints and failure recovery, and obviously: Y n = ∑ n i=1 y i . Let B c (Y n ) be the computation time needed to create the n-th checkpoint. This quantity will generally depend on the total memory space occupied by the program, but in certain cases it may depend on Y n , since the program may generate new data as it is executing. Hence we will write B c (Y n ) = B c 0 + B c 1 Y n where B c 0 > 0 and B c 1 ≥ 0 are constants for the given program.
On the other hand, suppose a failure occurs after the program has successfully executed y instructions after the n-th checkpoint, i.e., after the program has executed Y n + y instructions. If b c (Y n , y) is the computation time needed to restart the program from the most recent checkpoint, when the program has successfully executed y ≤ Y n+1 − Y n instructions after the most recent checkpoint but before the (n + 1) checkpoint, then we will have: Therefore the time duration b c (Y n , y) depends on the number y of instructions that have been executed by the program since the last checkpoint was established. In summary, we are assuming that: • The time B c (Y n ) needed to establish the n-th checkpoint depends on the "age of the program" or the total number of instructions Y n it has executed since the beginning, The time b c (Y n , y) needed to recover from a failure after the n-th checkpoint, including the time related to re-loading system state after the failure, only depends on y ≤ Y n+1 − Y n , the "computation time undertaken by the program since the last checkpoint", i.e., b c (Y n , y) ≡ b c (y) = b c 0 + b c 1 y. Similarly, we denote the energy consumption for creating the n-th checkpoint to be B e (Y n ), and b e (y) is the energy used to recover from a failure after a failure that occurs when the total number of instructions executed is Y n + y ≤ Y n+1 . Also, we will have B e (Y n ) = B e 0 + B e 1 Y n , and b e (y) = b e 0 + b e 1 y with B e 0 > 0, B e 1 ≥ 0 and b e 0 > 0, b e 1 ≥ 0. Let α, β > 0 be positive constants that represent the relative costs of computation time and energy consumption. We can then define the parameters: and the total cost of an instruction can be viewed as the weighted sum of its executon time and of its energy consumption c = αK c + βK e .
Fixed Checkpoint Intervals
Earlier work has shown that "age dependent" checkpoints [51] can reduce the overall cost of checkpointing and failure recovery, when (for instance) the failure rate of a system increases with time. However, most practical checkpointing schemes use a simpler approach where checkpoints are carried out periodically each time the program has executed successfully a predetermined fixed number of instructions y n = y. Thus, in the sequel we will make this assumption so that checkpoints are placed after Y 1 = y, Y 2 = 2y, .. Y n = ny, etc. instructions have been successfully executed, and we will proceed to compute the optimum value of y, assuming that n is fixed in advance.
When the program ends after Y = Ny instructions are executed, a further (N + 1)-th checkpoint is not needed, while the first checkpoint is obviously installed before the first instruction is executed.
We can then formulate our problem as that of a program that executes a total fixed number of instructions Y, where we want to choose the constant value y of the number of instructions between checkpoints, or equivalently we can choose N, the number of checkpoints so that Y = Ny so that the total overhead in additional work and energy consumption due to failures and due to checkpoints is minimized.
For a given y, let us compute K c (y), which is the corresponding total expected execution time including all restarts due to failures, starting from the most recent checkpoint. When the average execution time per instruction is c, and the failure probability per instruction is (1 − a), the total average time elapsed time for the execution of y instructions is: K c (y) = cya y + (b 0 + K c (y))(1 − a y ), because with probability a y a failure does not occur during the y instructions, leading to an execution time of K c y time units, while with probability (1 − a) y at least one failure does occur among the y instructions, and the first of those requires a program re-start time of b c 0 , to which we should add K c (y) representing the effect of all future failures after the program has been re-initialised from the checkpoint.
Also, we have to include the execution time plus the amount of additional work needed per executed instruction, until the failure occurs-hence the term (K c + b c 1 )multiplied by x and the probability that the failure occurs at instruction x which is a x−1 .a, summed over x running from 1 to y. Since we obtain: the total expected energy consumption K e (y) for a number of instructions y after the most recent checkpoint, we similarly obtain the quantity: where K e denotes the average energy consumption per instruction, so that Interestingly enough, we can show using l'Hôpital's Rule, for all y ≥ 1, that: as would be expected. Treating y as if it were a real number, we can compute the derivative of C(y). We first note that for a differentiable function f (y) of the real variable y, we can write: and therefore dC(y) Because a ≤ 1, the quantity − ln a ≥ 0, and since y is large, 1 a y is very large and dC(y) dy > 0.
Minimizing Computation Time and Energy
When we include both the time and energy needed to create each checkpoint, and assuming a fixed number of instructions y executed between successive checkpoints, we can obtain the total cost of the program up to and including the last instruction executed at Y = yN as: The optimum checkpoint interval y * is then the value of y that minimizes κ N (y), the overall cost per unit work that is accomplished, i.e., G N (y) divided by Y = Ny which is the total number of useful instructions executed over this time: Therefore, to seek the optimum value of y, we compute the following derivative and set it to zero: so that the optimum value of y is: we have: To verify that y * is the minimum value, we compute: where C (y), C (y) denote the first and second derivatives of C(y) with respect to y, and Since at y * we have y * C (y * ) = B + C(y * ), we can write: and we need to examine the sign of C (y * ). Starting from (8) we have: which is positive, so that y * is indeed the value of y at the minimum.
The Optimum Checkpoint Using the Lambert Function
Let us first recall the definition of the Lambert Function W(z) [52][53][54][55]. Consider any two numbers z, w, which have the following relation: Thus if we can write z = we w , then w = W(z), and similarly if w = W(z), then z = we w .
Applying (19) to Equation (15), we can write the expression for y * as: which provides an explicit solution for the value of the optimum checkpoint interval y * . Clearly, if we set α = 1 and β = 0, we obtain the optimum checkpoint that simply minimizes the overall execution time, without consideration for the energy consumption. Also, if in the system under consideration the creation of a checkpoint does not depend on the amount of successful computation that the program has accomplished until the time of the checkpoint, then we simply set B c 1 = B e 1 = 0 in the expression for B, so that B = B 0 which is the case that is usually discussed in the literature.
Sensitivity of the Optimum to Energy Consumption and Computation Time
An important question concerns how y * varies with changes in the relative importance of the energy expenditure with respect to computation time. To address this issue as a single parameter problem, we will set α = 1, and consider the derivative of y * with respect to β. Noting that we can now write B = B c + βB e and A = A c + βA e , we have: where we have used the identity: when x = 0 and x = − 1 e . These two conditions will be satisfied because it is unlikely in practice that the system parameters be such that B = A, furthermore it is impossible that Thus we can use the expression (21) to determine how fast y * will vary as a function of β. In particular we have the following very interesting result.
Result:
When B e A c = A e B c , then y * does not depend on the relative weight of the execution time and energy consumption, so that a single value of y * will minimize the overall cost for α = 1 and any value of β that represents the relative importance of energy consumption to computation time.
A Program with a Single Long Loop
In this section, we will apply the previous results to a program with a single long loop of length L instructions which is executed some number, say T times, so that Y = LT. For this program, we may be constrained to place checkpoints either at the start of a loop so that y = m.L with one checkpoint for each m > 1 loops, or n checkpoints may be placed within the loop with L = ny where n > 1, or we set n = 1. We first apply the previous results to compute y * : where: so that Let us denote by I(x) the integer that is closest to the real number x. Then we compute r = L y * , and: • If r ≥ 1 we set n = I(r), • If r < 1, we set n = I( 1 r ). To illustrate these results, numerical examples are provided in order to show the effect of the checkpoint interval n (expressed in terms of the number of loop repetitions between checkpoints) on the expected execution time and the total energy consumption of a software application that operates in the presence of failures. In order to differentiate the effect of computation time and energy consumption, we use n o to represent the checkpoint interval that minimizes the total computation time, while n + refers to the optimum checkpoint interval that minimizes the total energy consumption. Note that in the preceding analysis, n o can be obtained by setting α = 1, β = 0, while n + is obtained by setting α = 0, β = 1.
These examples consider the case of a program with a single loop in which checkpoints are established at the beginning (or at the end) of the loop. We consider a small, medium, large, and very large program, comprised of Y = 10 4 , 10 5 , 10 6 , 10 7 instructions, respectively. The expected execution time of the same program with and without the adoption of the ALCR mechanism is calculated and the corresponding optimization problem is shown numerically. The parameter values that we use are: In Figure 1, the example of a small software program (i.e., Y = 10 4 ) is considered. Figure 1a compares the expected execution time of the application with and without the ALCR mechanism for different values of n, while Figure 1b shows the expected Gain in terms of expected execution time for different values of n. The values that correspond to the optimum checkpoint interval n o are marked within a rectangle. Figure 1 illustrates the fact that the optimum checkpoint interval n o minimizes the overall execution time of the application and maximizes the overall expected Gain. From Figure 1 it is clear that the ALCR mechanism will not reduce the expected execution time of a given software application unless the checkpoint interval is optimally selected. Indeed, for some poorly chosen values of n, the expected execution time of the application with checkpointing is higher than the expected execution time of the same application without checkpoints. For instance in this example, choosing a very small checkpoint interval (i.e., below 5) will actually lead to an increase in the expected execution time of the software program, compared to the execution time of the same program when the checkpointing mechanism is not adopted. This suggests that frequent checkpointing which enhances the reliability of the software program, may result in increases of execution time due to the cost of checkpointing itself.
Similar observations can be made for software with longer loops in Figures 2-4. This emphasizes the importance of setting n to be close or at n o , when there is a need for minimizing the execution time of the program.
The examples of Figures 1-4 show that a significant reduction in the execution time of a software application can be achieved by the ALCR mechanism, if the checkpoint interval is selected to be at, or close to, the optimum n o . In these examples, the Gain ranges from 64% to 80%. However, suboptimal values of the checkpoint interval will lead to a smaller Gain or even to an average execution time, which is larger than when ALCR is not used. Indeed, the checkpoint interval should not be selected arbitrarily and must be tuned to a value at, or close to, the optimum n o . Still, there is a relationship between calculations for n o and n + . However, we must have in mind that the optimum checkpoint interval will be different regarding energy consumption and execution time. Figure 5 shows how they correspond to each other. More specifically, Figure 5a shows how execution time changes when we want to use optimal checkpoint interval calculated for energy consumption. Similarly, Figure 5b shows how energy consumption changes when we want to use the checkpoint interval that optimizes execution time.
The numerical example presented in Figure 5 shows that the checkpoint interval that minimizes the energy consumption does not necessarily minimize the execution time as well and vice versa. In particular, in the given example, setting the value of n to n o will minimize the expected execution time of the software program, but will lead to around half the maximum achievable energy savings. Similarly, setting the value of n to n + will minimize the expected energy consumption of the software program, but will lead to lower than the maximum achievable savings in execution time. Hence, the type of the application should be also taken into account in order to decide, whether to prioritize the execution time or the energy consumption of a given program. It should be noted that the model is highly configurable, which means that the user can define the relative importance of the quality attributes of execution time and energy consumption for a given software program, by properly setting the α and β parameters of the model (see Section 3). This enables the calculation of the checkpoint interval that strikes a desired balance between these two quality attributes.
Impact of g and B on the Optimum Checkpoint Interval
The optimum checkpoint interval n o is expected to be influenced both by the probability of failure g = 1 − a, and by the cost of checkpointing B c = B c 0 . In Figure 6, the optimum checkpoint interval n o is plotted against the probability of failure g, for three different cases of checkpointing cost B c . Four different examples are provided, corresponding to a sample software program of small, medium, large, and very large size. In fact, the same cases of programs that were investigated in Section 4 were considered in this section.
From the different graphs in Figures 6 and 7, we notice that the same behavior is observed regarding the impact that the values of B c and g have on the optimum checkpoint interval, regardless of program size. Indeed for a given checkpointing cost B c , the higher the probability of failure g, the lower the optimum checkpoint interval n o . This means that for a given checkpointing cost, the higher the probability of failure the more frequently the checkpoints should be generated. This is reasonable since the more frequent the failures are the more frequent the checkpointing should be, in order to reduce the cost incurred by the failure-related re-executions. Conversely, for a specific probability of failure g, a higher cost of a single checkpoint B c leads to a larger optimum checkpoint interval n o . This is also reasonable, since the higher the checkpointing cost (given that the frequency of failures is constant) the less frequent the checkpointing, since frequent checkpointing may incur checkpoint-related costs.
These observations are highly intuitive since frequent checkpointing should be applied when the probability of failure is high, while checkpoints should be generated less frequently when the checkpointing cost is high. The same observations hold for the case of the optimum checkpoint interval n + that minimizes the total expected energy consumption of the program.
Demonstration through a Real-World Example
In Section 4, we illustrated the effect of the checkpoint interval n (i.e., the number of loop repetitions between consecutive checkpoints) on the expected execution time and energy consumption of a software program that operates in a failure-prone environment through a set of numerical examples. The results of the simulation led us to the observation that the checkpoint interval should be chosen to be at (or close to) its optimum value (computed by our mathematical model) in order to achieve significant gains with respect to execution time or energy consumption and to avoid potential costs that may be caused by assigning arbitrary values to n.
To enhance the completeness of the present work, we also illustrate the effect of the checkpoint interval selection on the computation time and energy consumption of a realworld software program. More specifically, instead of being based on simulated values, we selected a real-world open-source software program with a configurable computational loop and we determined the required model parameters through actual measurements. Then we used our model in order to compute the optimum checkpoint intervals that optimize the execution time and energy consumption of the selected program for different cases of program size (in fact, loop length). We focused on the execution time and energy savings that could be achieved through the selection of the checkpointing interval using the proposed model.
For the purposes of the present experiment, we used the Rodinia Benchmark (https://github.com/yuhc/gpu-rodinia) [56] as the basis of our analysis. The Rodinia Benchmark is a popular benchmark of real-world open-source software programs written in C and C++ programming languages, which is widely used for benchmarking techniques and mechanisms for software performance and energy optimization. From the different programs that Rodinia contains, we used the streamcluster (https://github.com/yuhc/gpurodinia/tree/master/opencl/streamcluster) program as the basis of our example. The reasoning behind the selection of this program is that it contains a computational loop that is also highly configurable, making it suitable for the purposes of our analysis. In fact, by providing the correct input, the loop can be as lengthy as we wish, allowing us to take different cases of loop length.
To compute the actual parameters that are necessary for the execution of our mathematical model, the Energy Toolbox of the SDK4ED Project was utilized [57,58]. The Energy Toolbox provides measurements of the execution time and energy consumption of a software program at the loop-level of granularity, being mainly based on popular profiling tools like Linux Perf (https://perf.wiki.kernel.org/) and Valgrind (http://www.valgrind.org/) , as well as on static estimations [57,59]. The provision of loop-level performance and energy measurements made it highly suitable for our case, which actually constitutes the main reason for its selection. After executing the Energy Toolbox for the selected software program the following parameters were determined (It should be noted that all the measurements were made on an ARM Cortex A57 (Nvidia Jetson TX1) processor.): As already mentioned, since the benchmark is highly configurable, we considered three cases of loop length (in fact, of program size). In particular, we considered the case of a small, medium, and large loop comprising Y = 5 × 10 5 , Y = 5 × 10 6 , and Y = 10 7 instructions respectively. It should be noted that this characterization is based exclusively on the relative size of the loops that the program contains and it is used to better facilitate the description of the present experiment.
In Figure 8, the example of the program with the small loop is illustrated (i.e., Y = 5 × 10 5 ). Figure 8a compares the expected execution time of the software program with and without checkpointing. Similarly, Figure 8b compares the expected energy consumption of the selected software program with and without the adoption of the ALCR mechanism. The checkpoint interval that minimizes the expected execution time (n o ) and the checkpoint interval that minimizes the expected energy consumption (n + ) are marked within a rectangle in Figure 8a,b respectively. Figure 8 shows that important savings in both the expected execution time and energy consumption are achieved for software program, if the checkpoint interval is selected to be at (or close to) the values of n o or n + respectively computed by the mathematical model. More specifically, if n is selected to be equal to n o , a 74.8% gain in execution time, and a gain of 67.3% in energy consumption is obtained when n is chosen equal to n + . It is very clear that selecting arbitrary values for the checkpoint interval should be avoided, as this may lead to excessive increase in the execution time and energy consumption: i.e. no gain but even additional costs. As can be seen by the given example, if n is set to be less than 3 in Figure 8a, the expected execution time of the program will be higher than its expected execution time when checkpointing is not adopted. Similarly, if n is set to a value lower than 8 in Figure 8b, the expected energy consumption of the program will be higher than the expected energy consumption of the same program when checkpointing is not adopted. This indicates that frequent checkpointing may lead to the introduction of additional costs with respect to execution time and energy consumption. In addition to this, in both cases, if n is set to a value different (lower or higher) than the optimum values n o and n + that are computed by our model, lower than the maximum achievable gains in terms of execution time and energy consumption are achieved, leading to omission of important savings. Hence, this suggests that the arbitrary selection of the checkpoint interval should be avoided, as it may lead to omission of important savings or even introduction of additional costs, and, in turn, it verifies that there is a need for a mechanism (model) for recommending the optimum checkpoint interval.
Similar observations can be made for longer loops in programs as can be seen by Figures 9 and 10. As for the previous case, these examples show that important savings in terms of execution time and energy consumption can be achieved, provided that the checkpoint interval is properly set. Here the maximum execution time savings are 73.12% and 92.21%, whereas the maximum energy savings are 77% and 94.6%, for both medium leng and long loops, respectively. These examples also show that a poorly chosen value for the checkpoint interval may lead to the introduction of additional overhead with respect to the execution time and energy consumption of the software program, highlighting the importance of the choice of an optimum checkpoint interval. These results for a real program example also agree with the "theoretical" conclusions drawn from the numerical examples of Section 4.
Although in our examples it appears that the optimum values for computation time and energy, namely n o and n + , are relatively close to each other, this will not be generally the case, and depending on various parameters these values can differ significantly. Hence, the end user can decide whether execution time or energy consumption should be prioritized by using the parameters α and β. As mentioned in Section 3, by carefully setting these parameters, the model can be used in order to compute the optimum checkpoint interval that optimizes the execution time (α = 1 and β = 0), energy consumption (α = 0 and β = 1), or a weighted combination of those two requirements (α = 0 and β = 0). Hence, the mathematical model presented in this paper can be used in practice to satisfy different user needs with respect to energy consumption and execution time of software programs with loops.
(a) (b) Figure 10. The case of a software program with a relatively large loop (i.e., Y = 10 7 ): (a) Expected execution time comparison (logarithmic axes) (b) Expected energy consumption (logarithmic axes).
Conclusions
Checkpoints are widely used to allow a system to recover from failures without having to restart a program's execution from scratch every time a failure occurs. However, checkpointing may add costs in additional time and energy, even when no failures occur. Thus, we have analyzed the choice of optimum checkpoint intervals in a unified manner from the perspective of energy consumption and execution time. Starting from first principles we have derived the optimum checkpoint for programs with a long running outer loop. Explicit analytic results have been derived and illustrated with numerical examples. The model was also demonstrated using a real-world software program retrieved from a popular benchmark.
More specifically, in this paper, we have focused on the importance of energy consumption on the appropriate choice of checkpoint intervals for long-running programs that require highly reliable operations. To this effect, we have developed a mathematical model that details the manner in which program execution time and energy consumption interact in a system that is subject to the establishment of regularly spaced checkpoint intervals.
The analysis has been used to determine the optimum number of checkpoints that either minimizes total average energy consumption, or total average execution time, or a linear combination of both. The solution to this optimization problem has been shown to relate directly to an expression that includes the classical Lambert function. The sensitivity of the optimum checkpoint interval to variations in all systems and checkpointing parameters has also been computed analytically.
The results were then used to derive the optimum checkpointing interval for a program with a long loop, so that checkpoints are installed either within each loop, or at the beginning of some of the loops. Several numerical examples were presented to illustrate the manner in which this approach could be used in a practical setting, for instance, to guide the choices that need to be made with application-level checkpointing and recovery (ALCR). A real-world example using an actual software program retrieved from the Rodinia Benchmark was also presented.
Both the numerical examples and the example that was based on the real-world software program led to some interesting observations. Firstly, in order to achieve important savings (i.e., gains) in terms of execution time and energy consumption, the checkpoint interval should be chosen to be at (or, at least, close to) its optimum value, as reported by our mathematical model. In addition to this, the arbitrary selection of the checkpoint interval should be avoided, as it may lead to lower than the maximum achievable gains in terms of execution time and energy consumption or even to the introduction of additional overheads. This further supports the need for a mechanism (i.e., a model) able to compute the optimum checkpoint interval. Finally, the results of these examples also highlighted the ability of the proposed model to be used in practice for satisfying different user and application needs with respect to execution time and energy consumption through properly setting its parameters. In fact, the proposed model can be used to compute the optimum checkpoint interval that minimizes its execution time, energy consumption, or a combination of those requirements.
The programs that provide the numerical solutions we have discussed have been made publicly available at the GitHUB repository with Matlab scripts of our mathematical model at https://github.com/siavvasm/optimum-checkpoint-interval.
Future work will consider nested program structures, and ways of linking checkpointing and program structure in a useful manner, similar to what is done in this paper for programs with a large single loop. The impact of multiple programs running on the same platform also needs to be considered. Indeed the ALCR approach deals with each program singly, while the checkpoint for each program dilates the execution time and energy consumption of each individual program, and by extension of the collection of programs, which share the same platform. | 8,269.8 | 2021-01-27T00:00:00.000 | [
"Computer Science"
] |
Two novel species of Calonectria isolated from soil in a natural forest in China
Abstract Species of Calonectria include important pathogens of numerous agronomic and forestry crops worldwide, and they are commonly distributed in soils of tropical and subtropical regions of the world. Previous research results indicated that species diversity of Calonectria in China is relatively high. Most Calonectria spp. reported and described from China were obtained from diseased Eucalyptus tissues or soils in Eucalyptus plantations established in tropical and subtropical areas in southern China. Recently, a number of Calonectria isolates were isolated from soils in a natural forest in the temperate region of central China. These isolates were identified by DNA sequence comparisons for the translation elongation factor 1-alpha (tef1), histone H3 (his3), calmodulin (cmdA) and β-tubulin (tub2) gene regions, combined with morphological characteristics. Two novel species of Calonectria were identified and described, and are named here as Calonectria lichi and Ca. montana, which reside in the Prolate Group and Sphaero-Naviculate Group, respectively. This study revealed that more species of Calonectria may occur in natural forests in central China than previously suspected.
Introduction
Calonectria species include many notorious plant pathogens and are widely distributed in tropical and subtropical areas of the world (Crous 2002, Lombard et al. 2010d, Aiello et al. 2013, Vitale et al. 2013, Alfenas et al. 2015).These species can cause serious plant epidemics on a wide range of plant hosts (Peerally 1991, Schoch et al. 2001, Crous 2002), and result in considerable economic losses to agriculture and forestry.Example include shoot blight on Pinus spp. in South African nurseries (Crous et al. 1991), root rot on Myrtus communis in Tunisia (Lombard et al. 2011), and leaf blight on Buxus sempervirens in Iran (Mirabolfathy et al. 2013).In addition, members of the genus Calonectria are responsible for red crown rot of Glycine max (soybean) in Japan (Yamamoto et al. 2017), fruit rot of Nephelium lappaceum (rambutan) in Puerto Rico (Serrato-Diaz et al. 2013) and root rot of Arbutus unedo (strawberry) in Italy (Vitale et al. 2009).As an important fast-growing tree species, Eucalyptus plays a significant role in the global pulpwood supply.Previous research showed that Calonectria leaf blight (CLB), associated with several species of Calonectria, is considered to be one of the most prominent Eucalyptus leaf diseases that has occurred in numerous countries such as Brazil (Alfenas et al. 2015, Lombard et al. 2016), China (Zhou et al. 2008, Chen et al. 2011), Colombia (Rodas et al. 2005), India (Sharma et al. 1984) and Vietnam (Old et al. 1999).Other fungal diseases of Eucalyptus spp.caused by Calonectria species include dampingoff, shoot blight, and root rot, which have been observed in Brazil (Ferreira 1989) and South Africa (Crous et al. 1991), and these diseases have received considerable attention.
Calonectria spp.are soil-borne fungi, they can form microsclerotia in soil and infected plant roots, stem and leaves as primary inoculum.After diseased tissues decompose or the plants are harvested, microsclerotia are released into the soil, which allows them to survive for extended periods even up to 15 years or more (Sobers andLittrell 1974, Crous 2002).Species of Calonectria are also rapidly dispersed via aerial dissemination and water movement, which leads to the transmission of Calonectria disease (Vitale et al. 2013).Based on previous studies, at least 145 Calonectria species have been identified using molecular data and have been described worldwide (Crous 2002, Crous et al. 2004, 2006, 2012, 2013, 2015, Lombard et al. 2010a, b, c, 2011, 2015, 2016, Chen et al. 2011, Xu et al. 2012, Alfenas et al. 2013a, b, 2015, Gehesquière et al. 2015).Sixty species were isolated from soil samples collected in subtropical or tropical regions (Crous 2002, Crous et al. 2004, Lombard et al. 2010a, b, c, 2015, 2016, Chen et al. 2011, Xu et al. 2012, Alfenas et al. 2015).
In China, Calonectria has a relatively high species diversity, and to date, 28 Calonectria species have been identified and described.Based on previous studies, Calonectria species have been reported in nine provinces and one Special Administrative Region (SAR), which with the exception of LiaoNing and ShanDong Provinces belong to temperate regions (Luan et al. 2006, Li et al. 2010).Most Calonectria have been isolated from agronomic crops or forestry plantations in subtropical and tropical regions, including FuJian, GuangDong, GuangXi, GuiZhou, HaiNan, JiangXi and YunNan Provinces, as well as Hong Kong SAR (Crous et al. 2004, Lombard et al. 2010a, 2015, Chen et al. 2011, Gai et al. 2012, Xu et al. 2012, Pei et al. 2015).
China has large areas of plantation and natural forests.To date 27 Calonectria species have been isolated from Eucalyptus tissues with CLB/leaf rot symptoms or from soils originating from Eucalyptus plantations in tropical or subtropical areas in Fu-Jian, GuangDong, GuangXi and HaiNan Provinces (Crous et al. 2004, Lombard et al. 2010a, 2015, Chen et al. 2011).However, little information is known about the species diversity of Calonectria in natural forests.In this study, a number of soil samples were collected from a natural forest in the temperate region of central China, and baited with alfalfa seeds for Calonectria.The aim of the current study was to identify these isolates using a combination of phylogenetic analyses and morphological characteristics and to gain a preliminary understanding of the species diversity of Calonectria in natural forests in China.
Fungal isolates
In April 2016, 17 soil samples were collected from a natural forestry area in central China.The collected soils were baited with surface-disinfested (30 s in 75% ethanol and washed several times with sterile water) Medicago sativa (alfalfa) seeds using the method described by Crous (2002).After one week, sporulating conidiophores were produced on infected alfalfa tissue.Using a dissection microscope AxioCam Stemi 2000C (Carl Zeiss, Germany), conidial masses were selected and scattered onto 2 % malt extract agar (MEA) (20 g malt extract powder and 20 g agar powder per liter of water: malt extract powder was obtained from Beijing Shuangxuan microbial culture medium products factory, Beijing, China; the agar powder was obtained from Beijing Solarbio Science & Technology Co., Ltd., Beijing, China) using sterile needles.After incubation at 25 °C for one day, germinated spores were individually transferred onto fresh MEA under the dissection microscope and were incubated at 25 °C for one week.
Single conidial cultures were deposited in the Culture Collection of the China Eucalypt Research Centre (CERC), Chinese Academy of Forestry (CAF), ZhanJiang, GuangDong Province, China.Representative isolates were stored in the China General Microbiological Culture Collection Center (CGMCC), Beijing, China.The specimens (pure fungal cultures) were deposited in the Collection of Central South Forestry Fungi of China (CSFF), GuangDong Province, China.
DNA extraction, PCR and sequence reactions
Single conidial cultures grew on MEA for one week at 25 °C, after which actively growing mycelium was scraped using a sterilized scalpel and transferred into 2 mL Eppendorf tubes.Total genomic DNA was extracted following the protocols "Extraction method 5: grinding and CTAB" described by Van Burik et al. (1998).The extracted DNA was dissolved in 30 µL TE buffer (1 M Tris-HCl and 0.5 M EDTA, pH 8.0), and a Nano-Drop 2000 spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) was used to quantify the concentration.
Amplified fragments were sequenced in both directions using the same primer pairs used for amplification by the Beijing Genomics Institute, Guangzhou, China.Sequences were edited using MEGA v. 6.0.5 software (Tamura et al. 2013).All sequences of the isolates obtained in this study were submitted to GenBank (http://www.ncbi.nlm.nih.gov)(Table 1).
Phylogenetic analyses
The sequences generated from this study were added to other sequences of closely related Calonectria species downloaded from GenBank for phylogenetic analyses.All sequences used in this study were aligned using the online MAFFT v. 7 (http://mafft.cbrc.jp/alignment/server) with the alignment strategy FFT-NS-i (Slow; interactive refinement method).The aligned sequences were manually edited using MEGA v. 6.0.5 and were deposited in TreeBASE (http://treebase.org).
Phylogenetic analyses were conducted on individual tef1, his3, cmdA and tub2 sequence datasets and on the combined datasets for the four gene regions, depending on the sequence availability.Two methods, maximum parsimony (MP) and maximum likelihood (ML) were used for phylogenetic analyses.
MP analyses were performed using PAUP v. 4.0 b10 (Swofford 2003), gaps were treated as a fifth character, and characters were unordered and of equal weight with 1000 random addition replicates.A partition homogeneity test (PHT) was conducted to determine whether data for the four genes could be combined.The most parsimonious trees were acquired using the heuristic search option with stepwise addition, tree bisection, and reconstruction branch swapping.MAXTREES was set to 5,000, and zero-length branches were collapsed.A bootstrap analysis (50% majority rule, 1,000 replicates) was carried out to determine statistical support for internal nodes in trees.The tree length (TL), consistency index (CI), retention index (RI) and homoplasy index (HI) were used to assess phylogenetic trees (Hillis and Huelsenbeck 1992).ML analyses were performed using PHYML v. 3.0 (Guindon and Gascuel 2003), and the best evolutionary model was obtained using JMODELTEST v. 2.1.5(Posada 2008).In PHYML, the maximum number of retained trees was set to 1,000, and nodal support was determined by non-parametric bootstrapping with 1,000 replicates.
Based on the morphological characteristics, datasets were separated into two groups: the Prolate Group and the Sphaero-Naviculate Group (Lombard et al. 2010b), and therefore phylogenetic analyses were performed with two separate sequence datasets.Calonectria hongkongensis (CBS 114711 and CBS 114828) and Ca.pauciramosa (CMW 5683 and CMW 30823) represented the outgroup taxa for the Prolate Group and Sphaero-Naviculate Group, respectively.The phylogenetic trees were viewed using MEGA v. 6.0.5 for both MP and ML analyses.
Sexual compatibility
Based on multi-gene phylogenetic analyses, isolates of each identified Calonectria species were crossed with each other in all possible combinations.Crosses were performed on minimal salt agar (MSA; Guerber and Correll 2001) on the surface of the medium using three sterile toothpicks.Isolates crossed with themselves were regarded as controls.These crosses were used to determine whether the identified species had a heterothallic or a homothallic mating system.The cultures were incubated at 25 °C for six weeks.When isolate combinations produced extruding viable ascospores, crosses were considered successful.
Morphology
To determine the morphological characteristics of the asexual morphs, representative isolates identified by DNA sequence comparisons were selected.Agar plugs from the periphery of actively growing single conidial cultures were transferred onto synthetic nutrient-poor agar (SNA; Nirenburg 1981) and incubated at 25 °C for one week (there were five replicates per isolate).Asexual structures that emerged on the surface of the SNA medium were mounted in one drop of 80% lactic acid on glass slides and examined under an Axio Imager A1 microscope (Carl Zeiss Ltd., Munchen, Germany) and an AxioCam ERc 5S digital camera with Zeiss Axio Vision Rel.4.8 software (Carl Zeiss Ltd., Munchen, Germany).Sexual morphs were studied by transferring perithecia obtained from the sexual compatibility tests into a tissue-freezing medium (Leica Biosystems, Nussloch, Germany) and were hand-sectioned using an HM550 Cryostat Microtome (Microm International GmbH, Thermo Fisher Scientific, Wall-dorf, Germany) at -20 °C.The 10-µm sections were mounted in 80% lactic acid and 3% KOH.
Fifty measurements were made for each morphological structure of the isolates selected as the holotype specimen, 30 measurements were made for the isolates selected as the paratype specimen.Minimum, maximum and average (mean) values were determined and presented as follows: (minimum-) (average -standard deviation) -(average + standard deviation) (-maximum).
The optimal growth temperature of the Calonectria species was determined by transferring the representative isolates to fresh 9 mm MEA Petri dishes, which were incubated under temperatures ranging from 5 to 35 °C at 5 °C intervals in the dark (there were five replicates per isolate).Colony colors were determined by inoculating the isolates on fresh MEA at 25 °C in the dark, after seven days incubation, a comparison was performed using the colour charts of Rayner (1970).
Fungal isolates
A total of 40 isolates with the typical morphological of Calonectria species were obtained from the infected alfalfa tissue cultivated in the soil samples.Based on preliminary phylogenetic analysis of the tef1 gene region (data not shown), 16 isolates from all soil samples were selected for further study (Table 1).
Phylogenetic analyses
Sequences for the 78 ex-type and other strains of 48 Calonectria species closely related to isolates obtained in this study were downloaded from GenBank (Table 1).For the 16 isolates collected in this study, nine resided in the Prolate Group, and seven were clustered in the Sphaero-Naviculate Group.Phylogenetic analyses of individual tef1, his3, cmdA and tub2 and the combined sequence datasets were conducted using both MP and ML method.For both the Prolate and Sphaero-Naviculate Groups, although the related position of some Calonectira species were slightly different between the MP and ML trees, the overall topologies were similar, and the ML trees were exhibited.
For the Prolate and Sphaero-Naviculate Groups, the PHT comparing the combined tef1, his3, cmdA and tub2 gene datasets generated P values of 0.141 and 0.333, respectively, which indicated that no significant difference existed between these datasets.These datasets were consequently combined and subjected to phylogenetic analyses.For each of the two groups, the sequence alignments of tef1, his3, cmdA, tub2 and the combination of the four genes were deposited in TreeBASE (TreeBASE No. 21357).The number of parsimony informative characters, the statistical values for the phylogenetic trees of the MP analyses, and the parameters for the best-fit substitution models of ML analyses are shown in Table 2.
Phylogenetic analyses of each of the individual and combined sequence datasets indicated that in the Prolate Group, the nine isolates resided in the Ca.colhounii species complex and were closely related to Ca. colhounii, Ca. eucalypti, Ca. fujianensis, Ca. nymphaeae, Ca. paracolhounii and Ca.pseudocolhounii.In the his3 and cmdA phylogenetic trees, the nine isolates and Ca.fujianensis were clustered in the same clade (Suppl.materials 2, 3), while in the trees based on the tef1 and tub2 sequences, the nine isolates formed an independent clade (Supplementary Figures 1, 4).Based on the phylogenetic analyses of the combined sequences of the four genes, the nine isolates formed a new, strongly defined phylogenetic clade that was distinct from other Calonectria species and was supported by high bootstrap values (ML = 94%, MP = 93%) (Figure 1).Fixed unique single nucleotide polymorphisms (SNPs) were identified in the new phylogenetic clades of the nine isolates and their phylogenetically closed Calonectria species (Table 3).The total number of SNP differences between the new clade and the other closely related species varied between 10-34 for all four gene regions combined (Table 4).The results of these phylogenetic and SNP analyses indicate that the nine isolates in the Prolate Group represent a distinct, undescribed species.
Phylogenetic analyses of each of the individual and combined datasets indicated that in the Sphaero-Naviculate Group, the seven isolates were clustered in the Ca.kyotensis species complex and were closely related to Ca. canadiana.In the tef1 phylogenetic trees, the seven isolates were grouped in the same clade with Ca. canadiana (Suppl.material 5).In the phylogenetic trees based on the his3, cmdA and tub2 sequences, the seven isolates formed an independent clade distinct from Ca. canadiana and other species in the Ca.kyotensis species complex (Suppl.materials 6, 7 and 8).Based on the combined sequences of the four genes, the seven isolates formed a strongly defined phylogenetic clade that was distinct from Ca. canadiana and was supported by high bootstrap values (ML = 100%, MP = 100%) (Figure 2).The seven isolates obtained in this study were distinguished from Ca. canadiana using SNP analyses for each of the tef1, his3, cmdA and tub2 gene region sequences (Tables 5).The total number of SNP differences between the seven isolates and Ca.canadiana for all four genes was 51 (Table 6).The results indicate that the seven isolates in the Sphaero-Navivulate Group represent a novel species.
Sexual compatibility
After a six-week mating test on MSA, all 16 isolates and the crosses of isolates of each identified species failed to yield sexual structures, indicating that they were either selfsterile (heterothallic) or had retained the ability to recombine to produce fertile progeny.| Fixed polymorphisms for each group are shaded and in bold, those fixed but shared between two or more groups are only shaded.¶ "N/A" represents sequences that are not available.The order of the four genes: total (tef1, his3, cmdA and tub2).‡ "NA" represents sequences that are not available.
Taxonomy
Based on DNA sequence comparisons, the 16 isolates collected in this study presented two strongly defined phylogenetic clades in both the Prolate Group and the Sphaero-Naviculate Group.Morphological differences were observed between each phylogenetic clade and its phylogenetically closed species, especially with respect to the size of the macroconidia (Table 7).Based on the phylogenetic analyses, as well as morphological characteristics, the fungi isolated from the soil in this study represent two novel species of Calonectria, they are described as follows: Calonectria lichi Q.L. Liu & S.F.Chen, sp.nov.MycoBank MB821348 Figure 3 Etymology.lichi, which is Calonectria in Chinese.
Calonectria montana Q.L. Liu & S.F.Chen, sp.nov.MycoBank MB821349 Figure 4 Etymology.montis, meaning mountain in Latin, referring to the location where this fungus was collected.
Diagnosis.Calonectria montana can be distinguished from the phylogenetically closely related species Ca. canadiana by the size of macroconidia.
Culture characteristics.
Colonies forming abundant buff and wooly aerial mycelium on MEA at 25 °C after seven days, with feathery, irregular margins at the edges, sporulation moderate and more concentrated in the colony centre.Surface with buff to sienna (8) outer margins, reverse sienna (8) to umber (9), and chesnut (9'm) inner region, abundant chlamydospores throughout the medium, forming microsclerotia.Optimal growth temperature at 30 °C, no growth at 5 °C and 35 °C, after seven days, colonies at 10 °C, 15 °C, 20 °C, 25 °C and 30 °C reached 22.9 mm, 31.5 mm, 51.1 mm, 61.9 mm and 77.2 mm, respectively, this is a high-temperature species.
Discussion
This study identified two novel species of Calonectria from soil in a natural forest in the temperate region of central China.The identification of the fungi was supported by DNA sequence comparisons and morphological features.The two species were named Calonectria lichi and Ca.montana.
Calonectria lichi is a new addition to the Ca.colhounii complex that belongs to the Prolate Group.Based on phylogenetic analyses of four gene sequences, Ca. lichi formed a distinct and well-supported phylogenetic clade closely related to Ca. fujianensis, Ca. nymphaeae and Ca.paracolhounii, but it can be distinguished from these species by its larger macroconidia.To date, 10 species in the Ca.colhounii complex have been identified and described.Other than Ca.lichi described in this study, the other species include Ca. colhounii, Ca. eucalypti, Ca. fujianensis, Ca. macroconidialis, Ca. monticola, Ca. nymphaeae, Ca. paracolhounii, Ca. parva and Ca. pseudocolhounii (Crous 2002, Lombard et al. 2010b, 2016, Chen et al. 2011, Xu et al. 2012, Crous et al. 2015).Of these species, Ca. colhounii, Ca. eucalypti, Ca. fujianensis, Ca. nymphaeae and Ca.pseudocolhounii have been shown to be homothallic and always produce bright yellow perithecia (Crous 2002, Lombard et al. 2010b, Chen et al. 2011, Xu et al. 2012).In China, four species in the Ca.colhounii complex have been reported: except for Ca.lichi, which was isolated from a natural forest in the temperate zone in central China, the other species, including Ca. fujianensis, Ca. pseudocolhounii and Ca.nym-phaeae, were previously isolated from tropical or subtropical regions in southern China (Chen et al. 2011, Xu et al. 2012).
Calonectria montana adds a new species to the Ca.kyotensis complex that belongs to the Sphaero-Naviculate Group.Phylogenetic analyses showed that Ca. montana, which formed an independent clade with a high bootstrap value, is closely related to Ca. canadiana.Morphological differences were observed between Ca. montana and Ca.canadiana, especially with respect to the size of the macroconidia and the shape of the vesicles (Kang et al. 2001, Crous 2002).Species in the Ca.kyotensis complex are characterized by having sphaeropedunculate vesicles with lateral stipe extensions on a conidiogenous apparatus (Crous et al. 2004, Lombard et al. 2010b, 2015, 2016).No lateral stipe extensions were produced by Ca. montana, indicating that this species is different from other species in the Ca.kyotensis complex.In this study, Ca. montana was isolated from soil in central China, 14 species residing in the Ca.kyotensis complex were previously reported in China, and all of them were isolated from soil in southern China (Crous et al. 2004, Lombard et al. 2015).The results from this study suggest that more species in Ca. kyotensis complex have yet to be discovered from China.
Species of Calonectria are important plant pathogens that can cause devastating diseases on various plant hosts worldwide, especially on horticultural, agronomic and forestry crops (Polizzi et al. 2001, 2009, Crous 2002, Saracchi et al. 2008, Chen et al. 2011, Pan et al. 2012).In China, Calonectria species have been reported as pathogens of various important agronomic and forestry crops.In agriculture, the Fabaceae and Arecaceae plant families are susceptible to infection by Calonectria species, including Ca. ilicicola, which causes black rot (CBR) of Arachis hypogaea (peanut) and Medicago sativa (Gai et al. 2012, Pan et al. 2012, Pei et al. 2015), Ca. ilicicola causes red crown rot of Glycine max (soybean) (Guan et al. 2010), and Ca.colhounii and Ca.pteridis cause leaf spot on Phoenix canariensis and Serenoa repens, respectively (Luo et al. 2009, Yang et al. 2014).In forestry, leaf blight caused by Calonectria species is considered as one of the most serious threats to Eucalyptus plantations and nurseries in southern China (Zhou et al. 2008, Lombard et al. 2010a, Chen et al. 2011).The leaf inoculations showed that all tested Calonectria species were pathogenic to the tested Eucalyptus clones, including the clones that are widely planted in southern China (Chen et al. 2011, Li et al. 2014a, b).These research results suggest that species of Calonectria need to be monitored carefully, both in agronomic crops and forests.
Accurate diagnosis of plant diseases and identification of their casual agents provide the foundation for developing effective disease management strategies (Booth et al. 2000, Crous 2002, Old et al. 2003, Vitale et al. 2013, Wingfield et al. 2015).
Based on previous research results, the majority of Calonectria species identified and described in China were isolated from diseased plant tissues or soil under forestry plantations in subtropical and tropical regions (Crous et al. 2004, Lombard et al. 2010a, 2015, Chen et al. 2011).In this study, two novel Calonectria species were described, and they were isolated from soil in a natural forest in the temperate zone.The results from this study suggest that more extensive surveys need to be conducted to collect Calonectria in more geographic regions with different climate zones, which will help to clarify the species diversity of Calonectria in China.
Figure 1 .
Figure 1.Phylogenetic tree of Calonectria species in the Prolate group based on maximum likelihood (ML) analysis of combined DNA dataset of tef1, his3, cmdA and tub2 gene sequences.ML and MP (maximum parsimony) bootstrap values (ML/MP) are shown above branches, with bootstrap values below 60 % marked with an *, and absent analysis values are marked with -.Isolates representing ex-type material are marked with "T", isolates highlighted in bold were sequenced in this study and novel species were covered in blue.The tree was rooted to Ca. hongkongensis(CBS 114711 and CBS 114828).
only in all of the isolates are shown, not alleles that partially occur in individuals per phylogenetic group.‡ Numerical positions of the nucleotides in the DNA sequence alignments are indicated.§ Ex-type isolates are indicated in bold.
Figure 2 .
Figure 2. Phylogenetic tree of Calonectria species in the Sphaero-Naviculate group based on maximum likelihood (ML) analysis of combined DNA dataset of tef1, his3, cmdA and tub2 gene sequences.ML and MP (maximum parsimony) bootstrap values (ML/MP) are shown above branches, with bootstrap values below 60 % marked with an *, and absent analysis values are marked with -.Isolates representing ex-type material are marked with "T", isolates highlighted in bold were sequenced in this study and novel species were covered in orange.The tree was rooted to Ca. pauciramosa (CMW 5683 and CMW 30823).
Table 1 .
The species of Calonectria used in this study.
Table 2 .
Statistics resulting from phylogenetic analyses.
Table 3 .
Single nucleotide polymorphism comparisons in four gene regions between Calonectria lichi and the phylogenetically closest related species.
Table 4 .
Number of unique alleles found in Calonectria lichi and the phylogenetically closest related species in total and in the four gene regions.
Table 6 .
Number of unique alleles found in Calonectria montana and Ca.canadiana in total and in the four gene regions.
Table 7 .
Morphological comparisons of Calonectria lichi, Ca. montana and their phylogenetically closely related species. | 5,629.4 | 2017-08-22T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
“ Hope will die at last ” : an interview with Wolfgang Schirmacher 1
To Wolfgang Schirmacher philosophy is about reading in the spirit of , so that we may follow the logic of the phenomenon that shows itself to us. It is in this spirit of phenomenology Schirmacher asks whether Martin Heidegger ’ s diagnosis of our age – that we live under a Gestell , or fix, of technology – is sufficient. Should we not consider the supplementary notion of technology as an event ( Ereignis ) of becoming into our own existence? We have an inborn character that is unassailable and yet unknown to us until the day we perish, and from such an ethical perspective – and in distinction to deontological views – Schirmacher rejects science ’ s promise never to clone humans. He regards such a declaration as “only valid until it’s possible.” Rather, he regards our future as one in which humans will be allowed to procreate for as long as it doesn ’ t interfere unambiguously with the functioning of the machines, “and during that interim the poor humans living there will still have hope.”
studies programme at the European Graduate School in Saas-Fee, Switzerland. Attracting a range of world-class scholars, such as Jean-François Lyotard, Jacques Derrida, Paul Virilio, Slavoj Žižek, Mike Figgis, Judith Butler, Avital Ronnell, Catherine Malabou, Giorgio Agamben, Graham Harman, Jean-Luc Nancy, etc., Schirmacher stayed on as programme director, teaching his own classes and supervising graduate students, until 2015.
A note on transcription: The attempt here is to capture Schirmacher's speech in its authentic idiosyncracy, including Germanicisms, quaint word order and insertions of German words and phrases directly into English. We should understand this kind of speech in its specific historical context, as utterances of a European thinker shaped by the division of Germany and the new alignments that was made possible with the fall of the former Socialist Republics.
Philosophy in our time
What is the role of philosophy and the philosopher in our world?
I always refer to Nietzsche -the peachy Nietzsche -here, in that the philosopher is the most dangerous person in the world. When the philosopher comes into the room everything can change. I am a philosopher in the tradition of Socrates and Diogenes. When they were asked to explain something, in the end they understood that they know nothing.
That is why it is not a problem to read [Heidegger] said: only [then] is it right. That is why Heidegger is, no question about it, the most influential philosopher of the last century, and he became it because he was re-reading the entire history of Western philosophy. He was sitting in the Black Forest during the Nazi time. He was not in Berlin, he refused to go to Berlin, but he stayed in this hut in the Black Forest where he was reading Aristotle and Thomas Aquinas and everyone.
But reading in the spirit of: not "how can I explain this [to some] person very well," but "can I still see what these people have seen?" And if not, how can I change my gaze, my view, in order to get into the viewpoint of Plato? Obviously, we can never be Plato, but we can -and [Hans-Georg] Gadamer, Heidegger's student, said, that clearly we can -fuse our horizon with the horizon of Plato [when] we encounter his writings. 4 And this fusion might not be a big deal. [However] it might be that just a different understanding of a certain Platonic dialogue makes all the difference in a world where people have already settled on answers, especially in an ideological world in which you are defamed and will be accused if you are not following what is now the spirit of the day.
This brings us directly to our next question. There is so much talk of the different demands of various bodies in today's institutions. In your view what are the social and institutional necessities of doing philosophy?
I think you are turning it upside down. It begins with that we are all born philosophers. That is the real basis of everything. The moment we ask "Why?" "What is blue?" "Why do you hate me?" We ask all the children's questions. We even have a field in philosophy called children's philosophy. It teaches with children how very basic questions are coming out of children's mouths. Because they don't feel anything, they just ask it. And when we say that things, well, it just is like that, they say, "Why is it like that?" So they can make you crazy. But that is what a philosopher does. We are born philosophers.
Then you have trained philosophers, [those] people who studied philosophy. But these are not the best ones, because that is their problem. When I was a young professor and had a class on Hegel I asked [the students] a July 2018 -Volume 1 -Article 19 3 question. I would get an answer out of Hegel, a Hegel quote. My reaction was always "Don't quote Hegel at me." Tell me what you think. Say it in your words. Or even with Heidegger: people came to Heidegger after the war when he was famous, and rang the bell and Heidegger came out and asked "What do you want?" And they would answer "We wanted to meet you." Then Heidegger would say "You have met me," and then he closed the door. And you know one of the worst things Heidegger could say in his class to a student was "Don't Heidegger me." No, it's about the moment when you get away from it. That's why [to write] my own work, Ereignis Technik, I needed ten years, and in [those] ten years I had forgotten all of Heidegger. 5 So the reviewer later wrote that he never knew where Heidegger ended and Schirmacher began because I did not know and I did not care. What I wrote was all Schirmacher. Later I made the joke that Heidegger had "stolen" this idea from me. But it wasn't stolen, it was just that I had seen the same thing, and in phenomenology you don't have to quote, you only have to follow the logic of the phenomenon that shows itself to you.
So that was OK, my inner thing was that I said in my dissertation that "I begin where Heidegger ended." I never did what most dissertations on Heidegger did: [to] re-write again what Heidegger has written much better. [I had] to understand first where Heidegger had ended, in my case his philosophy of technology, the Gestell, the fix, which is the negative understanding of technology, of instrumental technology. My question was "Is that enough?" [to acknowledge] the Gestell, that everything is there already fixed in our world and we can see it? Heidegger said [that] it will be with us for a thousand years, instrumental technology. And when you look around, "Hallo Google" and "Hallo Amazon", "Hello Microsoft" and whoever else you are, you are -and you are not -agents of the instrumental technology. But he said ah, there is another word called Ereignis, the event of becoming into your own existence, and don't stay there, but open it up and go to new adventures in the process of living. Instrumental technology cannot stop this.
In [my] early years I was one of the philosophers of the Greens, in their first years. The Greens liked my criticism of technology, and I followed Heidegger here, but they did not like that I said that what can kill us can also keep us alive. It is the same technology that kills us and [damages] our environment [that] also must have another side which keeps us alive. There is no saving power except in the same power which destroys us. And this is something Heidegger never explains. There's only one little reference in his texts, from a seminar with Eugen Fink, a private seminar, in which he claims the Ereignis is the negative of Gestell. 6 And negative in such a way -which is very hard for young people today to comprehend in the age of iPhone photography -[that we get from] the old way of photography; the context shows that what he means is that what you did would be to make a negative first and then [from] the same film, after you put some stuff on it, turn [it] into an image.
As a student in my apartment in Hamburg I had one little room that I shared with another student there where I made my own images. Usually it was nude photos of my wife, because at that time one was not able to store images [laughs]. It was very innocent, but nevertheless, nude was nude, and at that time it was not allowed. This is the Ereignis. The Ereignis was a nude photo. And the negative of that is the instrumental technology.
In some ways it was Herbert Marcuse -[who experienced] Heidegger's coming to him and [becoming] one of his best students, but spoiled and corrupted by Freud -who in the philosophy of technology was a bridge from Heidegger to me. 7 Actually I met him once as a young student. At the University of Berlin there was an evening lecture with a German professor: [when] the door opened a blonde and brown Marcuse came [in] with a bottle of wine in one hand and in the other a blonde graduate student from California. That was the moment in which I 5 Wolfgang Schirmacher's 1980 dissertation Ereignis Technik: Heidegger und die Frage nach der Technik (The Event of Technology: Heidegger and the Question Concerning Technology) was published as the two volumes Technik und Gelassenheit (Technology and Releasement, 1983) and Ereignis Technik (The Event of Technology, 1990
Artificial life
To return to your own philosophy, can you explain what you mean by the concept artificial life and how it is different from the more well-known notion of artificial intelligence (AI)?
Artificial intelligence is not artificial life. The one who is using artificial life in his work, the physicist, well, we found out that he had a totally different understanding of it. For him artificial life meant a computer that could generate its own kids so to speak, that it could generate new programs. That is what we now expect of machine learning and these other notions of self-programming, and this is what they promised us already in the 90s and it never came, but now they are coming, and there is no question about it.
But that is not artificial life. It is not a life to see that a program can design a different offspring programme. That is just a language cover. Because "living" and "life" -at least in our understanding, and we have a right to our antropomorph understanding of the word -we cannot escape it, it is a necessity. It has to do with the fullness of body and mind in an environment. The exact definition of what is a human being can never be just you and me and social stuff. "I am Dresden! I am Germany! I am everything, you see? I am the stars! It all has to be me." If you reduce subjectivity to this little guy here. That is totally... In phenomenological terms there is not the evidence for it here. The evidence is that there are differences in between our different forms of living, in our ways of being human, but they are all together. There is a totality, and that is actually what is connected to the term Gelassenheit [releasement].
Gelassenheit means letting be into, and this refers to everything. This is just hard to understand if you, say, must be in my power. So the stars, say, are not in my power. There is the power of the stars. Right? Because if only one iota, one little piece was missing then the entire universe would collapse. And that is very true. Take computer systems now: they are so primitive compared with the natural systems of the universe. We're only coming very slowly to this idea where we need to make the world green so that we can... It's changed so little! It is who we really are.
And this paper I gave at the Schopenhauer and Nietzsche conference -my first conference -there I made the point that it was Nietzsche who understood that everything and nothing is exactly the same. They are just different ways of recognizing and getting into the mood of it. But it's just an indication that all this -what with Schopenhauer we can think of as the principle of individuation -this suffering in the world: it's not us. You suffer and I'm a different person. This is the Principium Individuationis, so that it cannot happen to me. But in fact it will happen to me in the next moment or some other time. So there is no barrier, no protection, out there. That was Schopenhauer's negative understanding. Like Schopenhauer, Nietzsche is someone who shows that this will-to-power, the aesthetic will-to-power, to generate an entire world, well, anything is possible. If you can make it, but we are also very ohnmächtig, powerless, so it's not really that we can make all that, just that we...
So what is power, Wolfgang?
The definition of power, as you know, is that things happen the way you want them to.
So the power lies in the will: the will of the human or the will of the subject?
Yes, but what is will? How do you get to will and from whom?
What about self-mastery? Is that will, or is it lack of will, or is it will not-to-will? How are we going to link the notions of mastery and power? This is an illusion. It works as it does with the media: its secret task is to play all the stuff -the newspapers -I'm waiting here to read my newspaper, it's totally senseless, it only shows that nearly every article has some connection to something I have experienced before, so it's a kind of self-enjoyment, if you want. It is what [Gilles] Deleuze referred to as self-enjoyment: it has no value in itself. It is just the process of it. But what has a value? Life itself? Well, I have no doubt that life has no value whatsoever, because you should not get old. It's the next worst kept secret, the "golden age", it's just... [laughs] Is this not the ultimate nihilism, though, when you say that life has no value?
No value as such. When you say value it indicates that somebody values something, what I want or what I like.
What you said earlier was that when someone puts a gun to your head and asks you what is important to
you, your answer would be life. Doesn't your answer here mean that life has value for that person and that life is the ultimate value?
Yes, but it is a life in which I do not get food and I do not get a good girl, it has no... It is just life as such. It is not a need in there. And it's actually something that you don't have any power over anyway. You have only power over death. That is why death is the last frontier. The State will not allow you to kill yourself. Well, they allow you now but nobody is allowed to help you. They cannot give you the right medication and things like that. Hopefully in a few more years it will be over.
But on the other hand all my life I have held that suicide is the only pension plan for a philosopher, for a living philosopher. And now I face this possibility and I don't like it. It was a stupid idea, actually! [laughs] One should rob a bank at an earlier stage, so that at least... [laughs] Well, it's like with the case of my mother who is now 100 years old. Her body is so fragile and she needs the help of other people, for everything, etc. Is that really a life worth living? But she does it. And she does it by not thinking about it. She is living day by day. When she falls down she just automatically rehabilitates. They will not give any operations to the old. Even at my age the doctors will not do anything. They say, "You're too old now to get anything." Well, what I'm saying is that you cannot say that life is a value because it has no... You don't know what it is, really, we breathe... but there are so many bad things all the time and more and more.
And you know the Homo generator, I promised to my son there would be a book out called Homo generator dedicated to him, and he has waited now 26 years for it. Well, this will happen. [laughs] This is something I will have to think more about: power and Homo generator. When we consider Giorgio Agamben's Homo Sacer and the Muselmann -what happens when we have nothing left and we are confined to naked life -it is actually funny that Homo generator and Agamben's project were started at nearly the same time. 8 We did not know about each other at the time.
It is almost the opposite of each other isn't it?
Yes, Homo generator is our power to [resolve] whatever happens: we can get out of anything. We can start anew. It is about natality, in the sense Hannah Arendt gives to it. And mortality, that is the good Agamben's way into the question. What is left from that...? Mortality and natality have a connection. There is no question about it. And the connection is that they might be interchangeable. Every natality is a mortality; it's a going-down. What is not really a problem, in my understanding of Gedinge and fulfilling-itself: you never fulfil yourself without breakdowns and failures, etc., that is what happens mostly. As I said once it is not our failures that kill us; it's our successes. But that was at the time when I was a young philosopher and I said, "OK, you have to change your lifestyle! You people have to change or humanity will go down! It will go down with our species!" And I found myself very powerful. I could threaten death on our species. Nowadays I'm thinking, oh, my god, how ethnocentric have you been at the time?
[laughs] That is one of the things I have to turn down in my book when I republish... But again, going back to this question of need, the easy way out would be Schopenhauer's. And unfortunately the older I am the more I become touched by Schopenhauer. Life is not worth living. But he still lived on, so at least it was one philosopher who did not commit suicide, even though he had everything, all his understanding of suicide. But his point was in his Ethics that "I fight this. I fight this evil will. I fight [the idea] that life is not worth living until my last breath. Because if I would kill myself then I would agree with what I ethically think is wrong." Ach so, he is not agreeing with the survival structure of nature.
To talk about Schopenhauer, then, isn't he also saying that it's impossible to kill yourself, when he's saying that
after you die what is left is what is good in you.
That is one of the misreadings [of his work]. 9 Well, Schopenhauer is not holding the view of the Unsterblichkeit der Seele, there is no Unsterblichkeit.
But the stars remain, the universe remains...
Yes, but in Schopenhauer nature is the worst thing in ethical terms. And that is an interesting point, because nature in many ways will be like the technology coming, the second nature. It has the same total indifference to any human interest or lifestyles, etc. Like nature [it is] totally indifferent. My point is that we are here; our sexual organs are in the middle of the world; we are only here to procreate: that is all. And that is why he said we should stop procreating. If we could we would do that, but we have so many idiots in Islam and they want more and more babies. And besides babies are so fun, you learn so much about the world from being with babies.
But we should not, it's a very cruel move, certainly from the perspective of philosophy, it's a very cruel move to bring another human being able to suffer into this world. But on the other hand, who says that suffering..., nicht, Schopenhauer says that luck is just a moment between two sufferings. But you can also say that suffering is just a moment between two lucks. You know that's the same structure. So many sayings, you know, so many girls you had. It went bad, but then you had one and it went well. This is a metaphor for everything. And the other [statement] is just exercise, so to speak, to learn that. That is, I think, the only proof for [the assertion that] life is not worth living would be AIDS. And that is in some kind of way a Selbsmitleid, you know. 10 You are feeling sorry for yourself. You forget how many good years you had, nicht? I had 60 good years. The last 12 years in Thailand with my Thai girl, she was 40 years younger than I am. So it was a good time every time I was able to go there. OK, I am not able to do it anymore. But should I say that therefore everything that was good before, that I enjoyed, was nothing? That is a very stupid idea, but it is a typical idea for humans. When they have had a relationship to somebody and then a divorce then everything was bad! They forget that they lived [many] years very happily, or most of the time [they were] happy with this girl. No, no! It was bad! How come? How did we lose our ability to judge fairly our own existence? That is the reason why we are always so upset about our failures and so happy about successes. In fact they both should be more viewed in the way of the Stoics. Well, Schopenhauer was against the Stoics. The Stoics were not what we believe nowadays. The Stoics believed in a cosmic system. They were calm because it was just fate, and so on. That is one easy way out.
But again, [with regard to] Heidegger on die Geworfene and wurf; we are thrown into the world, and then Entwurf becomes our project. It was actually not so much a project as such, it was [merely an] interest. But the first part [of Heidegger's approach, that] we are thrown into the world; that is the part of total coincidence, nicht, if you are Rockefeller or a poor boy in Africa, it's totally a coincidence. And it's not that our project can make up for a bad start, [that is the case] only to a certain degree, and only to a very certain degree. And so many people kind of blew it, even when they had a good start. They hate themselves for being privileged.
But I was raised in the workers' and farmers' paradise of East Germany so I have never any feeling that I owe them anything. Like it was in the West, you know, in '68 and '69, the student revolution, the leaders were all bourgeois, rich kids, and they had a kind of bad feeling that the poor worker kids, etc., had not the same chances, and so they became communist or just a leftist liberal. I always understood that the farmers and workers and the functionaries could become the same bad guys when they came into power. So power corrupts. Still, the only power you have is your life power; it's your creative power. That's why the will-to-power in Nietzsche is not a political power. It's a creative power, because there you create -not from nothingness, but from the material at hand -in your life situation. You generate something. And also if it does not work you have to live this, you have to accept it.
That would be the wrong way to see it. That is what media tells us. But the secret task of media is to lie about this, because the entire thing is a total lie. It's not that certain things, that, OK, this young girl gets this prince or not, this [kind of] obvious lie, some as-such. Everything that happens in front of us is what life looks like. And we have to say that, actually, we are lucky that this is not the case.
Isn't that precisely what Slavoj Žižek talks about with his notion of "outsourcing"? Isn't this an "outsourced" life what the media lives for us? Isn't it that our lives are outsourced in this sense?
My point is exactly that -for [by] outsourcing... You outsource, for example, accounting, because we don't want to do it, it's easier this way, but we certainly expect that the person we outsource it to does it right. So in this sense media would have an expectation to outsource correctly what humanity is about. But that Žižek cannot mean. What he would mean is that we outsource it and then we open it up to all the distortions and all these ways we are made to look at things, you know. If we had kept it closer to our own interests we might have done a better job.
Could you also say that when you outsource it you can see it as an object in a way you couldn't prior to the moment of outsourcing -that it enables you to see a lie that was always there?
Žižek is a very fast thinker. He always finds another way to... I met him in New York and I did not see how he could get out of my critique. But he got out of it, and that was not very good. After that I stopped criticising him outside of Saas-Fee.
[laughs] But of course at Saas-Fee I could always use the power of the director! OK, stop now; you talk too long, no discussion. Now, well, the distance that outsourcing allows also allows for critical review, only [it is required that] we don't get involved directly. We always try to lie to ourselves. So we have a better chance now to criticise it. But still there is the idea that, firstly, there is something we can outsource, nicht, that there is a true human life, an authentic human life which can be described and can be understood. And then this [life] will be [transferred into] a different medium -a kind of mimesis, of the Greek -in there and [so] it can be criticised. Because we haven't forgotten the lesson from Aristotle that language is understanding: it is my understanding. And it's Heidegger's understanding. Language, the word, is already an outsourcing. Take "chair": it's is not the chair. It is the word and then we sit on something which is a chair. So language in itself is always already different from the phenomenon, the evidence. With phenomenology we also know that evidence is not so easy to find.
Heidegger's famous remark in Being and Time was that what we see at a first glance is not the phenomenon. That is why real phenomenological work is very long in explaining. It is actually something machines can do better. Do you know this anecdote with Husserl? Husserl came to his lecture, nicely dressed, a conservative, bourgeois guy. And then the lecturer [Husserl] says, "OK, what is the topic? What kind of topic do you want today?" So he asked them for the topic. And somebody said, "Wedding." And then Husserl started for 90 minutes to give a phenomenological description of wedding. And actually, it is very funny, he asked that this will be stored because anybody who ever wanted to know what a wedding is will just go there and read it. It's like the Google of its time, the Wikipedia or Google, they give you all the different kinds, and you didn't pay for it. So Husserl really believed that -well, I'm not saying that this is the end of it, there could be other guys coming and adding to it, because [the description becomes] more and more differenziert, ach so more complex in there -but if this is described correctly then this will stay on for eternity. There is no other way to call this wedding. And for them it was not just a word. It was a word in which all the evidence possible [was gathered, such] as with, for example, this chair: I cannot understand this chair in phenomenological terms, as evidence, without going around it and look at what's behind [it]. How does the chair look from there? How is the chair used in a poem by Baudelaire? It is all part of the phenomenon of chair. And not to forget that chair has so many other meanings, like chair at a conference or some such thing.
So what I am saying is that language always outsources our evidence to the medium of words. And in this outsourcing there is certainly all the lies [that] can come in to it. You don't have the intention of truth, or at least for aletheia, the revealing and concealing. Well, not at the same time: you conceal and then you reveal, and then you conceal it again. It's a process. You wouldn't call it dialectical but it is nevertheless dialectical in many ways. It's only if you are a sophist, if your doxa is that you explain to people that this piece of chair is the best chair ever bought, and you say "Low prices! Great colour!" What has the colour to do with the quality of a chair? In a doxaic way, well, most of the media is like that. But again, this is just a second discussion about the media. In itself media has a task, a secret task. We have given media the ability to create a world which has something to do with us, and some things do not, some things are fantasies. The form in itself claims independence, but it is not true. It is in a disguise of being independent. And only in a few forms of it [is it so], in a very, very few forms, because art sometimes discloses truth, although it has no intention to, it just happens. And all the other people, anyone who has an interest, let alone financial interest, anyone with an interest in that are kind of distorting the evidence.
And with [regard to] Habermas' notion of biases; well, biases are our concrete ways of living our lives. If we don't have biases we would not be able to sit here because we would not know that there is not a way for heaven to open up and the end of the world will come, or will this door ever open, or will your train really come, or... It's not possible; it is absolutely not possible to live without these biases. The problem for Habermas, and he knows this very well, the real problem with biases is that you are not able to criticise biases if you get new evidence. So the problem is not the bias, but your inability or your unwillingness mostly to accept new evidence because you are so used to the other evidence that you are using. You like the other evidence. You like the idea that god exists. And even if the worst thing happens, if no-thing happens and god wouldn't accept... Well, you still find a way around [it], that you slept at the time, or that the devil did it, or that the poor boy couldn't do anything [about it], or it's because Lucifer is the most powerful agent, and so on. There are so many nice stories.
Ethics and character
Christians, they love Nietzsche. There is no better philosopher, because Nietzsche is the true..., well, he proclaims Antichrist. So what better philosopher can there be. And you have talked a little bit about Nietzsche's derision of compassion. And compassion in Schopenhauer you feel is misread by Nietzsche. Can you talk a little bit about...?
Ach so, firstly, what I said, well, [let's begin from] Nietzsche's attack on Plato's idea of truth, because [to Nietzsche the prevailing notion of] truth would not fit to the will-to-power, to the aesthetic will. The artist generates the world; there is no world as-such. There is just a world in my imagination, etc. And Nietzsche didn't do that because of spite, but only because he understood that this idea of truth was also the opening for so many forces. [This was so] because in the name of truth, and because of God having written the truth in the Bible, etc., the name of truth there oppressed and killed so many people. So it is much better to say there is no truth; there is only invention, because every invention could be changed, etc.
You know Nietzsche's ethics is an aristocratic ethics. It is an ethics I live by, but not because other people tell you. God is dead and there is nobody else to tell me what I should do so I have to find it myself. 11 Or as [Mitchell] Feigenbaum said so nicely, just don't do it. When they asked, "What kind of ethics do you have?" I [answered that I] have this ethics: "Just don't do it."
You prefer not to?
11 Mitchell J. Feigenbaum was professor of physics at Cornell University from 1982 and an importantcontributor to chaos theory. To Schirmacher Feigenbaum provided insights into the possibilities and limitations of the science of physics. Initially invited to teach at Saas-Fee Feigenbaum's and Schirmacher's approaches to artificiality turned out not to be commensurable.
Yes, and I have no reason for that. I just don't do it. I just don't murder people or cheat people, I just don't do it. And this is actually so simple in the end. And do we need all these complicated stories, and these biases, and some situations, situation ethics, it might be good to cheat and things like that? It cannot work if it is not you. But if you are a cheat, a born cheat, if anyone is born a murderer or born a cheat, perhaps one per cent, then [they will do that].
But anyway, compassion, Nietzsche's compassion, and Schopenhauer's, also, Buddhist and Christian, they are kind of forms of this understanding that the other, well, you are me. There is no difference between us in suffering. In luck we are all different, but not in suffering. But the Christian, or the conservative, understanding is still that I am the priest, I am the better off. And out of compassion I help you. I am not looking at you as in I am the same poor swine as you are. No, I am better, and even in the very unlikely case that I am ending up in hard [conditions]torture, [questions of] security -well, it's not so unlikely for a philosopher, well, this kind of... It gives you power. Because you can give other people gifts, you can keep them alive. You are the master even when you don't call yourself so. And if you have a religion behind you then you are full master. And this kind [of compassion] certainly Nietzsche was not happy with.
But for Schopenhauer it was totally different. An ethics based on "should" was a no-no for Schopenhauer. He does not say what you "should" do. Ethics describe what humans are able to do. Your ability: that was the main thing. And for Schopenhauer, for example, every person is born with a certain character. And we say how can that be, etc.? But he also said that you cannot know your character until your last breath, because it's happened quite a few times that a very stingy person on his death bed has given his money to somebody, [and] so has done something out of character. But in effect it was not out of character. It was his character! So it means that he had the ability to do that and you cannot find that out just by looking at the person or see what he had done until the end: only then can you make a judgement based on [his] deeds. As Sartre says, existence [comes] before essence.
But what really attracted me to all [these debates] about Schopenhauer was [this]: [Do] you know Agnes Heller? She was at the New School -a philosopher -at the time when I was there too, and also a student of [Georg] Lukács. 12 At the end of her life she turned to Kant and his morality. She gave a lecture at the New School and I got to make her angry. I said, "Agnes, you are at the East Village" -this was at the time when the East Village was so dangerous [that] one could never cross Avenue B because you get mugged or... -"and you are on Avenue B and you get mugged, then explain, please, to the mugger the categorical imperative." [laughs] No way that you can get out of it. But Schopenhauer gives you a chance, because for Schopenhauer compassion is something I do not want. It is kind of a power -the power of compassion -that overcomes me and let me do things I don't want. I don't want [to help] this old lady, but somehow I cannot [but]do it. I could [help] a younger person, maybe, but not at this moment... I understand this old lady, and my old lady and myself: we're all in the same shitty world in there. With concern to caring everything is the same in there. But there is a chance: it happens that you don't do things that you really want to do because of something in you that does not allow you. And this I kind of liked, certainly. It was nothing that you follow or that you were a nice person or something like that. It was an emotion, a psychological power, which really stopped you from doing it. And you don't know if five minutes later it would still work, but it happens. We have this in our lives sometimes which is the compassion you have. Just don't do it.
Cloning humans
Can we just end with a quote from Daodejing (Tao-te Ching): "The work is done, but how it was done noone knows. It is this that makes the power not cease to be." 13 It is through this not-knowing how the work was done that the power persists?
Oh, you go back to the beginning. Yes, certainly Tao-te is very near Gelassenheit, and very near my other insight that it's not that we need to accomplish anything, because everything is already accomplished, and that's why we have no 12 Agnes Heller became Professor of Philosophy at the New School for Social Research in 1986after many years of studies and work under Georg Lukács at the University of Budapest, Hungary. Heller is mostly known for her contributions to Marxist philosophy, although her later work also includes affirmative views of neo-conservative positions. Her current writing encompasses Hegelian philosophy, ethics, and existentialism. 13 So everything that they promise, like "We never clone humans," will only be valid until it's possible. You know there is a well-known Chinese experiment, the sheep, which I called a world historical figure with Hegel: Dolly. Well, she died shortly afterwards, so it was just a possibility. Now they have done it with sheep and with other species, so it will not stop. It will come back, and all their fears [with regard to] how we can clone humans, and how we can clone... Well, these fears are totally unfounded, because every kick you generate, every act of sex, will be different.
The question is why we should do that. Well, the point of cloning would be if humankind will be unable to procreate anymore, and that is a possibility. And another, more likely possibility is that the machines will not allow us to do something like that because they might just instead of kill us let us peter on so we can age and age and then only a few exemplars will be for watching in the zoo and to play with the apes there.
In so far as the idea of the Ereignis -the event -of technology is concerned there is actually a text of mine "From the phenomenon to the event of technology," which I gave as a lecture to the circle of philosophers of technology, including Feigenbaum in there, it was in 1981 I think.
But the point now would be since we are looking for another event, [namely] the event where the machines, the artificial intelligence, comes into its own, the event of the AI. And it's not the AI we know of now, it's not selfdriving cars; it is the A-G-I, the Artificial General Intelligence. It is not a certain artificial intelligence which knows a certain field. It is an artificial intelligence which covers everything which can be intelligently known. And only this kind will be the real challenge of our time.
Gene technology was a challenge, there was an article with the title "The challenge of Gene Technology," which started by saying "The earth needs new human beings, yes, but who can we trust to give us the right human beings? The people in the labs? You must be kidding! Franz von Assisi, maybe. Someone like Feigenbaum, maybe." So I called it "The kiss of the mentor."Â I said [if] only they would agree to certain procedures... But it never happened. The agreement is now just how much money you can make of it. It is totally money based. The money is the real world now. There is no question about it.
So this idea that every human being because of being a human being get a certain amount of money for doing nothing, for just existing, and there are enough nice apartments in which you can survive, and if you want you can maybe earn a little bit more by inventing something. People would exchange favours or things like that. I can see a [future] time, say fifty years, in which this would be a bridge from today's world to a world where it is very open whether humans have any place at all. The only place might be that we are discoverers of what we know already, because the ecological crisis has not stopped at all -which I already said twenty years ago -it will only go underground. Everything which is so obviously bad, like polluted rivers, and so on, we will clean this up. But what is really the ecological catastrophe will go on in other ways, invisible ways or nearly invisible ways. Nobody can stop this as long as it makes money. And every destroyer destroying things and coming up with fakes makes money. Trump is just a very good example of that. A fake being president [who] can send other people to die is really not a good sign for what kind of future humankind has, if it has any.
There I have this problem: why should I care? Hopefully I will find a way to die without pain. But I still have a son who has to live fifty years after me. And he might have a kid, probably, because they are still so used to that. There is still a two hundred year scope [where] I would like to have a world in which people I care about will like to live. But is it possible? A world in which the artificial general intelligence rules, not even rules, because they will not rule. The whole idea is [exactly] that: who rules? There will be made some programme that allows us to rule on, so that we just can do that. But it makes no difference whatsoever, because the machines do not rule. They function. Their only interest is actually to function. And for functioning you need to eliminate everything which is a threat to the function itself. Certainly it is a question of what makes more trouble, the human being or what else? There will be a long time -well, a hundred years or so -in which the trouble humans make are not outweighing, are not worse, than the trouble it is to get rid of the humans. And during that interim the poor humans living there will still have hope. Hope dies at last, the Bible said so. This will happen.
Hope will die at last. | 10,511.2 | 2018-07-01T00:00:00.000 | [
"Philosophy"
] |
An Adaptive Calibration Framework for mVEP-Based Brain-Computer Interface
Electroencephalogram signals and the states of subjects are nonstationary. To track changing states effectively, an adaptive calibration framework is proposed for the brain-computer interface (BCI) with the motion-onset visual evoked potential (mVEP) as the control signal. The core of this framework is to update the training set adaptively for classifier training. The updating procedure consists of two operations, that is, adding new samples to the training set and removing old samples from the training set. In the proposed framework, a support vector machine (SVM) and fuzzy C-mean clustering (fCM) are combined to select the reliable samples for the training set from the blocks close to the current blocks to be classified. Because of the complementary information provided by SVM and fCM, they can guarantee the reliability of information fed into classifier training. The removing procedure will aim to remove those old samples recorded a relatively long time before current new blocks. These two operations could yield a new training set, which could be used to calibrate the classifier to track the changing state of the subjects. Experimental results demonstrate that the adaptive calibration framework is effective and efficient and it could improve the performance of online BCI systems.
Introduction
A brain-computer interface (BCI) provides an alternative communication and control channel between humans and the environment or devices by noninvasive [1][2][3] and invasive approaches [4]. For the noninvasive BCI, the scalp electroencephalogram (EEG) is the most-used modality to convey the user's intentions owing to its low cost and high portability for well-defined paradigms [5]. The well-designed paradigms in EEG-based BCIs include motor imagery [6,7], steady-state visual evoked potentials (SSVEPs) [8][9][10], P300 event-related potentials [11,12], and motion-onset visual evoked potential (mVEP) [13,14]. Among these, mVEP is an important measure for studying the motion vision processing mechanisms of humans and animals. It has already been widely used in such fields as fundamental research and clinical diagnosis.
For the neural mechanism of motion perception and physiological background of mVEP, the literature indicates that mVEP has advantages over other typical VEPs because of its large potential amplitude and minimal differences among and within the subjects [15]. These characteristics make mVEP more suitable in the application of BCIs. mVEP is evoked by the fast-moving visual stimulation and represents visual motion reactions of the middle temporal area and medial superior temporal area. mVEP typically contains three main peaks: a positive P1 peak with a latency of about 130 ms, a negative N2 peak with a latency of about 160-200 ms, and a positive P2 peak with a latency of approximately 240 ms [16]. N2 is the most prominent and stable component of the mVEP. The BCI group from Tsinghua University designed a stimulus paradigm to evoke mVEP and implemented it in a BCI system [14]. mVEP was successfully used to develop a spelling system 2 Computational and Mathematical Methods in Medicine similar to the P300 speller [17]. Because it does not need flashing stimulation or stimulation with sudden changes to evoke mVEP, the subjects are not prone to visual fatigue, which makes mVEP relatively more suitable for subjects in the training process. A common spatial pattern (CSP) algorithm has been proved to be a highly efficient feature extraction algorithm for BCI systems [18]. It aims to find directions (i.e., spatial filters) that maximize variance for one class and minimize variance for the other class at the same time [19]. The eigenvector processing by CSP is beneficial to the target recognition of BCI systems and improves the accuracy of the brain-computer interface system. In the current study, the CSP is used to extract features for the mVEP. For a BCI system, we must collect a sufficient training dataset to train the classifier to implement online tasks. This procedure may be laborious and time consuming. To address this issue, a zero-training strategy and an automatic adapting mechanism have been explored [20][21][22][23]. The user's states could change during experiments due to unexpected environmental factors or internal physiological factor. In addition, EEG signals are highly subject-specific and vary considerably even between recording sessions of the same user within the same experimental paradigm [21][22][23]. Therefore, it is essential for an online system to track the changing states of subjects. In the traditional system, the classifier is usually trained before the online application [24][25][26][27]. When the subject's states change considerably from the previous states during the training stage, it is necessary to take some special measures, such as providing new data recorded from the subjects for the retraining and adjusting the classifier to track the subject's changing states. Some efforts on this topic have been tried [23,28]. The main idea of those studies was to exploit the information in the previous sessions to calibrate the classifier. For example, Krauledat et al. proposed a method in which past sessions were used together to evaluate the prototype filters for the new session to calibrate the classifier. This approach does not need the training set and is the zero-training approach [23]. It seems that this approach only performs the calibration at the beginning stage of a new session. For an online system, when a session lasts for several hours, this method may be ineffective [1,23,29]. Therefore, calibration only at the beginning of a session may not be enough to capture the change of the subject's state that may occur during the experiment. It may be more meaningful to calibrate the classifier adaptively in the different phases of the experiment instead of just at certain specific periods. To this end, it is necessary to mine robustly the information hidden in the previous several blocks of data. The calibration performance largely depends on the reliability of the information represented by the previous blocks that could be used for classifier calibration. However, as for the practical online system, it may be very difficult or impossible to know exactly the tasks reflected by the sample; that is, we may not correctly label a sample with the classifier during the experiment.
The support vector machine (SVM) [30,31] and fuzzy Cmean cluster (fCM) [32] are two different approaches, where the traditional SVM needs the training set for the supervised learning [30,31]. It provides a link between the current block and the previously supervised classifier. fCM as a data-driven classifier does not need prior information as much as SVM for clustering [32], and it emphasizes the local clusters that the current samples form. Apparently, the different aspects of datasets can be reflected by these two different approaches, and the combination of them may provide more-flexible and more-reliable information about the samples.
In this paper, we propose an adaptive online calibration framework that was first used in mVEP-BCI system to calibrate the classifier that could track the changing states of the subjects. To fulfill this goal, the framework needs to adopt the new information in the latest samples and remove the information represented by the old samples, which were recorded a relatively long time previously. We combine SVM and fCM to select the reliable samples from the previous blocks and then clip the expanded training set to remove the old information represented by the old samples. With these operations, an updated training set could be generated and subsequently fed into the classifier for the retraining to track the subject's states. The performance of the framework was tested with the dataset from 11 subjects under the mVEPbased BCI paradigm. The results indicate the satisfactory effectiveness and efficiency of the proposed method.
The structure of this paper is as follows: The framework is introduced in Section 2, Section 3 presents the results when the adaptive calibration is used for the recorded dataset, and the discussion of the results and conclusions are given in Section 4.
The Traditional Training Protocol of a BCI Classifier.
For most of the current BCI classifiers, training is usually implemented before the online experiment; that is, the training and test are not interactive [1,[33][34][35]. Figure 1 shows a flowchart for the traditional BCI classification used to classify BCI tasks.
The diagram reveals that the training set is usually fixed after the training procedure, and no new samples in the test set are adaptively updated into in the training set. For an online BCI system, the training set may be collected on different days, and the experiment may last for a relatively long time. Inevitably, the patterns according to the specific tasks may vary over time due to the nonstationarity and nonlinearity of EEG signals [33]. Therefore, the subject's state will surely change during the test stage compared with the state during the training stage. When the states are largely different in the two stages, the trained classifier may fail to decode the samples during new test sessions [21][22][23]. At this moment, the performance of the classifier will be inevitably lowered.
Adaptive Classifier Calibration Framework during the Experiment.
Considering that the individual subject's state will vary during the experiment, it is beneficial to adapt the classifier to new data involving the varying states and to retrain it [21][22][23]28]. To implement the adaptive mechanism, a direct approach is to integrate some new samples into the training set. However, it may be difficult to assign a reliable label to those new samples during the experiment. Obviously, once some unreliable samples are included in the training set, they may have a negative effect on the following classification performance [20,31]. Therefore, it is vital to label the samples correctly and then add these reliable samples into the training set for further classifier calibration.
The traditional SVM classification strategy may not yield satisfactory performance without recalibration using new samples when the bias between the training set and test set cannot be ignored. SVM can provide the ability to discern how reliable an assigned label of a test sample is [31,36]. It needs supervised training with training datasets; that is, the classification largely depends on the prior training data [31,34,36]. Unlike the SVM classifier, fCM is a kind of data-driven approach to classify the set without the training procedure [32]. The only prior information needed for fCM is the number of clusters, which is usually known for the BCI system. Apparently, fCM and SVM are two complementary approaches for classification, in that the former focuses on the similarity between the current sample and the previously labeled samples, whereas the latter aims at the current data distribution. Both can provide the probability (confidence interval) that indicates reliability for this classification. Certainly, the combination of these two methods can guarantee that the samples are classified with an accuracy that is more reliable than the single method. The following assumptions are considered in an adaptive BCI online system: (a) The variance of subject's states will lead to classifier bias. (b) The classifier calibration needs to be performed during a certain interval. (c) The training set size cannot be too large for classifier training.
Based on these three assumptions, we proposed an adaptive framework for classifier calibration for a mVEPbased BCI system. The framework is shown in Figure 2.
The "new training set generation" process is the core of this framework and determines the performance of online BCI systems. If it is removed, the framework presented in Figure 2 becomes the traditional one. Considering the two-class task experiment, the detailed procedure of new training set generation is further revealed in Figure 3.
In Figure 3, the procedure for generating a new training set consists of Steps (A), (B), (C), and (D). For the adaptive classifier calibration, the procedure should include the new samples that can account for the subject's new state in the training set, and it should exclude the old samples that were recorded a relatively long time before current samples from the training set. The detailed implementation for the four subprocedures is elucidated as follows.
Step (A) (label samples with SVM). In this step, after the SVM classifier is trained by the old training set, the samples in session − 1 are classified by this classifier. The output of this SVM classifier provides two kinds of information: the labels of samples and the probabilities denoting the reliability of those predicted labels [30,36].
Step (B) (label samples with fCM). fCM is applied to the samples in session − 1. Owing to the two-class classification task for mVEP data, fCM could classify the data into two clusters, M1 and M2, with cluster centers U1 and U2, respectively. As for clusters M1 and M2, we only know that these two clusters belong to different tasks and cannot exactly determine which labels (i.e., tasks) are assigned to M1 and M2. To label these two clusters, a matching technique is adopted. First, for the training dataset, the two centers C1 and C2 for the two tasks can be obtained by averaging the corresponding features. Then, the center U1 is compared with the centers C1 and C2. If U1 is much closer to C1, the samples in cluster M1 will be assigned with labels as the samples for Task 1 and samples in cluster M2 assigned with labels as samples for Task 2. Otherwise, samples in M1 and M2 are assigned with the labels as samples for Tasks 2 and 1, respectively. Besides the two clusters, fCM also generates a membership probability to indicate the reliability of each trial when it is assigned with the corresponding label [32].
Step (C) (select reliable trials). Based on the probabilities obtained with SVM and fCM that can indicate the classification reliability, we define a criterion to select reliable trials. Only the trials that have the same labels assigned by SVM and fCM can be regarded as potential candidates. Furthermore, we set an acceptance threshold (0 ≤ ≤ 1) for the selection operation. Let svm ( ) and fCM ( ) be the probabilities provided by SVM and fCM for the th trial, respectively. If svm ( ) > and fCM ( ) > , then this trial will be selected as the reliable trial for succeeding classifier calibration.
Step (D) (clip the expanded training set). The subject's state may change over time; therefore, the samples in the training set recorded a relatively long time ago may have different characteristics and have a negative influence on classifier performance. Removing the redundant samples from the training set is necessary to include a fixed number of samples during the online experiment. This procedure is necessary for this adaptive classifier calibration framework. Without this clip procedure, the training set will grow quickly so that the training of the classifier will be unacceptable for the online system due to time-consuming training. Denoting the fixed number of the training samples as M, we label each sample with a time stamp in reverse time order. Specifically, the last added sample is labeled 1, the one before is labeled 2, and so on. When the size of the training set is larger than M, the clip procedure is implemented. We remove the samples that have a time stamp larger than M and keep the rest.
Considering that the subject's state will not greatly change in a relatively short time period, the calibration is performed at a certain time interval. In the current study, we adaptively updated the training set at a certain number of experiment blocks. Each block consisted of five trials that lasted for 1.5 s. With this framework, some new reliable samples could be integrated into the training set, while some old samples were excluded from the training set. In our work, SVM is used to classify the samples based on the expanded training set, and other classifiers, such as linear discriminate analysis (LDA) [33], Bayesian linear discriminate analysis (BLDA) [35], and kernel spectrum regression (KSR) [37], could be considered to replace SVM for classification.
Experimental Paradigm and Subjects.
Eleven subjects (three females and eight males, age 23.6 ± 1.2 years) participated in the experiment. They had either normal vision or corrected-to-normal vision. The Institution Research Ethics Board of the University of Electronic Science and Technology of China approved the experimental protocol. All the subjects read and signed an informed consent form before they participated in the experiment.
A 14 in LCD monitor with a 1280 × 1024 resolution and 60 Hz refresh rate was used to present the visual stimulus graphical user interface (GUI) with a visual field of 30 ∘ × 19 ∘ on the screen, as shown in Figure 4. The six virtual buttons labeled with 1, 2, 3, 4, 5, and 6 were embedded in the GUI. Each button with a visual field of 4 ∘ × 2 ∘ was composed of a red vertical moving line and a vacant rectangle where the line existed.
For each button, the red line appeared and moved from the right side of the rectangle and disappeared at the leftmost side. The entire process formed a brief motion-onset stimulus and took 140 ms with a 60 ms interval between the consecutive move processes. Each motion-onset stimulus appears randomly in the corresponding virtual button, and all the stimuli appeared before others were repeated. A trial had six successive stimulus periods corresponding to the six buttons. Specifically, a trial included a series of six red vertical moving lines across each virtual button successively. Therefore, when there was a 300 ms interval between two trials, each trial lasted for 1.5 s, as shown in Figure 5. In addition, five trials formed a block, which lasted for 7.5 s.
In the experiment process, each subject was asked to focus on the button presented in the center of the GUI, where the random number appeared. And the subjects were required to calculate mentally the number of moving stimulus occurrences in the target button. A total of 72 blocks (including 360 trials) were collected for each subject in two separate sessions, and there is a 2 min interval for rest between the sessions. In the following process, the first session was used as the training set, and the second session was used as the test set. For the training set, we averaged five trials for each virtual button in each block. Then, we could get one target stimulation sample and five standard stimulation samples, where the sample in the current work refers to the 0.5 s long EEG recording corresponding to the stimulus. One standard stimulation sample was randomly selected and combined with the target stimulation sample as two samples. Thus, the data collected from each subject contain 36 pairs of samples to constitute the training set. For the test set, we also averaged the five trials for each virtual button, resulting in one target stimulation sample and five standard stimulation samples, and then we totally obtained six samples for one block in the test set. It was a binary classification problem for mVEP recognition. We needed to conduct six times the two 6 Computational and Mathematical Methods in Medicine classifications, and then we compared these output values to recognize the button at which the subject gazed. In this study, the accuracy was used to measure the subjects' performance, which is the ratio of the correctly classified blocks to the total blocks in the test set. It is obvious that the higher the recognition accuracy, the better the performance of mVEP-BCI.
By using a Symtop amplifier (Symtop Instrument, Beijing, China), eight Ag/AgCl electrodes (O3, O4, P3, P4, CP1, CP2, CP3, and CP4) from an extended 10-20 system were placed for EEG recordings. AFz electrode was adopted as reference. The EEG signals were sampled at 1000 Hz. There usually was noise contaminating the scalp-recorded EEG signals, and, in our work, those samples with absolute amplitude above the 50 v threshold were considered to be contaminated with strong artifacts and abandoned in the following analysis. Because the mVEP is usually distributed in the low-frequency band, EEG data were bandpass-filtered between 0.5 Hz and 10 Hz. Data between 150 ms and 300 ms were used to extract features with the CSP algorithm. The one pair of CSP filters was selected to filter the dataset. The log-variances of the spatially filtered data were fed into the classifier for training or test task recognition.
Results
This section details the performance evaluation of the proposed approach under various conditions based on the accuracy and information transfer rate. The accuracy is defined as the ratio of the number of correctly recognized targets to the number of targets overall. Besides accuracy, the corresponding information transfer rate (ITR) is another standard criterion to measure the BCI performance. Generally, ITR is defined as where is the number of selectable items, is the selection accuracy, and is the average time in seconds for finishing one selection.
Effect of the Calibration Interval.
As the subject's state may change during certain intervals, in this section, the effect of the calibration interval on the classifier performance is explored. Specifically, we study the performance of the classifier when it is calibrated with different numbers of blocks. · · · · · · 140 ms 60 ms 60 ms 60 ms 60 ms 60 ms 60 ms 300 ms Figure 5: Timing scheme of the mVEP experiment. Each block contains five trials. In each trial, the motion stimulus appears in the virtual button for 140 ms. There is a 60 ms interval between two consecutive stimuli and a 300 ms interval between two consecutive trials. Tables 2 and 3. We find that the combination of SVM and fCM yields better performance with higher accuracy and ITR than a single method. Additionally, the performance obtained by the combination strategy is significantly higher than the performance obtained by the method without adaptive calibration under three interval conditions.
Effect of the Threshold for Reliable Sample Selection.
In this subsection, the influence of the threshold for reliable sample selection on the calibration performance is explored. Five values, that is, 0.6, 0.65, 0.7, 0.75, and 0.8, were tested. The calibration was performed every four blocks. Table 4 gives the overall accuracies and ITRs when different thresholds were used. The results of a single SVM calibration and single fCM calibration under different thresholds are also shown in Tables 5 and 6. We observe that the calibration with the combination of fCM and SVM shown in Table 4 provides better performance (i.e., higher accuracy and ITR) with a significant difference compared with single fCM or SVM calibration at each threshold. In addition, the threshold of 0.75 provides the best performance. These results further confirm that the proposed framework combining fCM and SVM is feasible and effective and it is superior to the two methods when they were implemented independently.
Discussion and Conclusion
The calibration of the classifier is an open issue for BCI online systems, and the application of the information contained in the new samples is one feasible solution to this issue [20,22,28]. However, for the practical online system, it is impossible or difficult to label the samples as indisputably correct. To address this problem, we proposed an adaptive classifier calibration framework. In this framework, we labeled the samples according to the outputs of SVM and fCM, and we chose the reliable samples to update the training set that was 8 Computational and Mathematical Methods in Medicine used to recalibrate the classifier. Moreover, two parameters, that is, the calibration interval and the threshold for reliable sample selection, were studied. We systematically tested the effects of the two parameters on the classifier performance. As shown in Table 1, when the calibration interval varies, the calibration effect for the classifier is different. Among the three different intervals tested, the interval including four blocks demonstrates the best performance, and the average accuracy is 88.4% and the average ITR is 14.7. However, compared with the original SVM approach without calibration, whatever the three calibration intervals the calibration approach adopts, the classification accuracy is significantly improved. The average classification accuracies of four-block interval, six-block interval, and nine-block interval are improved from 85.6% to 88.4%, 88.2%, and 87.4%, accompanied by the improved ITRs from 13.5 to 14.7, 14.6, and 14.2, respectively. For a practical online system, there is no doubt that the subject's state may change during the experiment, but it may not be necessary to calibrate the classifier for each block or at a short interval. If the calibration is frequently adapted, the efficiency of the online system may be lowered due to the extra calculation involved. Moreover, the subject's state within a certain duration will be kept relatively stable, so the feasible way is to calibrate the classifier after a certain long period. However, the calibration interval cannot be too large, or it may fail to adapt the classifier to track the subject's state on-time.
As shown in Table 2, when we only adopt the outputs of SVM to find the reliable samples to update the training set, the average performances (i.e., the accuracy and ITR) of the three calibration intervals are 85.1% (13.3 bits), 85.1% (13.4 bits), and 84.6% (13.2 bits), respectively. Compared with the original SVM approach without classifier calibration, the average performance is not improved. We can see similar results in Table 3. When we only use the outputs of fCM to find the reliable samples to update the training set, the average performances of the three calibration intervals are 85.4% (13.5 bits), 85.1% (13.3 bits), and 84.3% (13.0 bits), respectively. Obviously, the average performance evaluated with accuracy and ITR has not yet been improved. For each subject, compared with the method of combining SVM and fCM to perform the classifier calibration, most of the subjects' performance becomes worse for the single method. We can see that the adoption of a single fCM or SVM for calibration is not sufficiently effective. The method using the combination of SVM with fCM is superior to single SVM or single fCM and may provide more-reliable information about the new samples.
The reliability of the selected sample is crucial for the classifier calibration. The performance improvement of the calibration approach is mainly due to the use of the information in the new samples to retrain the classifier. In this framework, the combination of two different approaches can reflect the different aspects of samples to mine the information hidden in the new samples. As shown in Table 4, when the threshold is varied as 0.60, 0.65, 0.7, 0.75, and 0.80, the calibration approach gives the classification with average performances of 87.9% (14.4 bits), 87.6% (14.3 bits), 88.1% (14.5 bits), 88.4% (14.7 bits), and 87.1% (14.2 bits), respectively, compared with the baseline 85.6% (13.5 bits) of the original SVM classifier. The results show that the selection of the threshold influences the performance of the calibration approach, and the best accuracy of 88.4% and highest ITR of 14.7 bits were achieved when the threshold was 0.75. The threshold serves as a filter to differentiate the reliable samples and unreliable samples, and a larger threshold will facilitate selection of the more reliable samples. For the lower thresholds, these thresholds could not guarantee the selection of the samples with high-confidence probability; that is, some incorrectly labeled samples could be added to the training set. Obviously, those mislabeled samples may give incorrect information for classifier calibration (training). Therefore, the performances of thresholds that are 0.60 and 0.65 are lower than those of 0.75. When the threshold becomes too large, few reliable samples can be selected in one calibration interval, and the small number of new samples in the training set will not be enough or effective to calibrate the classifier, which may be the main reason that the performance of 0.8
12
Computational and Mathematical Methods in Medicine is not compared with that of 0.75. As shown in Tables 5 and 6, we found that when only SVM or fCM was adopted to perform the classifier calibration, the average performance of the five thresholds was not obviously improved compared with the original SVM approach. For each subject, compared with the combination of SVM and fCM to perform the classifier calibration, most of the subjects' performance becomes worse when the single SVM classifier is used for calibration. Therefore, these results further confirmed that combination of SVM and fCM to perform the classifier calibration could provide better performance to find reliable samples.
In summary, Tables 2, 3, 5, and 6 consistently revealed that when calibration was performed by the single SVM or fCM, it did not show obvious improvement, whereas when SVM and fCM were combined to calibrate the classifier, higher performance with higher accuracy and ITR was exhibited. The difference is attributed to the enhanced ability of the proposed approach to capture reliable information from the testing set. To further reveal this difference, we analyze the different effects when using the combination of SVM and fCM, single SVM, and single fCM to calibrate the classifier for each session. Table 7 shows the correct number of samples, the total number of samples updated into the training set, and the ratio of the two kinds of samples from a representative subject (Subject 1) by adoption of the three methods, respectively. Figure 6 shows the corresponding identification accuracy for each session. The threshold for reliable sample selection was set at 0.75, and the calibration interval was set at four blocks.
From Table 7, we observe that the ratio of screened samples with correct labels to total screened samples is higher when using the combination of SVM and fCM than with SVM or fCM to perform the classifier calibration. It is obvious that the combined calibration method could provide a training set with more-reliable new samples. Similarly, from Figure 6, we see the differences of identification accuracy among the three calibration methods for each session. The combined method of SVM and fCM is overall better than the other two methods, which is attributed to the higher sample ratio of corrected labels to be added to the training set by the combined approach. SVM and fCM are two different approaches to handle the unlabeled samples: SVM needs the training set to train the classifier, while fCM is a kind of data-driven approach that can classify the dataset without the need to have the training set [31,32,34,37]. The reliability of the information added to the training set will determine the algorithm's performance [22,23,31]. In this work, the outputs of SVM and fCM were combined to find the reliable samples to update the training set, which may make the classifier calibration more robust.
After new samples were added to the training set, the clip technique was used to remove the old samples that were recorded a long time before current blocks. This technique facilitates the BCI online system in two ways. First, the removal of the old samples will be helpful to track the subject's state, because those old samples may represent the different subject's stage from the current stage, and their utilization for training could distort the classifier. Second, the online system requires a not-too-large training set for effective training [28,33], and the clip technique can keep the size of the training set fixed.
The result in this work is offline analysis for mVEP-BCI data of our lab. We will transplant this framework to our BCI online system in the future. Moreover, SVM is used in the current version and other classifiers, such as LDA [33,38], BLDA [35], and KSR [37], could be adopted. To have a relatively fair comparison, the default setups of SVM are used in the current work. If the SVM parameters are optimized with a technique like grid searching [36], the performances for both approaches may be further improved, but we think the relative performance between them will be similar to that reported here. Our framework aims to calibrate the classifier adaptively during the long experiment duration, and it still needs certain training procedures to train the classifier initially; that is, our framework is not the zerotraining online system [23].The data-driven fCM can classify the dataset without the training procedure, and we will study the possibility of extending this system to zero training. The adapting strategy used in the framework assumes that the subject's state will gradually change during the experiment (i.e., it is possible to track the state's change). If the subject's state varies abruptly, our calibration framework may fail to track this change. Under this special condition, it may be necessary to provide the subject with a new training session for totally new classifier training.
In all, the above results demonstrate that the proposed adaptive calibration framework, which was first used in the mVEP-BCI system, can improve the BCI classifier performance. The core of the proposed framework is adaptively updating the training set and recalibrating the classifier. One way of updating is to add the novel information that can reflect the subject's current state to the training set, and another way is to remove the old information from the training set. By merging information in the new samples with the training set, the classifier could track the changes of the subject's states. The feasibility and effectiveness were verified by the real offline EEG data. Accordingly, the proposed framework is a promising methodology for adaptively improving the mVEP-based BCI system, and it could be generalized to other BCI modalities. | 7,714 | 2018-02-26T00:00:00.000 | [
"Computer Science"
] |
A Framework for the Magnetic Dipole Effect on the Thixotropic Nanofluid Flow Past a Continuous Curved Stretched Surface
The magnetic dipole effect for thixotropic nanofluid with heat and mass transfer, as well as microorganism concentration past a curved stretching surface, is discussed. The flow is in a porous medium, which describes the Darcy–Forchheimer model. Through similarity transformations, the governing equations of the problem are transformed into non-linear ordinary differential equations, which are then processed using an efficient and powerful method known as the homotopy analysis method. All the embedded parameters are considered when analyzing the problem through solution. The dipole and porosity effects reduce the velocity, while the thixotropic nanofluid parameter increases the velocity. Through the dipole and radiation effects, the temperature is enhanced. The nanoparticles concentration increases as the Biot number and curvature, solutal, chemical reaction parameters increase, while it decreases with increasing Schmidt number. The microorganism motile density decreases as the Peclet and Lewis numbers increase. Streamlines demonstrate that the trapping on the curved stretched surface is uniform.
Introduction
Non-Newtonian fluid flows have already captivated the attention of researchers. These materials are used extensively in bioengineering, geophysics, pharmaceuticals, chemical and nuclear industries, polymer solutions, cosmetics, oil storage engineering, paper manufacturing, and other fields. Clearly, no single constitutive relationship can account for all non-Newtonian materials based on behavioral shear stresses. It is distinct from Newtonian and creeping viscous fluids [1]. As a result, several non-Newtonian fluid models have been proposed [2][3][4][5]. One such model is the thixotropic fluid model. The shear thinning fluid differs from the thixotropic fluid in that the shear thinning fluid has study bioconvection through the use of non-linear chemical and thermal radiation in a rotational fluid. Hady et al. [46] studied the unsteady bioconvection thermal boundary layer flow in the presence of gyrotactic microorganisms on a stretching plate and a vertical cone in a porous medium. Recent investigations on bioconvection can be found in the references [47][48][49][50][51][52][53][54][55][56].
The current study discusses the magnetic dipole effect on thixotropic fluid with heat and mass transfer, as well as microorganism concentration passing through a curved stretching surface. The Darcy-Forchheimer model is used to describe the flow in a porous medium. Thermal radiation and viscous dissipation effects are also taken into consideration. Through appropriate similarity transformations, partial differential equations are transformed into ordinary differential equations and solved using a well-known technique, namely homotopy analysis method HAM [57][58][59]. Many researchers [40,47,[60][61][62][63] have used HAM to solve their research problems. The results obtained are used to discuss graphically the effects of all the relevant parameters on all dimensionless profiles.
Methods
Two-dimensional hydrodynamic incompressible ferromagnetic thixotropic nanofluid past a stretched curved sheet under the influence of magnetic dipole is considered. x and y are used for curvilinear coordinates. The stretching surface is curled in a radius circle R . Based on the linear velocity u = Ax (A is constant), the sheet is stretched in the x-direction and y-direction, which is transverse to x-direction. The magnetic field of strength B 0 is perpendicular to the flow direction. The surface is submerged in a non-Darcy porous medium. As the Reynolds number (due to a magnet) is smaller in the present problem, the electrical and induced magnetic fields are ignored. Convective heat and mass transfer conditions are observed. In addition, a chemical reaction of the first order is also considered.
In conjunction with the above assumptions, the boundary layer of the equations involved are governed by the following terms [7,26,27,29,30] ∂{(y + R )v} ∂y ρ v ∂u ∂y with boundary conditions where velocity components are (u,v) in the radial (x-direction) and transverse (y-direction), k m is the mass transfer coefficient, h 1 is the convective heat transfer coefficient, R 1 and R 2 are the material constants, diffusion coefficient is D, constant fluid density is ρ, k T is the thermal conductivity, σ is the electrical conductivity, k o is permeability of porous medium, the effective dynamic viscosity is µ, magnetic permeability is µ o , heat capacitance is (ρc p ), first order chemical reaction parameter is K c , microorganisms diffusion is D m , speed of gyrotactic cell is W c , b is the chemotaxis, C b is the drag coefficient, S 1 is the porosity of porous medium, T is the temperature, C is the concentration, N is the gyrotactic microorganisms concentration, and C ∞ , T ∞ , and N ∞ , respectively, stand for the nanoparticles concentration, temperature, and density of microorganisms far away from the surface. Rosseland and Ozisik approximation allows to write the radiation heat flux q r with σ * Stenfan-Boltzman, and β R mean absorption coefficient [64] as:
Magnetic Dipole
The characteristics of the magnetic field have an effect on the flow of ferrofluid due to the magnetic dipole. Magnetic dipole effects are recognized by the magnetic scalar potential Φ [29] shown in Equation (10) where γ stands for magnetic field strength at the source, c is the distance of the line currents from the leading edge. H x and H y are taken as the components of magnetic field as shown in Equations (11) and (12) The magnetic field H is usually proportional to the components of magnetic field H x and H y , gradient along x and y directions respectively. It is therefore defined in Equation (13) as It is considered that the temperature-dependent variation of magnetization M is linear as shown in Equation (14) where K 1 identifies the coefficient of the ferromagnetic. The physical schematic of the heated ferrofluid can be seen in Figure 1. Considering the following transformations [26], with ν as kinematic viscosity, A is constant: By the application of Equation (15), Equations (2)-(8) provide the following Equations (16), (18)-(25) To eliminate the pressure term, integrating (16) to get p and replacing it, then (17) becomes and the boundary conditions become where A 1 is the ratio of rate constants, α 1 is the curvature parameter, d is the dimensionless distance, Nn 1 and Nn 2 are the non-Newtonian parameters, β is the ferrohydrodynamic interaction parameter, heat dissipation parameter is λ, ε is the curie temperature, Prandtl number is Pr, radiation parameter is Rd, Eckert number is Ec, chemical reaction parameter is δ, the Schmidt number is Sc, local inertia parameter is Li, porosity parameter is P 1 , Lewis number is Pe, Lewis number is Le, thermal Biot number is Bi 1 and concentration Biot number is Bi 2 , which are defined by The quantities of interest, such as coefficient of skin friction, local Nusselt, Sherwood and local density numbers, are determined by where By putting values from Equation (28) in Equation (27), it is obtained that
HAM Solution
The initial guesses and the linear operators are taken as Equation (30) satisfies the properties as given below where E i (i = 1, . . ., 9) indicates the arbitrary constants. The corresponding zeroth order form of the problems are where q ∈ [0, 1] is the embedding parameter while N f , N θ , N φ , and N χ are the nonlinear operators.
The m-th order deformation problems are as follows where The general solutions are given by
Convergence Analysis of the Homotopy Solution
The nonzero auxiliary parameters are involved in the homotopy solution. These parameters are extremely important in controlling and adjusting the convergence acquired by the homotopic series solutions. The h-curves at the 15th order of approximations are sketched to show the acceptable approximate region of convergence. Figure 2 depicts the region as falling within the ranges −1.
Discussion
The velocity behavior with the ferromagnetic hydrodynamic interaction parameter β can be seen in Figure 3. It demonstrates that the velocity decreases as β increases. Ideally, the resistance force known as Lorentz force [65] increases with the β increase, and the velocity field decreases. Figure 4 is used to investigate the effect of curvature parameter α 1 on the velocity profile. It is clearly shown in the figure that the velocity component decreases for larger α 1 . Figures 5 and 6 describe the effects of the thixotropic parameters Nn 1 and Nn 2 on the velocity profile. From these figures, it is observed that Nn 1 and Nn 2 result in an increase in fluid velocity. Ideally, Nn 1 and Nn 2 are associated with the properties of shear thinning, which show a time-dependent changes in viscosity. The higher the fluid under shear stress, the lower the viscosity of nanofluid, which will ultimately lead to an increase in fluid velocity. Figure 7 is used to present the velocity behavior with the porosity parameter P 1 . The presence of porous medium slows down the field of the flow, resulting in an increase in shear stress on the curved surface, and therefore the velocity profile shows a declining trend by increasing the values of P 1 . In contrast to the effect seen with P 1 , change in local inertia parameter Li results in an increase in velocity as shown in Figure 8. Figure 9 is used used to examine the effect of β on temperature. Here, temperature increases with higher values of β. The temperature profile behavior relating to the higher values of thermal Biot number Bi 1 is shown in Figure 10. The parameter Bi 1 significantly promotes the temperature field in a positive manner attributable to the effective convective heat effects. It is also observed that there is no heat transfer at Bi 1 = 0. The effect of the heat dissipation parameter λ on temperature is shown in Figure 11. The temperature is a decreasing function of λ. Physically thermal conductivity of liquid decreases with larger λ, and therefore the temperature decreases. The Eckert number Ec attributes to the temperature profile is shown in Figure 12. For larger Ec, temperature and thermal boundary layer thickness were observed to be effected with the increase in Ec. In this phenomenon, the heat energy stored in the fluid is caused by friction forces that increase the temperature. The Curie temperature parameter ε effect on temperatureprofile is shown in Figure 13. The temperature decreases through larger values of ε. Thermal conductivity of the liquid increases with the larger ε. The effect of Prandtl number Pr on temperature profile is shown in Figure 14. The temperature distribution and thermal boundary layer are reduced by higher values of Pr, due to which thermal diffusion is reduced. In addition, fluids with a smaller values of Pr slowly decay compared to liquids with larger values of Pr. The effect of radiation parameter Rd on temperature profile is discussed in Figure 15. The increase in temperature curves with a larger boundary layer thickness is determined by an increase in Rd. Usually, mean absorption coefficient decays for higher estimation of Rd and diffusion flux occurs as a consequence of the temperature gradient, which therefore increases the temperature.
The effect of the concentration Biot number Bi 2 on the nanoparticles concentration profile is shown in Figure 16. In this case, the concentration is increased in response to increase in the Bi 2 values. Figure 17 shows the effect of the Sc on concentration profile. Since Sc is the ratio of momentum to mass diffusivity, the increase in Sc causes a decay in mass diffusivity, thus leading to a decrease in nanoparticles concentration. Figure 18 shows the effect of the curvature parameter α 1 on the nanoparticles concentration profile. The increase in the curvature parameter results in an increase in the concentration. Figure 19 shows the effect of the chemical reaction parameter δ on the concentration profile. The nanoparticles concentration is observed to increase for the higher estimates of δ. In fact, the consumption of reactive species rapidly declines as δ becomes larger. Figure 20 shows the effect of Peclet number Pe on the microorganisms profile. There is a clear relationship between the reduced density of the microorganisms and the increase in Pe. The higher values of Pe indicate the minimum motile diffusivity. Figure 21 shows the impact of Lewis number Le on microorganisms concentration profile. The decrease in the concentration distribution is shown as the Lewis number increases, since it is inversely proportional to the mass diffusion.
The effect of the dimensionless variable ζ on the streamlines is shown in Figures 22 and 23. It is shown that the number of the trapped boluses increases as the values of ζ increase, and the streamlines have also been identified to be perpendicular to the surface. The increase in the ζ increases the shearing motion, which, in fact, results in a higher precession of the flow to the stretching surface. Table 1 shows a numerical analysis of the skin friction coefficient for β, α 1 , P 1 , Li, Nn 1 , Nn 2 . It is discovered that the skin friction coefficient increases with the increasing values of β, P 1 , Li, Nn 2 , while a reverse trend is observed for α 1 and Nn 1 . Table 2 cross-checks the accurateness of the homotopic solution used in the present investigation. A comparison of skin friction coefficient for the different values of α 1 with the study [66] is shown for Table 3 shows the numerical assessment of the local Nusselt number for various values of β, α 1 , λ, Pr, Rd, ε, Ec, Nn 1 , Nn 2 . It is observed that the local Nusselt number decreases with increasing values of β, α 1 , λ, Nn 1 . Table 4 shows the numerical values of the local Sherwood number for various values of α 1 , Sc, δ. It is observed that the local Sherwood number decreases with the increasing values of parameters. The tables clearly show that the current findings are completely consistent.
Conclusions
The Darcy-Forchheimer hydromagnetic flow of thixotropic nanofluid through a curved stretching sheet with thermal radiation and chemical reaction in the presence of heat and mass transfer, gyrotactic microorganisms, and magnetic dipole is explored. The present study contributes to the findings set out below.
•
The velocity decreases with increasing values of ferromagnetic parameter β and a curvature parameter α 1 , while it increases with increasing values of Nn 1 , Nn 2 and P 1 . | 3,322.2 | 2021-06-07T00:00:00.000 | [
"Engineering",
"Physics",
"Environmental Science"
] |
Population genetics of zig-zag eel (Mastacembelus armatus) uncover gene flow between an isolated island and the mainland China
Introduction Mastacembelus armatus is a commercially valuable fish, normally distributed in southern China and Southeast Asia. The natural population size of M. armatus is shrinking in recent years because of overfishing and habitat loss. In order to clarify the genetic diversity and differentiation of M. armatus populations, we collected 114 samples from eight populations in southern China and Vietnam and analyzed their population structure using nuclear ribosomal DNA sequences, the concatenated 18S and ITS2 regions. Methods Genomic DNA from the fin clip was extracted and sequenced on an Illumina novaseq 6000 (Illumina, USA) high-throughput sequencing platform in accordance with the manufacturer’s instructions. After assembly and annotation, haplotype diversity, TCS network analysis, AMOVA analysis, population pairwise genetic distances, and UPGMA tree construction were conducted based on the concatenated sequences of 18S and ITS2. Results and discussion In total, eleven nrDNA haplotypes were detected based on the concatenated sequences of 18S and ITS2. Amongst, three haplotypes were the main haplotypes, as representatives of three corresponding Clusters. There were two major Clusters in China, however, the Cluster in Vietnam was significantly divergent from the other two in China, likely due to the lack of river connection between China and Vietnam. Interestingly, based on low FST value, we found that gene flow occurred between the isolated island, Hainan Province, and the mainland China of Guangxi Province, probably as a result of exposed continental shelf connected them during glacial periods. In general, combing our data and literature data, genetic diversity and differentiation of M. armatus populations are relatively high regardless of spatial scale, although their natural population size is declining. This suggests that it is not too late to adopt measures to protect M. armatus, which benefits not only species itself but also the whole ecosystem.
Introduction
Mastacembelus armatus, the common name is zig-zag eel or tire-track spiny eel, is an economically important fish, belonging to the Order Symbranchiformes (Family: Mastacembelidae; Genus: Mastacembelus). Among the four species of the genus, M. armatus is the largest (Serajuddin and Pathak, 2012). It is widely distributed in southern China, mainly in Yangtze River and Pearl River (Xue et al., 2020), and Southeast Asia, such as India, Thailand, Nepal, Vietnam, Sri Lanka, and Pakistan (Hossain et al., 2015;Gupta and Banerjee, 2016;Han et al., 2019). Usually, it inhabits rivers, streams, ponds, beels and inundated fields (Hossain et al., 2015;Gupta and Banerjee, 2016). M. armatus is a carnivorous fish, it prefers to feed on crustacean and insect larvae when young while the adults devour small fish and tadpoles (Hossain et al., 2015). M. armatus is in high demand on the market as it attracts consumers with its delicious taste, no intermuscular spines and high nutritional value (Gupta and Banerjee, 2016;Li et al., 2016;Xue et al., 2020). Besides, the appealing color pattern of M. armatus makes it a popular aquarium fish as well (Gupta and Banerjee, 2016).
However, due to overfishing and habitat loss, the wild population size of M. armatus gradually declines year by year (Hossain et al., 2012;Rahman et al., 2016;Xue, 2018). M. armatus is designated as an endangered species in Bangladesh (IUCN Bangladesh, 2000) and has been classified as least concern by the International Union for Conservation of Nature (IUCN) (IUCN, 2019). In addition, largescale artificial breeding has not been achieved for M. armatus (Jiang, 2018). Therefore, it is urgently needed to clarify the present condition of natural populations of M. armatus, particularly their genetic diversity and structure, thus providing basis for their biological conservation. As a native species in China, M. armatus is assigned as a key protected wild aquatic animal by Fujian, Guangdong, and Hunan provinces, and moreover, a national germplasm resource reserve of M. armatus has been established in Fujian Province (Jiang, 2018). Furthermore, aquaculture of M. armatus has intensified in several provinces of China, greatly facilitating its artificial breeding (Han et al., 2017;Han et al., 2019).
Materials and methods
Sampling M. armatus from seven regions in southern China and one region in Vietnam were sampled in 2021. Sample sets collected from a single region were considered a population. In China, we collected M. armatus from four Provinces. Two populations were sampled from Guangdong Province, three populations from Guangxi Province, one population from Jiangxi Province and one population from Hainan Province (Table 1, Figure 1). What should be noted is that Hainan Province is located in an independent island, spatially separated with the mainland China by Qiongzhou Strait. Additionally, Guangxi Province is geographically adjacent with Vietnam. The Vietnam samples were collected from Guangzhou Lanhai Marine Technology Co., Ltd, Guangzhou city, Guangdong Province, China (latitude: 23.21°N , longitude: 113.47°E). More specifically, we collected 12 and 16 samples in GDHY and GDQY regions from Guangdong Province, respectively; 18, 18, 15 samples in GXBS, GXLZ and GXYL regions from Guangxi Province, respectively; 7 samples in HNHK region from Hainan Province, 13 samples in JXGZ region from Jiangxi Province, 15 samples in YN region from Vietnam. A total of 114 samples from eight populations were collected in this study (Table 1).
DNA extraction and sequencing
A 30-40 mg fin clip was collected and preserved in 95% ethanol at -20°C for later genomic sequencing. Both DNA sequencing and assembly were performed by Science Corporation of Gene (SCGene) Co., Ltd, Guangzhou city, Guangdong Province, China. Total genomic DNA was extracted with a Tissue DNA Kit (OMEGA E.Z.N.A) following the manufacturer's protocol. The quality and quantity of genomic DNAs were determined by 0.8% agarose gel electrophoresis and NanoDrop 2000 spectrometer (Thermo Scientific, Waltham, MA, USA). High-quality genomic DNAs were used to construct a paired-end sequencing library with an insert size of 450 bp. The library was then sequenced on an Illumina novaseq 6000 (Illumina, USA) high-throughput sequencing platform in accordance with the manufacturer's instructions.
Sequence assembly
Adaptors and low-quality reads were filtered using Trimmomatic v0.39 (Bolger et al., 2014), resulting the raw reads, which number were between 3,876,630 and 51,352,359. Paired-end reads of 2 × 150 bp were generated, and the quality threshold was set to Q20. Qualified reads were then compared by BWA (Li and Durbin, 2009) employing setting of 0 match and 0 gap. Afterwards, the obtained reads were assembled using SOAPdenovo (Luo et al., 2012). To verify the correctness of the assembly, assembled whole nrDNA sequences were amplified and sequenced by Sanger sequencing. The annotation of assembled nrDNAs was performed using blastn in NCBI with closely related and well-annotated sequences, manually verified afterwards. Finally, respective region sequences were generated, including 18S, ITS1, 5.8S, ITS2 and 28S.
Data analysis
Standard diversity indices, including number of haplotypes (N h ), haplotype diversity (hd) and nucleotide diversity (p), were calculated using DnaSP v 5.10 (Librado and Rozas, 2009) constructed with PopArt (Clement et al., 2002;Leigh and Bryant, 2015) to investigate genealogical relationships among populations inferred from the concatenated sequences of 18S and ITS2. Hierarchical analysis of molecular variance (AMOVA) was performed to detect genetic variations within and among different regions using Arlequin v 3.5 (AMOVA; Excoffier and Lischer, 2010), with statistical significance determined by 1,000 permutations. To quantify the genetic dissimilarity between populations, population pairwise genetic distances (F ST ) were also calculated by Arlequin v 3.5 (Excoffier and Lischer, 2010). The genetic distance among different populations was used to construct the UPGMA tree in MEGA X (Kumar et al., 2018).
Characteristics of nrDNA
Variations of the length of either whole nrDNA sequences or respective region sequences were slight, see details in Table 2. To be specific, the length of 18S and 5.8S of all individuals were all the same, with 1,840 bp and 154 bp, respectively. Especially, all sequences of 5.8S were completely identical. In addition, there was also very little variation in the length of 28S, with only a 2 bp difference in length. In the 28S alignment of all individuals, 30 variable sites were found, accounting for 0.85%, compared with 3 in 18S (0.16%), 20 in ITS2 (3.37%) and 66 in ITS1 (5.77%). It is worth to note that although the proportion of variable sites in the 28S alignment was low, the majority of variable sites, 21 out of 30, were singletons variable sites. Clearly, ITS1 and ITS2 were regions with greater variability. Furthermore, we found that many indels occurred in the alignment of ITS1, for example, the longest indel was 14 bp in length, located at 1020 nt -1043 nt. The GC content of whole nrDNA in all populations was similar, between 62.5% and 62.7%, showing a high GC content.
More than 5% indels and 5.77% variable sites occurred in the alignment of ITS1, directly affecting the accuracy of subsequent analyses. Additionally, in the alignment of 28S, too many singletons variable sites were detected, which is also thought to negatively impact the molecular analyses, such as phylogenetic analysis and population genetics (Dress et al., 2008;Steenwyk et al., 2020). Besides, the 5.8S sequences of all individuals were the same, we thus decided to use the concatenated sequence of 18S and ITS2 for all later analyses.
Genetic diversity
Eleven nrDNA haplotypes were identified from the concatenated sequences of 18S and ITS2, consistently inferred from DnaSP results and TCS network (Table 1, Figure 2). The overall genetic diversity was relatively high (0.561 ± 0.047) while nucleotide diversity was low (0.00199 ± 0.00028) ( Table 1). The highest haplotype diversity was found in GDHY region in China, followed by HNHK, YN, and GDQY region. Most regions harbored more than one haplotype, except GXLZ and JXGZ region. In a word, we found eight haplotypes in China and three haplotypes in TCS network based on the concatenated sequences of 18S and ITS2 of Mastacembelus armatus. Each tick represents a mutational step. nrDNA haplotypes are named as in Table 1. Circle size is proportional to the haplotype frequency. Vietnam (Table 1, Figure 2), and H1, H2 and H3 were the main haplotypes, making three different Clusters ( Figure 2). Amongst, H1 and H2 were the main haplotypes found in China and H3 in Vietnam. Moreover, in China, the genetic diversity of M. armatus in Guangxi province was the highest (0.462 ± 0.060), consisting of two main haplotypes H1 and H2, while only H1 was detected in Jiangxi Province, predominant H1 in Guangdong Province, and predominant H2 in Hainan Province.
Overall, more males were found in our study. In Vietnam, the number of females exceeded males, compared to Chinese populations with dominant males.
Population structure
In general, AMOVA analyses presented that there was high genetic differentiation in overall populations of M. armatus (FST = 0.882, p < 0.001), and the largest level of genetic differentiation was found among populations (88.19%) ( Table 3). The similar pattern was shown at the province level (FST = 0.796, p < 0.001) and at the country level (FST = 0.886, p < 0.001) as well (Table 3).
Pairwise F ST comparisons revealed that most populations were significantly differentiated (Table 4). Particularly, pronounced differences among different provinces were found in a range of F ST between 0.210 and 0.970, all being statistically significant (Table 5), in agreement with AMOVA result that most variations were from among provinces (79.59%) at the province level (Table 3). In addition, the UPGMA tree demonstrated that all populations were divided into three groups (Figure 3). Cluster I consisted of all populations from Jiangxi and Guangdong Province and two of three populations of Guangxi Province (GXLZ and GXBS). While the other population of Guangxi Province (GXYL) was clustered with the population from Hainan Province, forming Cluster II, consistent with the low F ST value between them, only 0.004. The Vietnam population was a single group, Cluster III. Overall, Cluster I and Cluster II were grouped together, making the Chinese Cluster. Furthermore, the genetic differentiation of Clusters between China and Vietnam was also distinctive, characterized with different nrDNA haplotypes (Table 1, Figure 2) and distinct phylogenetic Clusters (Figure 3), as well as shown in the AMOVA analysis, with 88.59% variances from among countries (Table 3).
Discussion Genetic diversity and population structure
Previous population studies (Wang et al., 2012;Zou, 2013;Chen, 2014;Yang et al., 2016;Lin, 2017;Jiang, 2018;Gao et al., 2022) all showed high genetic diversity of M. armatus populations in China, but the haplotype diversity in our result (hd=0.434) is lower than that in other population studies, for example, Wang et al. (2012), Jiang (Jiang, 2018) and Gao et al. (2022) reported the haplotype diversity in China was 0.965, 0.768 and 0.895, respectively. This may be due to different markers employed and the small sample size in our study compared to other studies. Additionally, the overall genetic diversity of M. armatus populations in this study was over 0.5, showing a relative high diversity (Grant and Bowen, 1998). In general, combing our data and literature data, the genetic diversity of M. armatus maintains at a relatively high level, although the size of natural population is declining as reported (Hossain et al., 2012;Rahman et al., 2016;Xue, 2018). Furthermore, genetic differentiation between most populations is pronounced, regardless of spatial scale. Guangxi Province harbored the highest genetic diversity. Amongst, GXLZ population and GXBS population were grouped together, in agreement with low F ST value between them. GXYL population, nevertheless, was in a distinct cluster, either shown in TCS network or UPGMA tree. This suggests not only high genetic diversity but also high genetic differentiation among Guangxi Province populations. Surprisingly, GXYL population was clustered together with HNHK population. Further, the low F ST value between Hainan Province and the mainland means that gene flow occurred between them, despite the fact that Hainan Province is an isolated island and separated from the mainland by sea waters. In contrast, most Chinese population studies revealed that the population from Hainan Province was genetically partitioned with the mainland populations (Yang et al., 2016;Lin, 2017;Jiang, 2018). Gene flow between the population of Hainan Province and the mainland, revealed by nuclear makers in our study, is congruent with the latest research of Gao et al. (2022) discovered by mitochondrial markers. This gene flow is probably as a consequence of geological changes during glacial periods, when the exposed continental shelf connected Hainan and the mainland China (Sun et al., 2000;Voris, 2000). However, Guangxi Province and Vietnam possess completely different nrDNA haplotypes, despite their proximity. This is mainly because that there are no rivers connecting Guangxi Province and Vietnam.
Sex ratio
It is well known that the population size is closely related with sex ratio, and an unbalanced sex ratio may significantly reduce the effective size of populations (Dubreuil et al., 2010). We checked the sex of each individual after sampling, and we found that all sampling sites in China are male dominated, except GDQY. However, Vietnam is female dominated. In fact, the sex ratio of M. armatus in the nature is on debate. One research reported female dominance trend (Panikkar et al., 2013), while the other one showed an equal proportion of male and female (Serajuddin and Pathak, 2012). We need more natural population data to clarify the sex ratio of M. armatus in the future, in order to provide more references for the further biological conservation. In addition, we discovered that the sex ratio of M. armatus is very unbalanced during the artificial breeding process, characterized with significant female dominance. For example, the proportion of females can reach 86.33% in the report of Xue et al. (2021b). This suggests that there are differences and difficulties in sexual differentiation of M. armatus, which may also occur in nature. At present, although some (Xue et al., 2020;Xue et al., 2021b) and Y chromosome differential (Xue et al., 2021a). The underlying sex determination mechanisms are still unclear, which also make challenges to investigate the sex ratio of M. armatus.
Biological conservation
The biological conservation of species heavily depends on their genetic diversity. Understanding intraspecific genetic diversity and differentiation can help us take scientific and effective measures to protect threatened or endangered animals. Combing the results of our study with other previous studies, we found that the present genetic diversity of M. armatus populations is not low, indicating that it is not too late to take action to protect them, so that their genetic diversity can remain high. For example, precisely constructing more reserves for M. armatus based on the distribution of different genotypes/haplotypes, strengthening the investigation and protection of M. armatus in the isolated island, Hainan Province, which may provide crucial clues for their expansion, and exploring artificial breeding techniques to better conserve the germplasm resource.
Conclusion
In conclusion, we researched the genetic diversity and population structure of M. armatus in China and Vietnam, based on nuclear ribosomal DNA markers. The genetic diversity and differentiation of M. armatus populations were at a relatively high level according to the data from this study and previous studies. Three Clusters were classified according to the genetic distance between each population, characterized with two Clusters in China and a distinct Cluster in Vietnam. In particular, we found that gene flow occurred between an isolated island and the mainland China based on the low F ST value between them.
Data availability statement
The original contributions presented in the study are publicly available. This data can be found here: NCBI, OP847241 -OP847354.
Ethics statement
The study was approved by the Laboratory Animal Ethics Committee of Pearl River Fisheries Research Institute, CAFS (number: LAEC-PRFRI-20201219). | 4,032 | 2023-03-06T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Entanglement-storage units
We introduce a protocol based on optimal control to drive many-body quantum systems into long-lived entangled states, protected from decoherence by large energy gaps, without requiring any a priori knowledge of the system. With this approach it is possible to implement scalable entanglement-storage units. We test the protocol in the Lipkin–Meshkov–Glick model, a prototype many-body quantum system that describes different experimental setups, and in the ordered Ising chain, a model representing a possible implementation of a quantum bus.
2 quantum cryptography or quantum computers rely on entanglement as a crucial resource [1]. Within the current state of the art, promising candidates for truly scalable quantum information processors are considered architectures that interface hardware components playing different roles such as, for example, solid-state systems as stationary qubits combined in hybrid architectures with optical devices [2]. In this scenario, the stationary qubits are a collection of engineered qubits with desired properties, as decoupled as possible from one another to prevent errors. However, this architecture is somehow unfavorable for the creation and conservation of entanglement. Indeed, it would be desirable to have hardware where entanglement is 'naturally' present and that can be prepared in a highly entangled state that persists without any external control: the closest quantum entanglement analogue of a classical information memory support, i.e. an entanglement-storage unit (ESU). Such hardware once prepared could be used at later times (alone or with duplicates)-once the desired kind of entanglement has been distilled-to perform quantum information protocols [1].
The biggest challenge in the development of an ESU is entanglement frailty: it is strongly affected by the detrimental presence of decoherence [1]. Furthermore, the search for a proper system to build an ESU is undermined by the increasing complexity of quantum systems with a growing number of components, which makes entanglement more frail, more difficult to characterize, to create and to control [3]. Moreover, given a many-body quantum system, the search for a state with the desired properties is an exponentially hard task in the system size. Nevertheless, in many-body quantum systems entanglement arises naturally: for example-when undergoing a quantum phase transition-in the proximity of a critical point the amount of entanglement possessed by the ground state scales with the size [3,4]. Unfortunately, due to the closure of the energy gap at the critical point, the ground state is an extremely frail state: even very small perturbations might destroy it, inducing excitations toward other states. However, a different strategy might be successful, corroborated also by very recent investigations on the entanglement properties of the eigenstates of many-body Hamiltonians, where it has been shown that in some cases they are characterized by entanglement growing with the system size [5,6].
In this paper, we show that by means of a recently developed optimal control technique [7,8] it is possible to identify and prepare a many-body quantum system in robust, long-lived entangled states (ESU states). More importantly, we drive the system toward ESU states without the need for any a priori information on the system, either about the eigenstates or about the energy spectrum. Indeed, we do not first solve the complete spectrum and eigenstates, which is an exponentially difficult problem in the system size. Recently, optimal control was used to drive quantum systems in entangled states or to improve the generation of entanglement [9]. However, here we have in mind a different scenario: to exploit the control to steer a system into a highly entangled state that is stable and robust even after switching off the control (see figure 1). Moreover, we want to outline the fact that we do not choose the goal state, but only its properties. In the following, we show that ESU states are gap-protected entangled eigenstates of the system Hamiltonian in the absence of the control, and that for an experimentally relevant model it is indeed possible to identify and drive the system into the ESU states. We show that the ESU states, although not characterized by the maximal entanglement sustainable by the system, are characterized by entanglement that grows with the system size. Once a good ESU state has been detected, due to its robustness it can be stored, characterized and thus used for later quantum information processing. ESU protocol: a system is initially in a reference state |ψ(−T ) , e.g. the ground state, and is optimally driven via a control field (t) in an entangled eigenstate |ψ(0) , protected from decoherence by an energy gap. S(t) represents a generic measure of entanglement.
Here we provide an important example of this approach, based on the Lipkin-Meshkov-Glick (LMG) model [10], a system realizable in different experimental setups [2,11]; we prepare an ESU maximizing the von Neumann entropy of a bipartition of the system and we model the action of the surrounding environment with noise terms in the Hamiltonian. However, our protocol is compatible with different entanglement measures and different models, such as the concurrence between the extremal spins in an Ising chain, see section 5. Note that with a straightforward generalization it can be adapted to a full description of open quantum systems [12].
The paper is organized as follows. In section 2, the general protocol to steer a system onto the ESU state is presented; in section 3, we consider the application of the protocol to the LMG model; in section 4, we discuss the effect of a telegraphic classical noise on the protocol; in section 5, we test the protocol into an Ising spin chain, and finally in section 6, we present the conclusions.
Entanglement-storage unit (ESU) protocol
As depicted in figure 1, we consider the general scenario of a system represented by a tunable Hamiltonian H [ ], where (t) is the control field, and initialized in a state |ψ in that can be easily prepared. We assume that the control field (t) can be modulated only in the finite time interval [−T, 0]; outside of this interval, for t < −T and 0 < t, we impose ≡˜ (e.g. absence of control). According to our protocol, at the end of the control procedure, i.e. once the control field is brought back to the value˜ , the system has prepared been in a state with desired properties (for instance, high entanglement), stable in the absence of the control and robust against noise and perturbations.
Optimal control has already been used to enhance a given desired property without targeting an a priori known state; unfortunately, the results of such an optimization are usually fragile and ideally require a continuous application of the control in order to be stabilized [9]. However, in practical situations, a continuous application of control can be unrealistic, being 4 either simply impossible or too expensive in terms of resources. An example is the initialization of a quantum register that has to be physically moved into different spatial locations (such as a portable memory support), or if the control field used to initialize has to be switched on and off in order to manipulate different parts of the apparatus; in such situations, the register should indeed also be stable once disconnected from the device employed for its initialization. Consequently, in certain applications, a procedure capable of preparing quantum targets intrinsically stable even in the absence of sustained external manipulations is not only highly desirable but also crucial. The main contribution of our work is to move a step forward in this direction, proposing a flexible recipe to improve the stability of the outcome of a generic optimization process.
The simple idea behind our method is the following. As is well known, in a closed system the evolution of an arbitrary state is driven by the Schrödinger equation Assuming that, as in the absence of control, the Hamiltonian is constant H (t) = H [˜ ], we can evaluate the extent of deviation induced by the time evolution in an infinitesimal time dt after switching off the control [13]: |ψ(t) correspond, respectively, to the energy fluctuations and the energy of the Hamiltonian in the absence of control. Then from equation (1) it is clear that an arbitrary state is stabilized by minimizing the quantity Ẽ . In particular, by reaching the condition Ẽ = 0, the system is also prepared in an eigenstate of Our protocol relies on the use of optimal control implemented through the Chopped RAndom Basis (CRAB) technique [7,8]. The CRAB method consists of expanding the control field onto a truncated basis (e.g. a truncated Fourier series) and in minimizing an appropriate cost function with respect to the weights of each component of the chopped basis (see [7,8] for details of the method).
In particular, for the ESU protocol a CRAB optimization is performed with the goal of minimizing the cost function F: where S represents a measure of entanglement, λ is a Lagrange multiplier and the cost function is evaluated on the optimized evolved state |ψ(0) produced with a control process active in the time interval [−T, 0]. As discussed previously and shown in the following, the inclusion in F of the constraint on energy fluctuations is the crucial ingredient to stabilize the result of the optimization also for times t > 0, that is, once the control has been switched off. We conclude this section by stressing a couple of important advantages of our protocol with respect to other possible approaches to the problem, for instance evaluating all the eigenstates of the system and picking from among them the state(s) with the desired properties. First, in our protocol we never compute the whole spectrum of the system, but simply require evaluation of the energy and the energy fluctuations into the evolved state, see equation (2); therefore, our procedure can also be applied to situations in which it is not possible to compute all the eigenstates of the Hamiltonian (e.g. many-body non-integrable systems or DMRG simulations or experiments including a feedback loop). Furthermore, it can occur that none of the eigenstates of the system has the desired property we would like to enhance; then by simply considering the eigenstates one could not gain any advantage. In contrast, with our protocol in this situation it is possible to identify states that, even though different from exact eigenstates, still show an enhanced robustness, such as the optimal state found in the considered scenario, see section 5.
ESU and the Lipkin-Meshkov-Glick model
We decided to apply the protocol to the Lipkin-Meshkov-Glick model [10] because it represents an interesting prototype of the challenge we address: it describes different experimental setups [2,11], and the entanglement properties of the eigenstates are in general not known. Indeed, the entanglement properties of the eigenstates of one-dimensional many-body quantum systems have been related to the corresponding conformal field theories [5]; however, for the LMG model, to our knowledge, this study has never been performed and a conformal theory is not available [15]. Finally, the optimal control problem we address is highly non-trivial as the control field is global and space independent with no single-site addressability [9].
The LMG Hamiltonian describes an ensemble of spins with infinite-range interaction and is written as [14] where N is the total number of spins, σ α i 's (α = x, y, z ) are the Pauli matrices on the ith site and C is a constant measuring the intensity of the spin-spin interaction. By introducing the total spin operator J = i σ i /2, the Hamiltonian can be rewritten, apart from an additive constant and a constant factor, as (from now on, we set C = 1 andh = 1). The symmetries of the Hamiltonian imply that the dynamics is restricted to subspaces of fixed total magnetization J and fixed parity of the projection J z ; a convenient basis for such subspaces is represented by the Dicke states |J, J z with −J < J z < J [16] 2 . In the thermodynamical limit, the system undergoes a second-order QPT from a quantum paramagnet to a quantum ferromagnet at a critical value of the transverse field | c | = 1. There is no restriction on the reference value˜ and on the initial state |ψ in : we choose˜ 1, corresponding to the paramagnetic phase, and as the initial state |ψ in , the ground state of H [˜ ], i.e. the separable state in which all the spins are polarized along the positive z-axis [2]. A convenient measure of the entanglement in the LMG model is given by the von Neumann entropy S L ,N = −Tr(ρ L ,N log 2 ρ L ,N ) associated with the reduced density matrix ρ L ,N of a block of L spins out of the total number N , which gives a measure of the entanglement present between two partitions of a quantum system [16]. In our analysis we consider two equal partitions, i.e. S ≡ S N /2,N . Note that the maximally entangled state at a fixed size N is given by ρ M = 1/(N /2 + 1) and S ρ M = log 2 (N /2 + 1) [16]. In figure 2, we report the entanglement S N /2,N of the eigenstates deep inside the paramagnetic phase at˜ = 10, for systems of different sizes. Clearly, also far from the critical point = 1, many eigenstates possess a remarkable amount of entanglement that scales with the system size.
The effect is shown more clearly in figure 3, where the entanglement of the central eigenstate (red full circles) at˜ = 10 is compared with the entanglement of the ground state at the critical point (blue full diamonds). Both the sets of data show a logarithmic scaling with the size, but the entanglement of the central eigenstate is systematically higher and grows more rapidly.
Dynamics. We initialize the system in the non-entangled ground state of the Hamiltonian state |ψ in does not evolve apart from a phase factor. After the action of the CRAB-optimized driving field (t) for t ∈ [−T, 0] the state is prepared in |ψ(0) (a typical optimal pulse is shown in the inset of figure 5), and we observe the evolution of the state over times t > 0. The behavior of the entanglement is shown in figure 4 for different values of the weighing factor λ and N = 64. For λ = 0 highly entangled states are produced; however, the entanglement S(t) oscillates indefinitely with the time. In contrast, if the energy fluctuations are included in the cost function (λ = 0), the optimal driving field steers the system into entangled eigenstates of H [˜ ], as confirmed by the absence of the oscillations in the entanglement and by the entanglement eigenstate reference values (blue empty circles). These results are confirmed by the survival probability in the initial state P(t) = | ψ(0)|ψ(t) | 2 reported in figure 5: the state prepared with λ = 0 decays over very fast time scales τ 0 , while for λ = 0 it remains close to unity for very long times τ λ τ 0 . The small residual oscillations for N = 64 and λ = 1.2 are due to the fact that in this case the optimization leads to a state corresponding to an eigenstate up to 98%. We repeated the optimal preparation for different system sizes and initial states, and show the entanglement of the optimized states for λ = 0 (green empty triangles) and λ = 0 ( Ẽ /Ẽ < 0.05, P > 95% red empty circles) for different system sizes in figure 3. In all cases a logarithmic scaling with the size is achieved.
Random telegraph noise
A reliable ESU should be robust against external noise and decoherence even when the control is switched off, in such a way that it could be used for subsequent quantum operations. In order to test the robustness of the optimized states, we model the effect of decoherence by adding a random telegraph noise and we monitor the time evolution in such a noisy environment [1]. In particular, we study the evolution induced by the Hamiltonian where α(t), β(t) are random functions of the time with a flat distribution in [−I j , I j ] ( j = α, β), changing the random value every typical time 1/ν. The case I α = I β = 0 corresponds to a noiseless evolution. The first important observation is that the frequency ν of the signal fluctuations is crucial in determining its effects [17]. Indeed in figure 6, the survival probability P(t) is plotted as a function of the time in the presence of a strong noise, I α = I β = 0.2, for a system of N = 64 spins and for a given initial optimal state obtained with λ = 1.8 (see figure 4). When ν is either too low (empty circles) or too high (full diamonds) the effect of the noise is reduced; however, around a resonant frequency ν R (dashed line with crosses) its effect is enhanced and the state is quickly destroyed. We checked that the resonant frequency is the same for different eigenvalues, different sizes and different noise strengths (data not shown), reflecting the fact that in the paramagnetic phase (˜ 1) the gap separating the eigenstates is proportional to˜ independently of the size of the system and of the state itself, see equation (4). Therefore, we analyze this worst-case scenario, setting ν = ν R from now on. In figure 7, we compare the survival probability P(t) for three instances of the disorder at the resonant frequency with an intensity of the disorder I α = I β = 0.01. The noise-induce dynamics of the states obtained optimizing only with respect to the entanglement (i.e. setting λ = 0, full symbols in figure 7) drastically depends on the (in general unknown) details of the noise affecting the system; thus, such states cannot be used as ESU. Conversely, the states prepared with λ = 0 (empty symbols in figure 7) turn out to be stable, noise-independent and long-living entanglement. Finally, in figure 8 we study the decay times of the survival probability P(t) studying the time T 0.8 needed to drop below a given threshold P min = 0.8 as a function of the system size N and of the intensity of the disorder I = I α = I β (inset). These results clearly show that T 0.8 for ESU states is almost independent of the system size, reflecting the fact that the energy gaps in this region of the spectrum are mostly size independent. Note that, in contrast, T 0.8 for maximally entangled states decays linearly with the system size and that there are more than four orders of magnitude of difference in the decay times τ λ and τ 0 . Finally, the inset of figure 8 shows that the scaling of T 0.8 with the noise strength for ESU states is approximately a power law and again depends very weakly on the system size N .
The Ising model: concurrence between extremal spins
In our previous discussion, we focused our attention on the optimization of the von Neumann entropy of eigenstates other than the ground state of the LMG model, in order to show the effectiveness of our protocol in controlling the dynamics and unexplored properties of manybody systems.
However, aiming at demonstrating the generality of the method, in this section we would like to present briefly the application of our protocol to a different situation, closer to the typical problems encountered in quantum information: in particular, we show how it is possible to stabilize the concurrence between the extremal spins of an open Ising chain.
The Hamiltonian of the ordered one-dimensional Ising model with nearest-neighbor interaction is given by where the transverse field (t) is our control field. We assume that the system can be easily prepared in the ground state at a large value of the control field˜ = 10, in which all the spins are polarized along the positive z-direction. The aim of the control is to enhance the concurrence between the first and the N th spin of the chain, possibly stabilizing the state. The concurrence between two spins is defined as S = max{0, e 1 − e 2 − e 3 − e 4 }, where the e i s are the eigenvalues in decreasing order of the Hermitian matrix R = √ ρρ √ ρ, ρ is the reduced density matrix of the two extremal spins andρ = (σ y ⊗ σ y )ρ * (σ y ⊗ σ y ) is the spin-flipped state [18]. At a large value of the transverse field, the eigenstates of the Hamiltonian are the classical states represented by all the possible up-down combinations of N spins, and states with the same numbers of flipped spins, although in different positions, are degenerate. A naive approach to build stable entangled states would then require a search for possibly entangled states in each degenerate subspace at a given energy. Such a search, however, represents a highly non-trivial task, due to the strong constraint imposed by requiring non-vanishing concurrence: again a suitable recipe for such a search should be provided and is non-trivial to find. In contrast, our protocol proposes an answer to the task without requiring any diagonalization, while automatically performing the search, therefore offering a clear advantage.
We perform a CRAB optimization in the time interval [−T, 0] minimizing the function F(λ) |ψ(0) = −S + λ Ẽ , in which now S is the concurrence; then at the time t = 0 the control is switched off, the value of the field is kept constant ( (t) =˜ for t > 0), and we observe the evolution of the optimized state. In figure 9, we show the behavior of the concurrence S(t) and of the survival probability P(t) = (Tr √ ρ(t)ρ(0) √ ρ(t)) 2 , excluding (λ = 0, black continuous line) and including (λ = 0.1, red dot-dashed line) the energy fluctuation term in the optimization procedure. As shown in the picture, although, as expected, the concurrence is smaller when λ = 0, the survival probability is stabilized by a factor bigger than 50 in time with respect to the λ = 0 case.
Conclusion
Exploiting optimal control we proposed a method to steer a system into a priori unknown eigenstates satisfying the desired properties. We demonstrated, on a particular system, that 12 this protocol can be effectively used to build long-lived entangled states with many-body systems, indicating a possible implementation of an ESU scalable with the system size. The method presented is compatible with different models (e.g. LMG and Ising) and measures of entanglement (e.g. von Neumann entropy and concurrence) and can be extended to any other property one is interested in, such as for example the squeezing of the target state [12]. It can be applied to different systems with a priori unknown properties: optimal control will select the states (if any) satisfying the desired property and robust to system perturbations. We stress that an adiabatic strategy is absolutely ineffective for this purpose, as transitions between different eigenstates are forbidden. Applying this protocol to the full open-dynamics description of the system, e.g. via a CRAB optimization of the Lindblad dynamics as done in [19], will result in an optimal search for a decoherence-free subspace (DFS) with desired properties [20]. If no DFS exists, the optimization would lead the system to an eigenstate of the superoperator with the longest lifetime and the desired properties [12]. Although the state so prepared may be unstable over long times, it represents the best and most robust state attainable, and additional (weak) control might be used to preserve its stability. Finally, working with excited states would reduce finite-temperature effects, relaxing low-temperature working-point conditions, simplifying the experimental requirements to build a reliable ESU. | 5,559.4 | 2011-08-16T00:00:00.000 | [
"Physics"
] |
Superheavy thermal dark matter and primordial asymmetries
The early universe could feature multiple reheating events, leading to jumps in the visible sector entropy density that dilute both particle asymmetries and the number density of frozen-out states. In fact, late time entropy jumps are usually required in models of Affleck-Dine baryogenesis, which typically produces an initial particle-antiparticle asymmetry that is much too large. An important consequence of late time dilution, is that a smaller dark matter annihilation cross section is needed to obtain the observed dark matter relic density. For cosmologies with high scale baryogenesis, followed by radiation-dominated dark matter freeze-out, we show that the perturbative unitarity mass bound on thermal relic dark matter is relaxed to 1010 GeV. We proceed to study superheavy asym-metric dark matter models, made possible by a sizable entropy injection after dark matter freeze-out, and identify how the Affleck-Dine mechanism would generate the baryon and dark asymmetries.
Introduction
There are two well-motivated possibilities for generating the baryon asymmetry at high temperatures: right-handed neutrino leptogenesis and Affleck-Dine baryogenesis [1][2][3]. The Affleck-Dine mechanism utilises the fact that scalar potentials in supersymmetric (SUSY) models have nearly "flat directions". In the early universe, gauge invariant combinations of scalar fields that carry an approximately conserved global quantum number (such as baryon B or baryon-minus-lepton B − L number) become initially displaced to large field values. Once the Hubble parameter drops below a given mass scale, the associated scalar field will roll towards its minimum. If the initially displaced B-charged scalar fields have baryon number and charge-parity (CP ) violating potentials, the evolution of the field to its minimum leads to the growth of a large baryon asymmetry.
Notably, Affleck-Dine baryogenesis often leads to a baryon asymmetry much larger than presently observed. Such a large baryon asymmetry must be subsequently diluted, usually through an injection of entropy into the thermal bath and/or strong washout from sphaleron processes during a phase transition. Fortunately such entropy injection events are ubiquitous in UV completions of the Standard Model. In particular string theory typically introduces a large number of gravitationally coupled scalars which decay at late cosmological times, diluting previous particle asymmetries and relic abundances [4][5][6]. This motivates serious consideration of the possibility that at some point in cosmological history, there were large dilutions in asymmetries and particle number due to entropy injection. The occurrence of these large entropy dumps can significantly impact what is regarded as a target range for model building when considering the appropriate freeze-out abundance of dark matter or the magnitude of particle-antiparticle asymmetries.
JHEP02(2017)119
Griest and Kamionkowski [7] argued that if the dark matter is ever in thermal equilibrium with the Standard Model bath, and its freeze-out annihilation cross section is required to be perturbative, then this restricts the dark matter mass to be m DM 100 TeV. An important caveat to this conclusion is that subsequent entropy production can dilute the abundance of frozen out states. Here we show that if baryogenesis occurs prior to dark matter freeze-out (as common in Affleck-Dine models), and the dark matter relic density is diluted by a subsequent entropy dump, then the bound on thermal relic dark matter from perturbative unitarity is relaxed to m DM 10 10 GeV. This relation makes manifest an intriguing connection between high scale baryogenesis and the maximum mass of freeze-out dark matter, assuming the dark matter abundance is diluted by an entropy injection.
Motivated in part by the link between high scale baryogenesis and heavy dark matter, we proceed to study models of "superheavy asymmetric dark matter," in which the dark matter relic density is determined by a particle asymmetry. We show that the presence of a moderate entropy injection, which simultaneously dilutes the dark matter number density, and asymmetries in the baryonic and dark sectors, naturally accommodates superheavy asymmetric dark matter. Intriguingly, it has been argued that the accumulation of asymmetric dark matter with mass 0.1-100 PeV in stellar objects can lead to pulsar collapse in the Milky Way galactic center [8,9] and ignition of type-Ia supernovae [10] (see [11] for related work), both of which are open problems in astrophysics.
The paper is structured as follows; we begin in section 2 by deriving the unitarity bound on the dark matter mass for the case of high scale baryogenesis and a period of entropy injection following dark matter freeze-out. In section 3 we consider superheavy asymmetric dark matter models and show that sizable entropy injections which dilute both the frozen out dark matter abundance, and baryon and dark matter asymmetries permits superheavy asymmetric dark matter. Section 4 quantifies the magnitude of entropy dumps from decaying states in the early universe. Large baryon asymmetries from high scale (Affleck Dine) baryogenesis motivates large entropy dumps and in section 5 we discuss specific implementations within our framework, with focus on generating modest hierarchies between the baryon and dark matter asymmetries. In section 6 we present some concluding remarks and comment on possible connections to models of High Scale Supersymmetry.
Dark matter mass upper bound for freeze-out after baryogenesis
While the specific origin of the matter-antimatter asymmetry in the universe is presently unknown, the broad features of primordial asymmetry generation are understood. If a state carries a baryon number B, in the presence of out-of-equilibrium effects which violate B and CP , an asymmetry can arise such that there is a net number density between the baryons and antibaryons where s is defined as the entropy density of the thermal bath and n b , nb are the number densities of baryons and antibaryons. Analogous asymmetries can arise for other global charges, and such asymmetries may also be connected to dark matter [12,13].
JHEP02(2017)119
It is notable that Affleck-Dine (AD) baryogenesis often leads to particle asymmetries as large as η initial B ∼ O(1), but generally no larger [14]. Indeed, as discussed in section 5, large initial asymmetries are the typical expectation. Thus in order for the AD mechanism to yield the observed baryon asymmetry η now B ∼ 10 −10 , one requires subsequent dilutions by a factor ζ ∼ η initial B /η now B . A dilution factor ζ can arise, for example, if a heavy state decays at late times into the primordial thermal bath ζ ≡ s before /s after , (2.2) where "before" and "after" indicate the entropy density of the thermal bath immediately before and after the decay of the heavy state. We shall be initially agnostic about the precise source of this entropy injection, simply parameterizing it with ζ, but we will discuss the provenance and magnitude of ζ in section 4.
Since the freeze-out abundance Y ≡ n/s depends upon the entropy density s relative to the frozen out number density n, a late entropy injection can dilute the dark matter abundance by a potentially large factor. Crucially, observe that if this dilution occurs after dark matter has frozen out to a fixed abundance in the early universe, then the dark matter abundance will also be diluted by a factor ζ where the observed value is Ω Relic DM h 2 0.12 [15]. As we will see, the possibility of late time entropy injection is particularly salient for heavy dark matter.
For simplicity, we will restrict our attention to the case that dark matter freezes out from a radiation-dominated universe, 1 with an abundance that is later diluted by a factor ζ. The evolutions of particle abundances Y are customarily tracked with respect to the dimensionless temperature variable x ≡ m DM /T . Assuming the particles are stable over the lifetime of the universe, these abundances remain constant after particle annihilations cease and the particle has "frozen-out." For weakly interacting particles, this typically occurs for x ∼ 10. The self-annihilation cross-section of dark matter can be expanded in powers of inverse x: σv ≡ n=0 σ n x −n = σ 0 + σ 1 x −1 + O(x −2 ), where these give the s-wave, p-wave, etc. annihilation components. 2 We can often approximate σv by the lowest order non-vanishing term in its expansion. The temperature at which dark matter annihilations freeze-out is well described by [19] (see also [20]) in terms of K ≡ a(n + 1) π
45
√ g M Pl m DM σ n , where a 0.145g/g S for dark matter with g internal degrees of freedom, for a thermal bath with g massless degrees of freedom, and g S entropy-normalized massless degrees of freedom, as defined in [20]. Note, for the Standard Model g = g S 107 at temperatures in excess of 200 GeV. 1 More generally, dark matter may decouple during matter domination. During matter-domination H ∝ T 4 (rather than H ∝ T 2 ) [16,17], which substantially alters the freeze-out calculation. 2 If the mediators of the annihilations are light compared to mDM this expansion is not always valid [18].
JHEP02(2017)119
Here we consider the scenario that particle dark matter reproduces the observed dark matter relic abundance through freeze-out to an over-abundance during radiation domination, followed by a period of dilution. The relic density of freeze-out dark matter followed by subsequent entropy injection (cf. eq. (2.3)) is The numerical prefactors in eq. (2.5) are for Majorana fermion dark matter, although this can be easily adapted, e.g. for a Dirac fermion, by multiplying by a factor of two. Next we specify the dark matter annihilation cross-section σ n in eq. (2.5) and calculate the dark matter relic density as a function of dark matter mass m DM , coupling strength α DM , and dilution factor ζ. We take the simplest scenario of dark matter freeze-out via s-wave annihilations (n = 0), as occurs if the dark matter annihilations to quarks through a vector mediator V . Specifically, suppose that the mass of the dark mediator is the same scale as the dark matter, m V ∼ m DM and parameterize the s-wave cross section as follows In this case the relic dark matter abundance (for n = 0) is (2.7) The size of ζ required to reproduce the observed relic density is shown in figure 1 for s-wave and p-wave cases. The dilution factor indicated in eq. (2.7) of ζ ∼ 10 −5 implies the initial baryon asymmetry is required to be η initial (2.8) In the Affleck-Dine scenario it has been argued [14] that there is an upper bound on the magnitude of asymmetry that can be generated where an O(1) asymmetry can be generated if a baryon-charged field dominates the energy density of the universe when it decays. We are unaware of well-motivated mechanisms which can yield larger (or even comparable) asymmetries. Perturbative unitarity [7] requires that the annihilation cross-section be smaller than Using eq. (2.7) to ensure that the observed relic density is reproduced, and applying restrictions from eq. (2.9) and (2.10), we derive the following upper bound on the mass of thermal dark matter which freezes-out through perturbative s-wave annihilations, m DM 10 10 GeV. This can also be inferred directly from figure 1. It is straightforward to generalize this bound to annihilation cross-sections that are not predominantly s-wave (n ≥ 1). Because this bound applies to a definite cosmological history (high scale baryogenesis and dark matter freeze-out, followed by dilution), there are a number of caveats, but they do require some model building to realize. Specifically, we can list a number of ways that dark matter could be heavier: • Low scale baryogenesis, with η B (re)generated after an entropy injection.
• Dark matter could freeze-out during a period of matter domination or reheating [16,17].
• The dark matter mass could evolve to larger values at late time, after dark matter freeze-out, due to the evolution of a scalar potential that sets its mass [21,22].
• The dark matter could form heavy bound states after freeze-out of the Standard Model thermal bath, as in "Atomic Dark Matter" [23,24].
Even with these provisos, the class of models to which our arguments apply is broad. Indeed, Affleck-Dine baryogenesis and late time entropy production are common features in Standard Model UV completions in SUSY and string theory. Before moving on it is interesting to note that since the dark matter is overproduced prior to the entropy injection it can have much smaller couplings than un-diluted thermal relic dark matter. From inspection of eq. (2.5) the annihilation cross section needed to JHEP02(2017)119 reproduce the relic density, relative to standard freeze-out, is reduced by a factor of ζ. For many models of dark matter, this will relax direct detection constraints whenever ζ 1. At a rough, order of magnitude level, if we assume the per-nucleon dark matter direct detection scattering cross-section σ N , is approximately the size of the dark matter selfannihilation cross-section σ N ∼ σ 0 , we can surmise that some portions of superheavy dark matter parameter space lie at direct detection cross-sections below the atmospheric and solar neutrino background.
At high masses (m DM > 100 GeV), the cross section at which solar and atmospheric neutrinos provide a substantial background to direct detection experiments is The annihilation cross section required to match Ω Relic DM h 2 is σ 0 ζ × 10 −10 GeV −2 (taking x FO ∼ 20). Therefore, assuming σ N ∼ σ 0 , the dark matter direct detection signal lies above the neutrino background whenever The values indicated are chosen to match eq. (2.7), thereby demonstrating that superheavy dark matter may be found before solar and atmospheric neutrinos provide a significant background to xenon direct detection experiments. There are some studies of direct detection [25] and indirect detection [26,27] of non-thermal superheavy dark matter. We leave the investigation of methods for finding superheavy thermal dark matter to future work.
Asymmetric dark matter & entropy injection
If dark matter carries a global charge, an asymmetry between dark matter and anti-dark matter can arise which is responsible for setting the dark matter relic density [12,13]. Henceforth for concreteness we shall assume that the dark matter is a Dirac fermion (it could equally be a complex scalar). If the dark asymmetry determines the relic abundance, then η DM ≡ (n DM − n DM )/s, defined analogous to eq. (2.1), directly determines Ω Relic DM via where m p ≈ 0.94 GeV is the proton mass, and we note that the observed ratio of dark-tobaryonic matter is approximately 5.5 [15]. For example, normalizing to PeV mass asymmetric dark matter, the final asymmetry needed to match the observed dark matter relic density is Ω Relic For the asymmetry η DM to determine the relic density the symmetric component of the dark matter population must annihilate away, so that mostly the asymmetric component remains [28,29]. As a result the dark matter mass can typically be constrained by unitarity JHEP02(2017)119 arguments [7] to be m DM 100 TeV (assuming dark matter annihilates via perturbative processes). However, as illustrated in section 2, entropy injection (e.g. from a late-decaying field) can dilute both symmetric and asymmetric dark matter components, thereby evading the naïve unitarity bound.
Hereafter, we will examine a scenario in which the asymmetries η B and η DM are too large in the early universe, compared to their values today. As we will see, it is possible for dark matter with PeV-EeV mass to have a perturbative annihilation rate large enough to reduce the symmetric dark matter component below the contribution due to the asymmetry. In this case, both the asymmetric and symmetric dark matter components will be initially larger than the observed relic abundance. A subsequent period of entropy production dilutes the symmetric and asymmetric components of the dark sector, along with the baryon asymmetry, altogether yielding the abundances observed today. 3 The abundance of dark matter prior to the entropy injection, but after it freezes out of the thermal bath, is given by 3) The first term of eq. (3.3) corresponds to the symmetric abundance of dark matter-anti dark matter pairs, the latter term is the abundance due to the asymmetry. Following the entropy dump the quantities Y Sym and η DM are both reduced by a factor of ζ. Therefore, the present day relic density is altogether given by where s 0 ≈ 2.8 × 10 3 cm −3 is the entropy density today and ρ c ≈ 10 −5 h 2 GeV cm −3 is the critical density. For the asymmetry to determine the final relic density, inspection of eq. (3.4) reveals that after freeze-out, the symmetric abundance must satisfy Y FO Sym η FO DM . This requires that the freeze-out dark matter annihilation cross section is large enough to deplete Y Sym to a size smaller than η DM .
The contribution to the freeze-out abundance from the symmetric component of dark matter after freeze-out, assuming freeze-out from a radiation dominated universe and a subsequent entropy dilution ζ, is given by The term in brackets is the standard symmetric freeze-out expression for a Dirac fermion (note the extra factor of two compared to (2.5)). However, the point of freeze-out x AF is modified due to the asymmetry, and in the limit T FO 100 GeV can be approximated as [34] (see also [35]) (3.7) Comparing the first term (Ω Relic Sym ) to the latter (Ω Relic Asym ), we see that for suitable parameter values, the condition Ω Relic Sym Ω Relic Asym is satisfied. The viable parameter space is illustrated in figure 2. As can be seen, in the presence of a sizeable entropy injection after dark matter freeze-out, models of PeV-EeV mass asymmetric dark matter can reproduce the observed dark matter relic abundance, given a suitably large initial dark and baryon asymmetry.
Entropy from decays
Thus far we have treated ζ as a free parameter. In this section we examine mechanisms that lead to entropy injection in order to quantify the magnitude of ζ. We subsequently discuss the model building implications and constraints on such scenarios. In section 5 we will highlight the importance of entropy injection for obtaining the baryon (and DM) asymmetries in Affleck-Dine models.
Magnitude of the entropy injection
Entropy injection can come from a variety of sources, perhaps the most typical are heavy states decaying to the thermal bath, e.g. [2, 4, 5, 30-33, 36, 37], and phase transitions [38]. Henceforth, we focus on the former, in which the entropy injection is due to a state χ, which comes to dominate the energy density of the universe after dark matter freezes-out, and subsequently decays to Standard Model states.
In order for a substantial dilution to take place, we require that the energy density in χ when it decays greatly exceeds the energy density in all other fields in the universe. The entropy jump in the Standard Model radiation bath due to the decays of χ is given by ζ ≡ s before s after ρ rad ρ χ H=Γχ where ρ rad,χ is the energy density in radiation bath and χ states respectively. At the time of decay (H ∼ Γ χ ) the energy density in χ is ρ χ = 3Γ χ M Pl . Below some critical temperature, the energy density of χ starts evolving as a −3 (matter-like), compared to the radiation bath which redshifts like a −4 (radiation-like), where here a is the standard FRW scale factor. This relative evolution leads to χ coming to dominate the energy density of the universe. There are primarily two reasons for ρ χ to have matter-like evolution in the early universe: (i.) χ is a particle that is non-relativistic and thermally decoupled from the rest of the universe; (ii.) χ is a light, slowly decaying bosonic field oscillating in its potential, so that its average equation of state is w ∼ 0 (i.e. matter-like). In the first case (i.) χ starts evolving as a −3 at T crit ∼ m χ , when the temperature of the thermal bath drops below m χ and its momentum becomes negligible. For case (ii.) χ becomes matter-like when the χ field begins to oscillate around its minimum when m χ ∼ H or equivalently T crit ∼ 3m χ M Pl , assuming a simple quadratic potential for χ, i.e. V ⊃ m 2 χ χ 2 . We will restrict our attention to models in which dark matter freeze-out occurs prior to the energy density of χ becoming matter-like, T FO ∼ m DM T crit . In this case freezeout occurs during radiation domination. When χ decays it reheats the thermal bath to a temperature T RH and dilutes asymmetries and frozen out abundances. For m DM > m χ decays of χ to dark matter are kinematically forbidden, thus the dark matter is diluted and not repopulated during χ decays, and for T FO T RH , interactions in the thermal bath will no longer produce dark matter states.
The Friedman equation giving the evolution of the energy density for H(T crit ) > H > Γ χ is given by where ∆a ≡ a(T )/a(T crit ) is the change in the scale factor after ρ χ became matter-like, and R i ≡ ρ i /(ρ χ +ρ rad ) crit is the relative energy densities of χ and the Standard Model radiation at some initial point in time, in this case at the time when T = T crit . As one example, note that if χ is a particle initially in thermal equilibrium with the radiation bath, but has
JHEP02(2017)119
an extremely weak self-annihilation cross-section, then R χ R rad g/g ∼ R rad /100 [20]. Conversely, if χ is a scalar field oscillating in its potential, or was produced by an out-ofequilibrium decay, then potentially R χ /R rad 1. Note that in eq. (4.2) we have neglected the contribution from dark matter, since this is Boltzmann suppressed after freeze-out ρ DM ∝ exp(−x FO ) 1, and for the cosmological epochs we are considering, will not come to dominate the energy density of the universe.
For H(T crit ) > H > Γ χ the contribution from χ grows and becomes comparable to the radiation energy density at T = T MD , or after a period The χ energy density continues to grow until it decays to radiation at H ∼ Γ χ , this occurs after where in deriving eq. (4.3) and (4.4) we have assumed that χ is sufficiently long lived that it dominates eq. (4.2), otherwise the entropy change would be negligible. We can find ρ χ at the time of χ decay H ∼ Γ χ by evolving χ's energy density with eq. (4.4) to obtain We can also find ρ χ at the time of χ decay as a function of the reheat temperature Note that in the Standard Model g (T )π 2 /30 35 for T > 200 GeV [20]. On the other hand, the energy density in the radiation ρ rad immediately prior to χ decay is Inserting eqs. (4.4)-(4.7) into eq. (4.1), it follows that Assuming that the ratio of energy densities at T = T crit are R rad /R χ 1, the dilution is ζ ∼ 10 −10 T RH 10 MeV where we normalize to the maximum dilution permitted by high scale baryogenesis, and the reheat temperature after χ decays T RH 10 MeV, which is the minimum temperature the
JHEP02(2017)119
Standard Model thermal bath must return to in order to reproduce big bang nucleosynthesis (BBN) observations. For the case that dark matter freezes out through s-wave annihilations with cross section σ 0 ∼ α 2 DM /m 2 DM , eq. (2.7) combined with eq. (4.9) (which assumes R rad /R χ 1) determine the dark matter mass required to match the observed relic for given values of the reheat and critical temperatures m DM ∼ 10 9 GeV α DM 1 (4.10)
The dilution parameter space
The critical temperature at which the evolution of ρ χ becomes matter-like is not a free parameter, but is fixed by the details of the model. Below we look at the constraints on T crit corresponding to a decaying state at one time in thermal equilibrium with the radiation bath, T crit ∼ m χ , and an oscillating field where T crit ∼ 3M Pl m χ . For the models outlined in sections 2 & 3 to be consistent they are required to satisfy the following criteria: (a). Standard Model reheating (decay of χ) occurs above BBN temperatures.
(b). The universe is radiation-dominated during dark matter freeze-out.
(c). The entropy jump occurs after freeze-out.
(d). χ dominates the energy density of the universe when it decays.
Below we discuss how each of these requirements restricts the parameter space: (a). The Standard Model is reheated above the BBN threshold: T RH Γ χ M Pl 10 MeV. From eq. (4.10), which assumes R χ /R rad = 1 and a freeze-out annihilation cross-section σ 0 ∼ 1/m 2 DM , it follows that . For freeze-out to occur during radiation domination, it is required that T crit < T FO or .
(4.12)
Dark matter freeze-out during a period of matter-domination is certainly possible, but the relic density calculation is altered since the Hubble rate is different and the dark matter abundance becomes sensitive to the decay widths of the late decaying scalar χ. Particularly, the decay width of χ to dark matter can be responsible for setting the dark matter relic abundance [16]. We leave a detailed study of superheavy dark matter produced via matter-dominated freeze-out to future work.
JHEP02(2017)119
(c). For the dark matter to be diluted, rather than repopulated, by χ decays, the lifetime of χ should be such that χ decays after dark matter freeze-out. Thus H(T FO ) > Γ χ , or in terms of temperature thresholds T FO T RH ∼ Γ χ M Pl , this implies 13) or equivalently, (4.14) Moreover, using eq. (4.10) this can be expressed in terms of a bound on T crit T crit 10 −9 GeV m DM GeV The dark matter energy density should not grow larger than the χ contribution at any stage after freeze-out, or radiation domination will not be restored after χ decay. This condition is satisfied for For scenarios we have considered, this requirement is redundant when compared to condition (a). Figure 3 illustrates how these requirements are complementary in constraining the parameter space for both classes of models. It is evident that a range of dark matter and χ masses reproduce the observed dark matter relic density, while reheating the universe above BBN temperatures. It is interesting to observe that in the parameter space plotted, there is an effective upper bound on the reheat temperature T RH 1 TeV. Higher reheat temperatures imply either freeze-out occurs during matter domination (which changes the freeze-out calculation), or that the dark matter states are repopulated (rather than diluted) following the decays of χ. Notably, for χ as either a particle or an oscillating field, the dark matter mass is permitted to saturate the upper mass bound of 10 10 GeV, derived in eq. (2.11).
Affleck-Dine, dark matter, and large asymmetries
Observing that SUSY models generically present exactly flat directions in the scalar potential in the limit of unbroken SUSY, Affleck and Dine [1] argued that in the early universe it is natural for scalar fields along these flat directions to initially take large field values. 4 Of primary interest are flat directions which carry a global charge (baryon, lepton, or dark). One can parameterize such flat directions (a product of superfields) in terms of a new superfield, the scalar component of which is commonly dubbed the AD field (φ). Affleck Contours showing the reheat temperature at which χ decays required to reproduce the observed dark matter relic abundance as a function of m χ and m DM . These plots assume the energy density in χ begins matter-like evolution at T crit m χ (left) and T crit 3m χ M Pl (right). The initial distribution of energy densities in χ versus the Standard Model thermal bath are R χ /R rad = 1. The dark matter annihilation cross-section is assumed to be s-wave with crosssection σ 0 = 1/m 2 DM . Shaded regions indicate constraints on the parameter space, as discussed in points (a.)-(c.) in section 4.2. These requirements substantially restrict the parameter space, but allow for the dark matter mass m DM to be as large as 10 10 GeV. and Dine demonstrated that the evolution of an AD field from its initial field value can generate particles asymmetries, provided that the potential of the AD field violates C and CP . The AD mechanism has since been thoroughly studied [3], including its application to dark asymmetries [39][40][41].
In what follows we will examine a minimal AD potential and calculate the resulting particle asymmetry. Our aim is to clarify which models typically lead to large asymmetries O(10 −8 ) η B O(1) due to AD baryogensis and thus require significant late time entropy dilution to reproduce the observed level η now B ∼ 10 −10 . We will also outline AD dark/cogenesis scenarios with η B η DM , which is one requirement of the superheavy asymmetric dark matter studied in section 4. Broadly following [3], we take the following AD potential 5 for the complex scalar (AD) field φ where a and b are complex numbers, m φ is the low-temperature mass of φ and M is a mass scale at which the higher dimension operator is induced. The potential is comprised as follows: • The first term is generated by SUSY-breaking, and becomes relevant for H m φ .
JHEP02(2017)119
• The second, fourth, and fifth terms are generated by inflaton-induced SUSY-breaking. In particular, the last two terms violate baryon number, as required for baryogenesis. In the context of SUSY, the form of these terms arises from inflaton F -terms [2,3].
• The third term arises from UV corrections at mass scale M . The non-renormalisable term with the highest power of φ 'lifts' the flat direction when H m φ , determining the initial minimum of the AD potential.
In the early universe, while H m φ , the AD potential depends mostly on the second and third terms of eq. (5.1), and has a minimum at As the universe cools, eventually H ∼ m φ , at which point φ will roll from φ 0 to the new minimum of its potential and undergo coherent oscillations. The baryon (or other charge) asymmetry that arises depends on φ 0 , and the relative phase between the couplings a and b, which together control the magnitude of CP violation.
Using the equations of motion (the Friedmann equations) for a scalar field in de Sitter space, the change in baryon number is given by [3,39] dn B dt where θ parameterizes the phase of complex terms carrying AD charge in eq. (5.1). In this case by construction, the relevant CP and B violating terms are those with coefficients a and b. The relative phase of these terms will determine the net baryon charge produced. Although for Arg[a/b] = 0 there will be no net baryon number generated, one may reasonably expect the initial phases to be chosen at random, so that typically Arg[a/b] ∼ O (1).
With this in mind, henceforth we drop factors of Arg[a/b]. The χ field starts oscillating when H ∼ m φ and one can use the approximation 6 that 1/t ∼ H ∼ m φ . It follows that when φ begins oscillating, the final two terms in eq. (5.1) determine the net charge density created in the universe Equation (5.4) gives the net charge density introduced by the AD field when it begins oscillating, however, the resulting particle asymmetry η B ∼ n B /s also depends upon the relative abundance of other fields in the universe which contribute to s, the total entropy density of the universe. We first consider the simplest scenario, that the universe is radiation-dominated when the AD field rolls down its potential and decays. In this case, the asymmetry is given by [2,3] where we use that the energy density of the radiation-dominated universe is ρ u ∼ 3m 2 φ M 2 Pl when φ begins oscillating, which follows from the relationship H = m φ and the Friedmann equation 3H 2 = ρ/M 2 Pl . Let us consider some examples cases, which illustrate that the initial asymmetry is often too large. Consider the case where the high-dimension effective operator in eq. (5.1) is mass dimension six (n = 1), then the resulting asymmetry is We highlight the case of m φ ∼ TeV since it is a typical choice for an AD soft mass term, assuming electroweak-scale supersymmetry. Furthermore, let us next examine the expected magnitude of asymmetries arising from a higher dimension operator with n = 2. This implies an even larger initial asymmetry Such large initial asymmetries require a subsequent dilution mechanism. This problem is even more apparent in the case that the universe is not radiation dominated, but dominated by the energy in the AD field. If the AD field has an extended period oscillating around its minimum, during which time it redshifts like non-relativistic matter, then its energy density will come to dominate the universe. In this case, the initial particle asymmetry in the universe will be η B ∼ 1 (see [14] for an extended discussion of this point), in which case a very large late time entropy injection is necessary to match observations. One way that the required dilution of baryon number is often achieved in studies of AD baryogenesis [1][2][3], is by assuming that the inflaton dominates the energy density of the universe, and decays later than the AD field. In this scenario the entropy injection of the decaying inflaton field dilutes the AD asymmetry. However, even in this case, the resulting charge asymmetry can still be much larger than that observed: η B η now B 10 −10 . For ρ φ ρ I , where ρ I is the energy density of the inflaton field, which is assumed to be oscillating in its potential (diluting like matter ρ I ∝ a −3 ), the asymmetry is where T R,I is the temperature at which the inflaton field will decay and here we have used ρ I ∼ 3m 2 φ M 2 Pl at the time that φ begins oscillating. Specifically, for n = 1, such that |φ| 6 is the highest dimension operator in the potential of eq. (5.1), one has This is the standard result in the literature that achieves the observed particle-antiparticle asymmetry using dilution via the late-decay of the inflaton [2,3]. However, if the AD mechanism arises from a higher dimension operator (n = 2), the resulting asymmetry will JHEP02(2017)119 again be typically too large, even allowing for dilution via subsequent inflaton decay at T R,I = 10 9 GeV, as can be seen from the following expression η (inf,n=2) B 10 −5 m φ 10 3 GeV . (5.10) This can be alleviated through stronger dilution due to the inflaton decaying at lower temperatures. However, this approach will run into conflicts with observations if T R,I 10 MeV. Conversely, as has been focus of this paper, as an alternative to demanding inflaton energy domination, an entropy injection from a late decaying field can also provide the required dilution of baryon number.
As detailed in section 3, models of superheavy asymmetric dark matter require the dark sector to have a much smaller matter-antimatter asymmetry than the baryonic sector. As we now show, a large ratio of dark-to-baryon asymmetries, η B /η DM 1, can arise if the baryon asymmetry is generated from a higher-dimension operator than the dark asymmetry. For simplicity, we assume that the AD field oscillates and decays to Standard Model fields during a radiation-dominated epoch, and is later diluted by a factor of ζ. We make the reasonable simplifying assumption that both the Standard Model and dark AD fields, φ B and φ D , have symmetries broken at the same high scale M . Then if φ B and φ D have Standard Model and dark asymmetries broken by operators with mass dimension (4 + 2j) and (4 + 2k), respectively (cf. eq. (5.5)), then the relative size of the Standard Model and dark asymmetries is given by (5.11) Note that j or k = 1 are special since in these cases the ratio is insensitive to m φ B or m φ D , respectively. One might reasonably expect the masses of φ B or φ D to be comparable since they likely both arise from the same source of SUSY breaking.
For example, consider the case that the Standard Model asymmetry arises from an operator of leading dimension-6 operator (j = 1), while the dark asymmetry comes from a leading dimension-8 operator (k = 2), then the resulting relative asymmetry is The indicated parameter values are chosen to match the well motivated scenario in which the non-renormalisable operators are generated at the Planck scale, thus M = M Pl , and where we have shown m φ B ∼ 1 PeV. In this case the expected ratio of the initial asymmetries is η B /η DM ∼ 10 4 , which is well suited for the models of superheavy asymmetric dark matter outlined in section 3.
JHEP02(2017)119 6 Concluding remarks
Traditional models of superheavy dark matter set the observed relic abundance via nonthermal mechanisms such as inflationary dynamics [44], gravitational production [45], or thermal inflation [21,22]. The scenario we outline here is distinct in that the dark matter undergoes a standard freeze-out process and its abundance is subsequently diluted due to late time entropy production. We have called this scenario "Superheavy Thermal Dark Matter." Moreover, we believe this is the first paper to construct viable models of superheavy asymmetric dark matter. Thus far we have not specified any UV-completion of superheavy dark matter, but given the links we have drawn to Affleck-Dine baryogenesis, it is interesting to ask whether superheavy dark matter could be the lightest supersymmetric particle of a High Scale SUSY spectrum [46,47] and thus stable due to R-parity. Such High Scale SUSY have been independently motivated via anthropic arguments involving the Higgs mass [48] and provide an interesting alternative to Weak Scale SUSY. Moreover, to realise superheavy SUSY asymmetric dark matter there are several potential candidates, most prominently Sneutrinos [49], Higgsinos [50], or bound states in the hidden sector involved in SUSY breaking [51,52].
It is also interesting to note that in certain classes of models the Higgs quartic coupling λ is anticipated to vanish at the scale of the SUSY partners M SUSY . Models which automatically imply the vanishing of the quartic coupling at the SUSY scale occurs in spectra with Dirac Gauginos [53], or (string-motivated) symmetries in the Higgs sector [54,55]. Evolution of the observed Higgs quartic under renormalisation then implies M SUSY (λ = 0) ∼ 10 11±2 GeV, as inferred from Standard Model-like running. This PeV-EeV mass scale is intriguing from the prospective of explaining the "missing pulsar problem" [8,9] and the "SN1a ignition problem" [10]. Moreover, there are several anomalous events observed at IceCube [56] which have been interpreted as potential signals of the decay of superheavy dark matter [57][58][59].
Sources of late time entropy injection commonly arise in UV complete theories, and we have emphasized that they may play a crucial role in diluting the baryon asymmetry to the observed level. Entropy dumps also provide solutions to cosmological problems related to the overproduction of stable exotics, most prominently: gravitinos [60][61][62], axions [63][64][65][66], axinos [67] and GUT-monopoles [68]. We have shown that these entropy injection events can significantly change our expectation for the mass scales and couplings required for dark matter to match the observed relic density. The prospect of symmetric or asymmetric superheavy dark matter is particularly interesting given the tightening constraints on the traditional WIMP parameter space. In contrast to non-thermal models of superheavy dark matter [21,22,44,45], in this class of models the dark matter has modest couplings to Standard Model states and can be constrained by direct searches. Additionally, we have argued that for theories of high scale baryogenesis any stable state which is in thermal equilibrium with the Standard Model, and freezes-out of a radiation-dominated bath must be lighter than 10 10 GeV, allowing for maximal entropy injection after freeze-out. This limit follows from the perturbative unitarity limit [7] σ 0 4π/m 2 DM and the maximal JHEP02(2017)119 asymmetry bound [14] η initial B 1. The framework presented here offers new opportunities for model building, some of which are discussed above; we leave additional implementations to future publications. | 9,536.8 | 2017-02-01T00:00:00.000 | [
"Physics"
] |
A New Pairwise NPN Boolean Matching Algorithm Based on Structural Difference Signature
: In this paper, we address an NPN Boolean matching algorithm. The proposed structural difference signature (SDS) of a Boolean function significantly reduces the search space in the Boolean matching process. The paper analyses the size of the search space from three perspectives: the total number of possible transformations, the number of candidate transformations and the number of decompositions. We test the search space and run time on a large number of randomly generated circuits and Microelectronics Center of North Carolina (MCNC) benchmark circuits with 7–22 inputs. The experimental results show that the search space of Boolean matching is greatly reduced and the matching speed is obviously accelerated.
Introduction
Boolean equivalence classification and matching constitute a long-standing and open problem. The authors of [1,2] applied a group algebraic approach to NP and NPN Boolean equivalence classification. Reference [2] computed the classification results for 10 inputs. Affine equivalence classification is also an important field of study with applications in logic synthesis and cryptography [3]. All Boolean functions in an equivalence class are equivalent to each other. NPN Boolean matching determines whether two Boolean functions are equivalent under input negation and/or permutation and/or output negation. This paper studies NPN Boolean matching for single-output completely specified Boolean functions.
NPN Boolean matching is an important research topic that can be applied to a number of applications in integrated circuit design, such as technology mapping, cell library binding and logic verification [4]. When a Boolean circuit is functionally NPN-equivalent to another Boolean circuit, one of these circuits can be realized by means of the other. There are n!2 n+1 NPN transformations for a n-variable Boolean function. If Boolean function f is NPN-equivalent to Boolean function g, there must be a NPN transformation that can transform f to g. On the contrary, no NPN transformation can transform f to g. The purpose of our proposed algorithm is to find the NPN transformation that can transform Boolean function f to g as early as possible. Based on the structural signature (SS) vector in our previous study [5], we proposes a new combined signature vector, i.e., SDS vector. In this paper, Boolean difference sigture is introduced into SS vector to form SDS vector. The new signature vector SDS is better able to distinguish variables and reduce the research space for NPN Boolean matching. Experimental results show that the search space is reduced by more than 48% compared with [5] and that the run time of our algorithm is reduced by 42% and 80% compared with [5,6], respectively.
In the following, Section 2 introduces relevant works on NPN-equivalence matching. Section 3 introduces some terminology and notation. Section 4 describes the proposed algorithm in detail. In Section 5, we present experimental results to demonstrate the effectiveness of our algorithm. Section 6 concludes the paper.
Related Works
Many methods can be exploited to solve the problem of NPN Boolean matching. The main results of such research are focused on four methods: (1) algorithms on canonical forms; (2) pairwise matching algorithms using signatures; (3) algorithms based on satisfiability (SAT) and (4) algorithms based on spectral analysis.
Each method has its own advantages. In canonical-form-based matching algorithms, the canonical form of each Boolean circuit of cell library is stored in advance. When cell-library binding is implemented, the canonical form of each Boolean circuit to be matched is computed and compared with the canonical forms of each Boolean circuits in the cell library via a hash table. All Boolean functions in an equivalence class have the same canonical form. The canonical form of each equivalence class has a special value. References [6][7][8][9][10][11][12] studied Boolean matching based on canonical forms and attained significant achievements.
Reference [12] reported P-equivalence matching for 20-input Boolean functions. The canonical forms considered in reference [12] was the binary strings with the maximal scores in lexicographic comparison. Reference [7] devised a procedure to canonicalize a threshold logic function and judged equivalence of two threshold logic functions by their canonicalized linear inequalities. Based on the canonical form of Boolean function, the reference [8] reduced the number of configuration bits in an FPGA architecture. The authors of [10,11] proposed fast Boolean matching based on NPN Boolean classification; their canonical form has the maximal truth table. The authors of [6] proposed new canonical forms based on signatures.
A pairwise matching algorithm searches the NPN transformations between two Boolean functions using signatures, which is a semi-exhaustive search algorithm. The merit of this method is that once it finds a transformation that can prove the equivalence of two Boolean functions, other transformations will not be checked. The authors of [4,5,13,14] proposed Boolean matching algorithms based on pairwise matching and used binary decision diagrams (BDDs) to represent Boolean functions. The authors of [5] proposed a structural signature vector to search the transformations between two Boolean functions and implemented NPN Boolean matching for 22 inputs. In pairwise matching algorithms, signatures are usually used as a necessary condition for judging whether two Boolean functions are equivalent, and variable symmetry is commonly utilized to reduce the search space. Symmetric attributes are used in many fields. Reference [15] studied the symmetries of the unitary Lie group. The variable symmetric attributes of Boolean function are widely used in NPN Boolean equivalence matching. In reference [5], the search space was reduced and the matching speed was improved by means of structural signatures, variable symmetry, phase collision check and variable grouping.
Since a SAT solver can help solve the problem of NPN Boolean matching and because many quick SAT solvers can be utilized, many Boolean matching algorithms based on SAT have emerged in recent years. The authors of [16][17][18][19][20] studied SAT-based Boolean matching. Based on graphs, simulation and SAT, Matsunaga [16] achieved PP-equivalence Boolean matching with larger inputs and outputs. The authors of [17,18] studied Boolean matching for FPGAs utilizing SAT technology.
Cong et al. [19] used the implicant table to derive the SAT formulation and achieved significant improvements. The authors of [20] combined simulation and SAT to perform P-equivalent Boolean matching for large Boolean functions. Compared with studies based on the previous three methods, studies on Boolean matching that use spectral techniques are fewer in number. Moore et al. [21] presented an NPN Boolean matching algorithm using Walsh spectra. The authors of [22] utilized Haar spectra to check the equivalence of two logic circuits.
Regardless of which method is used, the key to Boolean matching is to reduce the search space. It is universally known that the search space for exhaustive NPN Boolean matching is O(n!2 n+1 ). In the methods discussed above, many strategies are used to reduce the search space. The authors of [6] used general signatures and symmetry to reduce the search space.
Based on our previous study [5], we propose a new combined signature, i.e., the structural difference signature. We present a new pairwise algorithm based on the following conditions: (1) two NP-equivalent Boolean functions have the same SDS vectors; (2) two variables of a variable mapping have the same SDS values; and (3) two groups of Boolean functions Shannon decomposed with splitting variables are NP-equivalent.
Terminology and Notation
Let f (x 0 , x 1 , · · · , x n−1 ) and g(x 0 , x 1 , · · · , x n−1 ) be two single-output completely specified Boolean functions. The problem to be solved in this paper is to determine whether f is NPN-equivalent to g. Some related terminology has been introduced in [5,6].
An NP transformation T is composed of input negations and/or permutations. It can also be expressed as a group of variable mappings. In reference [5], the mapping from the variable x i of f to the variable x j of g, ϕ i , can be classified into two cases: (1) x i maps to x j with the same phase, in which case the mapping is x i → x j or x i → x j ; or (2) x i maps to x j with the opposite phase, in which case the mapping is x i → x j or x i → x j . A same-phase relation indicates no input negation, whereas an opposite-phase relation indicates input negation. A same-phase variable mapping between the variables x i and x j is abbreviated as i → j − 0, and an opposite-phase variable mapping between the variables x i and x j is abbreviated as i → j − 1. For two NPN-equivalent Boolean functions f and g, there may be an output negation when | f | = |g|. Definition 1. (NPN equivalence) Two Boolean functions f and g are NPN equivalent, f ∼ = g, if and only if there exists an NP transformation T that satisfies f (TX) = g(X) or f (TX) = g(X).
As a general signature, the cofactor signature is widely applied in NPN Boolean matching. The cofactor signature of f (X) with respect to [5].
Reference [5] proposed SS vector. The SS value of f with respect to is the 1st signature of the variable x i , and C i and |C i | are the symmetry mark, and G i is group mark. According to their symmetry properties, the variables of a Boolean function are classified as either asymmetric and symmetric. An asymmetric variable may have a single-mapping set or a multiple-mapping set. The variable mapping set of the asymmetric variable x i is denoted by χ i . Similarly, a symmetric variable may have a single symmetry-mapping set or a multiple symmetry-mapping set. The symmetry-mapping set of the symmetry class C i is denoted by S i , and the symmetry mapping between C i and C j is denoted by C i → C j . The literal ψ i represents a group of two or more variable mappings generated by C i → C j .
A P transformation does not change the cofactor signature of a variable. However, an N transformation changes the order of the positive and negative cofactor signatures without changing their numerical values. Therefore, we do not consider the order of the positive and negative cofactor signatures when comparing the 1st signature values of two variables. A variable x i may be transformed into an arbitrary variable x j , 0 ≤ j ≤ n − 1; therefore, we also do not consider the order of the variables when we compare two SS vectors.
Given two NP-equivalent Boolean functions f and g with a variable mapping x i → x j between them, we have the following four facts: (1) V f = V g and V i = V j ; (2) The Boolean functions decomposed with x i and x j using the Shannon expansion must be NP equivalent. Specifically, x i f x i is NP equivalent to x j g x j , and x i f x i is NP equivalent to x j g x j ; (3) x i and x j are either both asymmetric variables or both symmetric variables; (4) If there is a variable mapping between x l of f and x h of g, then the SS values of x l must be the same as those of x h no matter how many times the Boolean functions f and g are decomposed [5].
Two Boolean functions f and g may undergo one or more transformations in the process of matching. A transformation consists of n variable mappings. The algorithm of [5] and the algorithm presented in this paper detect all possible transformations between f and g according to their SS and SDS vectors, respectively.
The Proposed Algorithm
The goal of the proposed algorithm is to reduce the size of the search space as much as possible, thereby improving the speed of NPN Boolean matching.
Boolean Difference
For n inputs, there are 2 2 n different Boolean functions. Many Boolean functions have one or more independent variables. Whether a variable x i of f is independent can be determined using cofactors.
The Boolean difference of a Boolean function f with respect to When a variable x i of f is NP transformed into x j (x j ), its Boolean difference signature does not change. Thus, Boolean difference signature, like cofactor signature, can be used to distinguish variables.
Example 1. Consider an 5-input Boolean function f
Let us compute the 1st signature and the Boolean difference signature of each variable.
The 1st signatures of x 0 , x 1 , x 2 , x 3 and x 4 are (9, 7), (8,8), (8,8), (8,8) and (8,8), respectively. The variable x 1 is symmetric to x 4 . The Boolean difference signatures of the variables are 32, 12, 20, 28 and 12, respectively. From the 1st signatures and a symmetry check, we can distinguish variables x 0 , x 1 and x 4 . Variables x 2 and x 3 are both asymmetric variables and have the same 1st signature values. If we only utilize on their 1st signatures, the variables x 2 and x 3 cannot be distinguished. However, these two variables have different Boolean difference signatures. Thus, the variables x 2 and x 3 are actually different and can be distinguished.
Definition 3. (Independent variable) A variable x i of a Boolean function f is an independent variable
NP transformations do not change the independence of a variable. Thus, an independent variable is still an independent variable after NP transformation. Proof. If the Boolean function f is NPN-equivalent to g, then f and g are in the same NPN equivalence class. There must exist an NP transformation T that can transform f into g or g. After NP transformation, an independent variable is still an independent variable. Therefore, it can be deduced that f and g have the same number of independent variables. Property 1. The cofactor signature of a Boolean function f with respect to its variable Because the positive cofactor signature is the same as the negative cofactor signature for an independent variable, the phases of independent variables cannot determined by using the phase assignment method presented in [5,6]. However, independent variables have no influence on a Boolean function. Thus, the proposed algorithm assigns a positive phase to all independent variables.
Let us compute the SS vectors, Boolean difference signatures and independent-variable sets.
The SS vectors of f and g are as follows: Consider two NP-equivalent Boolean functions f and g with independent-variable sets of D f = {x i 1 , x i 2 , · · · x i k } and D g = {x j 1 , x j 2 , · · · x j k }, respectively. If we do not consider the symmetry and independence of variables, then there are 2 k k! groups of different variable mappings between D f and D g according to their 1st signatures. However, according to the properties of independent variables, we need to consider only the positive phase and create one independent-mapping set Therefore, the search space is reduced significantly if there are independent variables in the Boolean functions.
In Example 2, the Boolean function f has the three symmetry classes x 5 }, and the Boolean function g has the three symmetry classes The symmetry class C 0 of f can be mapped to the symmetry classes C 1 and C 2 of g using the method of reference [5]. In other words, the symmetry classes C 0 and C 4 of Boolean function f cannot be distinguished. However, the variables in C 0 and C 4 of Boolean function f have different Boolean difference signatures. Thus, if we consider the Boolean difference signatures when searching the variable mappings, the symmetry class C 0 of f can be mapped only to the symmetry class C 1 of g, and the symmetry class C 4 of f can be mapped only to the symmetry class C 2 of g. There exists an independent-mapping set The algorithm presented in [5] groups variables by their 1st signature values. The algorithm proposed in this paper groups variables by their 1st signature values and Boolean difference signatures. We define the '<' relation between x i and x j as follows.
The variables x i and x j have the relation x i < x j if one of the following two cases is satisfied: The group numbers of the variables are generated with the above '<' relation. The SDS vectors of f and g in example 2 are as follows: V f = {(12, 12, 2, 0, 1, 48), (12, 12, 2,
SDS-Based Boolean Matching Algorithm
NPN Boolean matching is defined as follows: Given two Boolean functions f and g, if there exists an NP transformation T that satisfies f (TX) = g(X) or f (TX) = g(X), then f is NPN-equivalent to g.
Before searching the variable mappings, the proposed algorithm first determines whether there is an output negation for Boolean function f . If there is, then our algorithm will match f and g. The method of identifying the presence of an output negation is the same as that in reference [5].
Our algorithm handles first the condition without output negation and then the condition with output negation if f is not NP equivalent to g.
The algorithm will terminate when it finds a transformation T that satisfies f (TX) = g(X) (g(X)) or when all candidate transformations have been checked and found not to satisfy f (TX) = g(X) (g(X)). The algorithm will attempt all possible variable mappings, and thus, it will certainly find an NP transformation T between two NP-equivalent Boolean functions f and g (g).
The pseudo-code for NPN Boolean matching is given in Procedure 1.
In Procedure 1, trans_list is a tree that stores the NP transformations generated in the process of transformation detection. A candidate transformation is an unabridged branch in trans_list. sp_ f and sp_g are the decomposition expressions for f and g, respectively. After the existence of an output negation has been determined, Procedure 1 calls Handle_SDS() to detect the NP transformations between f and g (g) and judge the NP equivalence of f and g (g).
Any one NP transformation between the Boolean functions f and g (g) is composed of n variable mappings. Thus, the proposed algorithm searches variable mappings and generates NP transformations. In this paper, the necessary condition for two Boolean functions to be judged NP equivalent is that they must have the same SDS vector. For a variable mapping to be established between x i and x j , these two variables must satisfy the following conditions: (1) x i and x j have the same 1st signature values, i.e., ( (2) x i and x j have the same Boolean difference signature, i.e., | f x i | = | f x j |.
(3) x i and x j have the same symmetry class cardinality, i.e., |C i | = |C j |. (4) x i and x j have the same group number, i.e., G i = G j .
Procedure 1 NPN Boolean Matching.
Input: f and g Output: 0 or 1 function MATCHING( f , g) Create BDD of f and g sp_ f = bddtrue, sp_g = bddtrue, trans_list = NULL Return 0 end if end if end function In the process of the variable mapping search, Handle_SDS() searches the variable mappings for each variable that has not been identified. A variable is identified when its phase and variable mappings are determined in a transformation. After searching all variable mapping sets, Handle_SDS() selects the minimal variable mapping set to handle. The minimal variable mapping set is the one with the lowest cardinality. There are eight possible cases for the variable mapping set of the variable x i of f , as follows.
(1) The variable x i is an asymmetric variable. The phase of x i is determined, and there is only one variable x j of g that has the same SDS values as those of x i . The variable mapping set of x i is a single-mapping set. χ i = {i → j − k}, k ∈ {0, 1}, and |χ i | = 1.
(2) The variable x i is an asymmetric variable. There exist multiple variables x j 1 , x j 2 , · · · , x j m of g, where m ≥ 2, that have the same SDS values as those of x i , and their phases are determined. The variable mapping set of x i is a multiple-mapping set.
(3) The variable x i is an asymmetric variable. There exist one or more variables x j 1 , x j 2 , · · · , x j m of g, where m ≥ 1, that have the same SDS values as those of x i , and their phases are not determined. The variable mapping set of x i is a multiple-mapping set.
The variable x i is a symmetric variable, and its symmetry class is C i = {x i , x i 1 , x i 2 , · · · , x i m−1 }. There exists only one symmetry class C j = {x j , x j 1 , x j 2 , · · · , x j m−1 } of g whose variables have the same SDS values as those of the variables in C i , where |C i | = |C j |, and the phase of x i is determined. The variable mapping set of x i , S i , is a single symmetry-mapping set, i.e., |S i | = 1. There exists one group of variable mappings {i → j − k, The variable x i is a symmetric variable, and its symmetry class is C i = {x i , x i 1 , x i 2 , · · · , x i m−1 }. There is only one symmetry class C j = {x j , x j 1 , x j 2 , · · · , x j m−1 } of g whose variables have the same SDS values as those of the variables in C i , where |C i | = |C j |, and the phase of x i is not determined. The variable mapping set of x i , S i , is a multiple symmetry-mapping set: |S i | = 2. There are two groups of variable mappings, {i → j − 0, When the variable symmetry is checked, the phase relation between two symmetric variables is known. The variable mapping relations between C i and C j can be generated in the following way.
We first consider the case in which x i and x j have the same phase, i.e., there exists a variable mapping i → j − 0. A variable mapping i 1 → j 1 − 0 exists in two cases: (1) x i is symmetric to x i 1 and x j is symmetric to x j 1 or (2) x i is symmetric to x i 1 and x j is symmetric to x j 1 . A variable mapping i 1 → j 1 − 1 exists in two cases: (1) x i is symmetric to x i 1 and x j is symmetric to x j 1 or (2) x i is symmetric to x i 1 and x j is symmetric to x j 1 . Then, we consider the case in which x i and x j have the opposite phase, i.e., there exists a variable mapping i → j − 1. A variable mapping i 1 → j 1 − 1 exists in two cases: (1) x i is symmetric to x i 1 and x j is symmetric to x j 1 or (2) x i is symmetric to x i 1 and x j is symmetric to x j 1 . A variable mapping exists i 1 → j 1 − 0 in two cases: (1) x i is symmetric to x i 1 and x j is symmetric to x j 1 or (2) x i is symmetric to x i 1 and x j is symmetric to x j 1 . Thus, two groups of variable mappings between C i and C j will be generated via this method. (6) The variable x i is a symmetric variable, and its symmetry class is C i . There exist multiple symmetry classes C j 1 , C j 2 , · · · , C j m , where 2 ≤ m ≤ n 2 , whose variables have the same SDS values as the variables in C i , where |C i | = |C j 1 | = |C j 2 | = · · · = |C j m |, and the phase of x i is determined. The variable mapping set of x i , S i , is a multiple symmetry-mapping set: |S i | = m. There exists one group of variable mappings between C i and each C j p , where p ∈ {1, 2, · · · , m}.
The variable x i is a symmetric variable, and its symmetry class is C i . There exist one or more symmetry classes C j 1 , C j 2 , · · · , C j m , where 1 ≤ m ≤ n 2 , whose variables have the same SDS values as those of the variables of C i , where |C i | = |C j 1 | = |C j 2 | = · · · = |C j k |, and the phase of x i is not determined. The variable mapping set of x i , S i , is a multiple symmetry-mapping set: |S i | = 2m. There exist two groups of variable mappings between C i and each C j p , where p ∈ {1, 2, · · · , m}.
(8) The variable x i is an independent variable. The variable mapping set of x i is an independent mapping set.
All possible variable mapping sets are listed above. To generate an NP transformation, n variable mappings are needed for x 1 , x 2 , · · · , x n . Each node in the NP transformation tree, trans_list, represents a variable mapping, and all nodes in a given layer belong to the same variable mapping set. The methods for handling the variable mapping sets are as follows.
(1) If it is the first computation of SDS vectors, a check for independent variables is performed. If there are one or more independent variables, an independent-mapping set is created and added to trans_list, and the minimal variable mapping set is then sought among the remaining variables. If there are no independent variables, Handle_SDS() searches the variable mapping sets for all variables.
(2) If the current variable mapping set of x i is a single-mapping set, our algorithm adds the variable mapping in χ i to trans_list. The variable x i is identified.
(3) If the current variable mapping set of x i is a single symmetry-mapping set and x i belongs to C i , where |C i | = m, then the group ψ i of variable mappings of S i is added to trans_list. To the NP transformation tree, m layers are added, where each layer contains a variable mapping node. The variables in the symmetry class C i are all identified.
(4) If the current variable mapping set of x i is a multiple-mapping set or a multiple symmetry-mapping set, then the cardinalities of the variable mapping sets are computed, and the minimal variable mapping set is recorded.
After searching all variable mapping sets, as in reference [5], our algorithm updates the two decomposition expressions sp_ f and sp_g in the case of a single-mapping set or a single symmetry-mapping set. Otherwise, our algorithm handles the minimal variable mapping set. If the cardinality m of the minimal variable mapping set satisfies m ≥ 2, then m branches will be generated in trans_list. Each branch is handled in order.
The purpose of Procedure 2 is to search the variable mappings for all possible NP transformations. In the process of recursive_search, Procedure 2 uses the same methods applied in [5] to find and prune error NP transformation branches. That is, the current branch will be pruned if the two SDS vectors are not the same or if the current variable mapping has a phase collision.
The pseudo-code for Procedure 2 is as follows.
Procedure 2 recursive_search.
Input: f , g, sp_ f , sp_g, and trans_list Output: 0 or 1 The meanings of conditions D 1 , D 2 , D 3 , D 4 , D 5 and D 6 and the operations that need to be performed when these conditions are satisfied are defined as follows: D 1 : When D 1 is true, a candidate transformation is generated. Procedure 2 checks whether the current NP transformation T can transform f into g (g). D 2 : When D 2 is true, the transformation tree is NULL, and this is the first time that the SDS vectors have been computed. Procedure 2 checks and handles the independent-mapping set between f and g (g). D 3 : When D 3 is true, the current variable x i has already been identified, and Procedure 2 fetches the next x i to handle. D 4 : When D 4 is true, the variable-mapping set of x i is a single-mapping set or a single symmetry-mapping set.
D 5 : When D 5 is true, there is a phase collision. D 6 : When D 6 is true, the cardinality of the minimal variable mapping set is 1.
In the process of transformation detection, Procedure 2 attempts each variable mapping in each multiple-mapping set or each group of variable mappings in each multiple symmetry-mapping set. For two NP-equivalent Boolean functions f and g (g), Procedure 2 must find a candidate transformation that satisfies f (TX) = g(X) (g(X)). The purpose of VERIFY() is to check whether f (TX) = g(X) (g(X)).
UPDATE() serves the following functions: (1) Updates the SDS vector V f of f and the SDS vector V g of g by means of Shannon decomposition and the decomposition expressions sp_ f and sp_g.
(2) Updates the phases of the variables in f and g.
. Procedure 2 handles the single symmetry-mapping set S 2 , and the variable mappings x 2 → x 0 and x 3 → x 4 are added to trans_list. sp_ f and sp_g are updated to x 2 and x 0 . After the SS vectors are updated, the 1st signatures of the remaining variables of f and g are all (8,8). Therefore, the remaining four variables still cannot be distinguished, and there are two multiple symmetry-mapping sets, S 0 and S 4 . The first symmetry-mapping set S 0 is selected to be handled. The transformation tree for Example 2 using SS vectors is shown in Figure 1. If we use the SDS values to search the variable mappings, then there are one independent-mapping set, one single symmetry-mapping set and one multiple symmetry-mapping set according to the first computed SDS vectors. Procedure 2 first adds the variable mappings x 4 → x 2 and x 5 → x 5 to the transformation tree. Then, the two variable mappings x 2 → x 0 and x 3 → x 4 of the symmetry mapping C 2 → C 0 are added to the transformation tree. The catch is that the independent variable is not a splitting variable because the decomposition results obtained via Shannon expansion with the independent variable are unchanged. Thus, the decomposition expressions sp_ f and sp_g are updated to x 2 and x 0 . UPDATE() is called to compute new SDS vectors, and the SDS vectors are updated in accordance with sp_ f and sp_g.
In this way, Procedure 2 determines 4 variable mappings, namely, x 4 → x 2 , x 5 → x 5 , x 2 → x 0 and x 3 → x 4 , after the first variable mapping search for example 2. In the next variable mapping search, there is one multiple symmetry-mapping set, The transformation tree for Example 2 using SDS vectors is shown in Figure 2. From Example 2, we can see that the number of candidate transformations decreases from 8 to 2. The use of Boolean difference signatures helps to distinguish symmetry classes C 1 and C 5 , and we need to consider only the positive phase for independent variables. Thus, Boolean difference signatures are very beneficial for distinguishing variables.
In cell library binding, a benchmark Boolean circuit is found to realize another NPN equivalent Boolean function. Example 3 demonstrates the process of NPN equivalent matching by SS and SDS vectors respectively, and illustrates the validity of the SDS vectors proposed in this paper. Example 3. Consider two 6-input Boolean functions f (X) and g(X): The transformation detection process using SS vectors is as follows: (1) Compute the SS vectors of f and g. The results are: The results show the following: (1) the two new SS vectors are the same; (2) the phases of all variables are determined; and (3) the next variable-mapping set to be handled is In the subsequent variable mapping search, the x 0 → x 1 branch is pruned by a phase collision. The x 0 → x 2 , x 0 → x 3 and x 0 → x 4 branches are pruned by having different SS vectors.
(3) Then, Procedure 2 handles the variable mapping x 0 → x 5 and detects a candidate After verification, this transformation is found to satisfy f (TX) = g(X). Therefore, f is NPN-equivalent to g.
The transformation tree for Example 3 using SS vectors is shown in Figure 3. Figure 3. The transformation search tree for example 3 using SS vectorss. Figure 3 shows that this transformation tree for Example 3 has 6 branches and that the two Boolean functions are decomposed 4 times. Let us examine the detection process using SDS vectors.
(1) The SDS vectors of f and g are as follows: From these results, we can draw the following conclusions: (1) these two SDS vectors are the same; (2) the phases of the variable x 1 of f and the variable x 0 of g are determined,; and (3) there is one single-mapping set {x 1 → x 0 } to be used in the search. In Procedure 2, the splitting variables x 1 and x 0 are used to decompose f and g, respectively.
(2) The new SDS vectors are as follows: From these two new SDS vectors, the following can be seen: (1) the phases of all variables are determined; and (2) all unidentified variables can be identified from their Boolean differences.
and this T is verified to be correct.
When SDS vectors are used to perform Boolean matching, the transformation tree for Example 3 contains only one candidate transformation. In the transformation detection process, the search space comprises all branches of the transformation tree, including unabridged and abridged branches. The unabridged branches are the candidate transformations, and the abridged branches are the pruned transformations. When the transformation tree possesses fewer branches, the algorithm considers a smaller search space. The purpose of decomposing the Boolean functions is to update the SDS vectors to search the new variable mappings. When the algorithm requires fewer decompositions, more variables are identified in each iteration. These three indicators can be used to measure how much of the search space our algorithm searches.
In the best case, the variable mapping set of every variable is a single-mapping set, and there is only one candidate transformation. In this case, the spatial complexity is O(1), and the time complexity is O(n 2 ). In the worst case, there are no symmetric variables, every variable has the same SDS value, and the phases of all variables cannot be determined in each SDS update. There are 2 n+1 n! candidate transformations that need to be verified. The spatial complexity is O(2 n n!), and the time complexity is O(n 3 ).
Experimental Results
To demonstrate the effectiveness of the proposed method, we re-implemented the algorithm of [6] and tested the algorithm presented in this paper, the algorithm of [5] and the algorithm of [6] on both a randomly generated circuit set and an MCNC benchmark circuit set. In the random circuit set, there were 1200 circuits in each input circuit set. Every circuit in the random circuit set contained at least two candidate transformations. In the test, we recorded the three indicators concerning the search space and the run time. The proposed algorithm was implemented in C with buddy package. The following experimental results were obtained in a hardware environment with a 3.3-GHz Intel Xeon processor and 4 GB of memory.
In the following tables, the first column shows the number of input variables (#I), and the following four columns show the experimental results for our algorithm. The next four columns show the corresponding experimental results of [5], and the last column shows the average run time of the algorithm of [6]. Tables 1 and 2 From Table 1, we can see that the run time of our algorithm is improved by 54% relative to that of [5] and by 84% relative to that of [6]. From the comparison of the three indicators for the search space, we can see that the number of branches in the transformation tree is reduced by 70%, the number of candidate transformations is reduced by 65%, and the number of decompositions is reduced by 27%. Because the Boolean difference facilitates the identification of the variables, the proposed algorithm reduces the search space and speeds up the matching process. Figure 4 presents the diagram of the search space comparison results for our algorithm and that of reference [5] tested on the random circuit set. Figure 5 presents the diagram of the speed comparison results for our algorithm, that of reference [6] and that of reference [5] tested on the random circuit set. Table 2 shows the experimental results obtained during testing on the MCNC benchmark circuit set. Table 2 shows that with the proposed algorithm, the values of the three indicators for the search space are decreased, and the run time is also slightly reduced. When there are 22 inputs, however, the average run time of our algorithm is higher than that of [5]. This is because the variables of this group circuit are easy to identify and because the search space of our algorithm is almost the same as that of [5]. In this case, our algorithm spends additional time in computing the Boolean differences compared with the algorithm of [5].
From Tables 1 and 2, we can see that the matching speed on the MCNC benchmark circuits is slower than that on random circuits, although the search space for the MCNC benchmark circuits is less than that for the random circuits. In this paper, we use BDDs to represent Boolean functions. The BDD structure of a Boolean function is closely related to the speed of operations on the BDD. Because the BDD operation speed on the MCNC benchmark circuits is slower than that on the random circuits, the matching speed on the MCNC benchmark circuits is also slower than that on the random circuits . Average Matching Time (s) Our Algorithm reference [5] reference [6] Figure 5. The matching speed comparison results for testing on random circuits.
Conclusions
The major contribution of this paper is the raise of SDS vector. The paper demonstrates how SDS vectors can be used to effectively search variable mappings and reduce the search space. The algorithm of this paper take advantage of cofactor, symmetry and Boolean different when search the variable mappings between two Boolean functions. Therefore, the search space and match speed of ours algorithm is better than the competitors. Compared with the algorithm of [5], the search space is cut in 48%, and the run time is reduced by 42% and 80% compared with [5,6], respectively. The experimental results prove that the algorithm proposed in this paper is more effective than competing algorithms on general circuits. In future work, we will extend our algorithm to multiple-output Boolean matching and Boolean matching with don't care sets. | 9,503.2 | 2018-12-29T00:00:00.000 | [
"Computer Science"
] |
Revisiting $^{129}$Xe electric dipole moment measurements applying a new global phase fitting approach
By measuring the nuclear magnetic spin precession frequencies of polarized $^{129}$Xe and $^{3}$He, a new upper limit on the $^{129}$Xe atomic electric dipole moment (EDM) $ d_\mathrm{A} (^{129}\mathrm{Xe})$ was reported in Phys. Rev. Lett. 123, 143003 (2019). Here, we propose a new evaluation method based on global phase fitting (GPF) for analyzing the continuous phase development of the $^{3}$He-$^{129}$Xe comagnetometer signal. The Cramer-Rao Lower Bound on the $^{129}$Xe EDM for the GPF method is theoretically derived and shows the potential benefit of our new approach. The robustness of the GPF method is verified with Monte-Carlo studies. By optimizing the analysis parameters and adding data that could not be analyzed with the former method, we obtain a result of $d_\mathrm{A} (^{129}\mathrm{Xe}) = 1.1 \pm 3.6~\mathrm{(stat)} \pm 2.0~\mathrm{(syst)} \times 10^{-28}~ e~\mathrm{cm}$ in an unblinded analysis. For the systematic uncertainty analyses, we adopted all methods from the aforementioned PRL publication except the comagnetometer phase drift, which can be omitted using the GPF method. The updated null result can be interpreted as a new upper limit of $| d_\mathrm{A} (^{129}\mathrm{Xe}) |<8.3 \times 10^{-28}~e~\mathrm{cm}$ at the 95\% C.L.
Introduction
A quantum field theory that models the formation of the imbalance of matter over antimatter in our universe must fulfill the Sakharov conditions [1]. One of those conditions is the CP violation (C is charge conjugation and P is parity reversal). The best-tested standard model (SM) of particle physics provides two sources of CP violation, the phase of the Cabibbo-Kobayashi-Maskawa matrix and the termθ in the QCD Lagrangian [2]. However, the CP violation within the SM is too small to produce the observed rate of the matter to antimatter asymmetry, motivating searches for physics beyond-the-SM (BSM). BSM theories generally include additional sources of CP violation [2,3], such as a larger permanent electric dipole moment (EDM) of fundamental or composite particles [4,5]. So far, all measurement results of EDMs in more than ten diverse systems, with the first published in 1957 [6], are consistent with zero. These null results are interpreted as upper limits on EDMs and place constraints on various sources of CP violation and masses of BSM particles, thus directing the search of BSM scenarios [7].
Long spin-coherence time and obtainable high polarization leading to high signal-to-noise ratios (SNR) make several diamagnetic systems such as the 199 Hg and 129 Xe atom promising candidates for EDM experiments. Over the last 40 years, significant progress was made in the determination of upper limits for EDMs of diamagnetic systems (see Fig. 1). At present, the 199 Hg atomic EDM measurement is the most sensitive, and its upper limit sets constraints to multiple sources of CP violation [8]. Considering various potential contributions to an atomic EDM, an improved limit on other systems, like the 129 Xe EDM d A ( 129 Xe), will tighten these constraints. The theoretical results for 129 Xe EDM are more accurate and reliable than those obtained for 199 Hg EDM, therefore 129 Xe has the potential to probe new physics [9].
Recently, new upper bounds on the 129 Xe EDM using 3 He comagnetometry and SQUID detection have been reported by a joint collaboration between the University of Michigan, the Technical University of Munich and the Physikalisch-Technische Bundesanstalt(PTB) [11] as well as another independent group with comparable sensitivities [15], which are about five times smaller than the previous limit set in 2001 [16]. One of the challenges in both L. For all systems, the current upper bound has decreased more than an order of magnitude compared to their first published result [8,10,11,12,13,14,15,16].
experiments is the comagnetometer frequency drift, which is several magnitudes larger than the expected frequency shift due to a potential 129 Xe EDM [17]. One approach to correct for the impact of the comagnetometer drift on the measured d A ( 129 Xe) is using a deterministic physical model to fit the comagnetometer frequency drift [15,18]. However, the physical origin of the comagnetometer frequency instability is subject of a controversial debate [19,20], which was inspired by another theoretical model and motivated the performance of recent experiments to substantiate the former criticism [21,22]. Instead, in Ref. [11] a phenomenological method was used, which does not need any physical model on the comagnetometer frequency drift, but a distinct pattern of electric fields with switching polarity.
We will refer to that as the Pattern Combination (PC) method from here on.
Here, we propose a new analysis based on a Global Phase Fitting (GPF) method, where the EDM value is estimated by a single fit to the comagnetometer phase development within one complete measurement. Besides an experimentally deduced EDM function as used in Ref. [15], allowing to analyse any electric field pattern, our GPF method uses a polynomial function to account for the comagnetometer frequency drift. Sec. 2 gives a short description of the basic principle of measuring the 129 Xe EDM d A ( 129 Xe) using comagnetometry. In addition, the PC method is introduced for comparison with the GPF method. The GPF method is elucidated in detail in Sec. 3, including the derivation of the Cramer-Rao Lower Bound (CRLB). The CRLB of the variance on the EDM value estimation using the GPF method is a factor of four smaller than that of the PC method. In Sec. 4 we validate the GPF method with Monte-Carlo simulations and compare the results of the PC and GPF method using the experimental data obtained for Ref. [11]. Eventually we recalculate the systematic uncertainties based on Ref. [11] and derive a new upper limit for the permanent 129 Xe EDM. For 129 Xe atoms stored in a cell permeated by a uniform magnetic field B and an electric field E, that is parallel to B, their nuclear spin precesses at an angular frequency where F Xe = 1/2 is the total angular momentum number and γ Xe is the gyromagnetic ratio of 129 Xe. The magnetic field B in Eq. (1) becomes an interference term for directly calculating d A ( 129 Xe) from ω Xe . To overcome the experimental difficulties on controlling and measuring B, comagnetometry was introduced with two collated species measured at the same time [16,17,23]. 3 He is an ideal candidate for comagnetometry due to its potentially high SNR and a negligible EDM compared to d A ( 129 Xe) [24]. The weighted frequency difference between 129 Xe atoms and 3 He atoms is defined as and commonly named the comagnetometer frequency. Here ω He = |γ He B| is the spin precession frequency of 3 He atoms with γ He being its gyromagnetic ratio. Therefore, ω co can be written as showing that ω co is independent of the magnitude of the background magnetic field but depends on its orientation relative to the applied electric field. The current measurement sensitivity of ω co is in the nHz range for a single measurement, while the comagnetometer frequency drift is at the µHz level, which causes a non-negligible systematic error [21,22,25]. Multiple physical models to describe the comagnetometer drift were proposed. The dominant terms thereby vary in different models. Furthermore, several parameters, such as the longitudinal relaxation time T 1 of the nuclear spins, used in these models are unknown or difficult to measure, making the frequency drift correction with a deterministic model inaccurate. By using a phenomenological model such as proposed here and in [11], these currently unsolved difficulties can be omitted.
Parameters of two measurement campaigns
The data used in our analysis were collected in the joint collaboration at the Berlin Magnetically Shielded Room (BMSR-2) facility at PTB Berlin. Table 1 summarizes the main experimental parameters of the two measurement campaigns carried out in 2017 and 2018, respectively. More details on the setup and process are given in Ref. [26]. The spin precession signal of the transverse magnetization of 3 He and 129 Xe was recorded by a dc-SQUID system with two channels (Z1,Z2). The high voltage and leakage current between the two electrodes of the cell were monitored. A background magnetic field B 0 in the range of 2.6 µT -3 µT was applied to shift ω Xe and ω He to 30 Hz -36 Hz and 90 Hz -98 Hz, respectively, which are well above the vibrational interference signals (see Fig. 2). In order to further decrease the impact of the vibrational noise, a software SQUID gradiometer (Z1 − Z2) was used. The left panel of Fig. 2 shows the raw SQUID gradiometer signal in pT (gray) of one run from the 2018 campaign lasting 35000 s exemplarily. This run comprises two so called sub-runs with 36 segments each. A segment is defined as the time of constant electric field. For the two sub-runs shown in Fig. 2, the segments last 300 s and 600 s, respectively. The first sub-run ranging from 50 s to 12400 s is used as an example in the data analysis section.
PC method
As mentioned above, one approach to mitigate the effect of the comagnetometer frequency drift is repetitively reversing the direction of the electric field E. This allows to separate the impact of d A ( 129 Xe) on ω co from other interference terms. The E modulation method has been applied in diverse EDM experiments with varied modulation patterns [10,14]. For the PC method, the common E pattern for one sub-run consists of 36 segments with an equal time interval t s , and the sign of E changes according to the following sequence ±[0 + --+ -+ + --+ + -+ --+ 0, 0 -+ + -+ --+ + --+ -+ + -0]. The segments of zero voltage were added to allow for systematic error studies. The PC method determines the EDM value from averaging the comagnetometer frequencies ω co from 2 n (n ∈ N) consecutive segments omitting those with zero voltages. This pattern is constructed to cancel the effect of the comagnetometer frequency drift up to n − 1 order when parametrized in polynomials. The effect of the higher order (above n − 1) drift dependency imposing a false EDM on each sub-run is deduced by applying polynomial fits to all ω co within the sub-runs, leading to a correction for the EDM and an additional systematic uncertainty (for more details see Ref. [26]).
GLOBAL PHASE FITTING METHOD
The general data-processing procedure for the GPF method is illustrated in Fig. 3. For this method, the raw SQUID data of a sub-run is cut into continuous blocks of equal length. Each block data is fitted to deduce precession phases of both species 3 He and 129 Xe (see Sec. 3.1) and the continuous comagnetometer phase is derived for each block (see Sec. 3.2). For data blinding an additional phase, bound to the measured high voltage signal, can be added to the comagnetometer phase at this point (see Sec. 3.3). The EDM value is acquired by fitting the blinded comagnetometer phases using a polynomial function together with a constructed function comprising the phase evolution introduced by a hypothetical 129 Xe EDM. The unblinded EDM result is obtained by reanalyzing the raw comagnetometer phases, as illustrated in Fig. 3.
The phase of each block
The block length t b is a free parameter with a suitable range from 1 s to 20 s, being short enough to exclude the amplitude decay and frequency drift, and long enough to perform the fit for our data [11]. The SQUID data in each block are fitted to the function where a Xe/He/i , b Xe/He/i , ω Xe/He , c, and d are the fit parameters and ω i=1,2,3,4 = 2π × 50i s −1 represent the power frequency and its harmonics. The constant and linear terms c and d · t describe the background magnetic field and its small drift as seen by the SQUID. The variable projection (VP) method is applied [27], where the nonlinear parameters ω Xe/He are estimated separately from the linear parameters a Xe/He/i , b Xe/He/i , c, and d. To minimize the correlation between the fit terms in Eq. (4), the time of each block is assigned to be symmetrical around zero from −t b /2 to t b /2. shows the raw SQUID data of a 5 s block from the start of the exemplary sub-run and the residual of the fit to this data. The residual is dominated by the mechanical vibration in the frequency range of 4 Hz -25 Hz as shown in the right plot of Fig. 2. We can assume approximate orthogonality between the precession signal and the vibrational noise of our setup. Therefore, the error on the fit parameter values caused by the latter one is negligible compared to that caused by the white noise, although its integrated power is much larger than the white noise power. This was validated with Monte-Carlo simulations using the recorded vibrational noise (see Appendix A.1). The phase of each species for the block k in the range of [−π, π) can be obtained by where Arg is the function to get the principle argument of a complex number, i is the imaginary unit and m = Xe or He. Note that due to the time centering, the estimated phase φ k is referred to the middle time of each block The time interval of the block k is defined as (t k−1 , t k ). The parameter uncertainties δa k m and δb k m are estimated from the covariance matrix of the fit where r is the residual, ν is the degrees of freedom and J is the Jacobian matrix. The standard deviation of the derived phase δφ k Xe/He is Eq. (6) assumes that the residual r stems from the wideband white noise, which is a conservative approach for our case since the main signal in the residuals is the narrowband vibrational noise, leading to an overestimation of the uncertainty δφ k m . However, the ratio between δφ k m for different blocks reflects the decaying SNR. Therefore, these estimated uncertainties are used as weights in the subsequent GPF routine.
The accumulated comagnetometer phase
The accumulated phase Φ k m in a block k of the continuously precessing spins is the sum of the wrapped phase φ k m and a multiple of 2π where the integer n k m is determined as rounded to the lower integer and n 1 m = 0. Here, the frequencies ω k−1 m are obtained by the fit of the block k using Eq.
is either > π or < −π, n k m is incremented or decremented by one, respectively, to ensure a continuous phase evaluation. The standard deviation of the accumulated phase δΦ k m is equal to δφ k m as Eq. (8) does not introduce any additional uncertainty. According to Eq. (2), the evolved comagnetometer phase Φ k co for each block k is determined by
The fitted EDM value
By integrating Eq. (3), the accumulated phase due to a hypothetical 129 Xe EDM d h at the block k is where E i is the average electric field within the block i. By replacing d h with a computer-generated pseudo-random EDM value d bias , the bias phase Φ k bias is calculated and then used to blind the comagnetometer phase Φ k co,b = Φ k co +Φ k bias in order to avoid operator induced bias during process optimization. The value of d bias has been saved in an independent file in a binary format and Φ k co,b was used for later data analysis. The measured phase Φ k co originates not only from the potential 129 Xe EDM, but also from other sources such as chemical shift [21,26]. These contributions are phenomenologically parametrized by a polynomial of gth order [28]. Hence, the comagnetometer phase is fitted with the function where a, p 0 , p 1 , p 2 , . . . , p g are the global fit parameters. Here the time series t k are normalized to the interval [0,1] and shifted Legendre polynomialsP n (t k ) are applied to decrease the correlation between polynomial coefficients [29]. The fit was conducted by using the iterative least squares estimation method with the built-in function nlinf it in MATLAB. Thereby the inverse values of the phase variances (δΦ k co ) 2 are used as weights. Fig. 5 shows the comagnetometer phase Φ k co , the fit phase Φ k fit , and the EDM function Φ k EDM constructed from the measured E-field pattern of the exemplary sub-run. To determine the order needed for the polynomial function in Eq. (12), we apply an F -test where the significance of adding q terms to the fitting function with g terms was evaluated by the integral probability where P F is the probability density function of the F -distribution and N is the number of data points [30]. The upper bound of the integral is The order of the fit was defined sufficient when P g,g+1 as well as P g,g+2 are both smaller than a chosen threshold of P min . The atomic EDM of 129 Xe is calculated from the fit parameter a as The correlated uncertainties of the parameters are determined as the square root of the reciprocal of the diagonal of the covariance matrix, which inherently includes the uncertainty of the correlations between a and polynomial parameters. The influences of these correlations to the estimation of a are small due to the orthogonality between the constructed function Φ k EDM and the polynomial function of the order up to n − 2 where 2 n is the number of nonzero high voltage segments. The correlation matrix for the exemplary sub-run (see Fig. 2) is given in Table 2. In this case, the correlations between the EDM parameter a and the polynomial coefficients are significantly smaller than 1, but nonzero, since the polynomials of higher than 3rd order are not orthogonal to Φ k EDM . The derived uncertainty is in good agreement with the result using the log profile likelihood method. We also applied the linear regression method with the model in Eq. (12) and obtained consistent results.
The modified Allan deviation
The modified Allan deviation (MAD) is an established tool to evaluate the low-frequency drift of a time series of phases Φ, which is defined as where the integration time τ is n times the block length t b , and the total measurement time T is subdivided into P time intervals of equal length τ , such that P τ ≈ T [31]. As an example, the MAD of the exemplary sub-run is plotted in Fig. 6. σ f of Φ k co reaches the minimum at the integration time τ of 550 s and then increases due to the comagnetometer frequency drift. For the residual phase Φ k co − Φ k fit of this exemplary sub-run, the MAD decreases with increasing integration time according to σ f ∝ τ −3/2 (dashed line in Fig. 6) over the considered range, down to 0.4 nHz. This behavior is an indicator that the comagnetometer phase Φ k co is adequately described by the fit model of Eq. (12), since the residual is dominated by white phase noise. 6. The modified Allan deviation and its error bar of the accumulated comagnetometer phase and the residual phases for the fit with a 7th order polynomial. To fulfill the MAD statistics criteria [31], only data are shown for integration time τ < 4000 s.
The theoretical statistical uncertainty bound
The theoretical limit of the 129 Xe EDM uncertainty can be derived as the CRLB, which also provides insights into optimizing experimental parameters. For the sake of simplicity, only a single species spin-precession signal is considered and its amplitude is assumed to be a constant over the whole sub-run. For the GPF method, d A ( 129 Xe) is estimated with two steps: The VP fitting to obtain the phase of each block and global phase fitting of the sub-runs. Therefore, the overall CRLB is the combination of the results of these two fits.
For the phase φ of a sinusoid embedded in white Gaussian noise (WGN) observed over one block with time being symmetrically around 0 s, the CRLB is where σ 2 w is the variance of the WGN, A the amplitude and N the number of data points in one block [32]. The CRLB for the parameters in the fit model Eq. (12) is the reciprocal of the Fisher information matrix where M is the number of segments in one sub-run, and J is the number of blocks in one segment. For the sake of simplicity, the standard polynomial is used in the fit model Eq. (12). Assuming JM k=1 Φ k EDM t i k = 0 for i going from 0 to g and the phase uncertainty δφ is a constant, the considered CRLB can be simplified to the so called ideal or uncorrelated CRLB as By substituting Eqs. (11), (17) and (18) into Eq. (19), and exploiting the periodic property of the constructed EDM function (see Fig. 5), for our case, where T = M JN ∆t is the total measurement time and ∆t = 1/f s is the sampling interval. Note that the number of segments M should be large enough to ensure the orthogonality between Φ k EDM and the polynomial functions. In case of an exponentially decaying amplitude A of the precession signal, the CRLB has to be calculated with Eq. (18). For the PC method, the CRLB on the 129 Xe EDM for M segments is derived in Ref. [26] as The PC method applies linear fits to the comagnetometer phases within one segment to derive the comagnetometer frequency of each segment, which requires the addition of an interception term as a starting phase, increasing the variance by a factor of four compared to a linear fit without interception term. In the GPF method the accumulated comagnetometer phases within one sub-run are analyzed in a single fit, therefore the uncertainty does not increase as the interception term is orthogonal to the EDM function (see Eq. (18)). Furthermore, the PC method requires the unweighted average of at least four segment frequencies, which increases its statistical uncertainty even further.
Results
A Monte-Carlo study was conducted to confirm that the GPF method can reach the higher sensitivity as shown by the CRLB compared to the PC method. Later, the GPF method was used to obtain the 129 Xe EDM from the data set as taken in Ref. [11] using the same channel and block length for analysis. As there were data sets in the 2017 and 2018 campaigns which were not useable with the PC method but could be analyzed with the GPF method, we gathered all data and optimized the analysis parameters to obtain the minimum uncertainty from the data. Ultimately, an improved upper limit of the 129 Xe EDM was derived using the unblinded data.
Monte-Carlo tests
The accumulated phase of each spin species for the sampling point j was generated as Φ j He,syn = tj 0 γ He B(t) + 2π(f He lin + u He e −t/T He 1 )dt, (24) where the drift of the background field B(t) was parametrized with a 4th order polynomial. f Xe/He lin represent the frequency shifts caused by the chemical shift and Earth's rotation. u Xe/He are the drift amplitudes of the respective precession frequencies. The frequency drift was modeled as exponentially decaying functions with the characteristic time of T 1 [21,22,25]. Thereby it was assumed that T 1 is larger than T 2 and its range is listed in Table 3. f EDM is the frequency shift due to the coupling of a synthetic EDM d syn with the electric field according to Eq. (3). Substituting Eqs. (23) and (24) into Eq. (10) results in the synthetic comagnetometer phase, whose time dependence is designed to mimic the measured data (for details see Appendix A.2). The exponentially decaying spin precession signals of 129 Xe and 3 He atoms can be described by with t j = j∆t , which is the time for the sampling point j. Table 3. Three different kinds of noise were separately added into the synthetic data, including two WGN with σ = 154 fT, the standard deviation of the white noise in the real data, and σ = 154/5 = 30.8 fT, as well as real SQUID gradiometer noise. The overall EDM values obtained with the GPF method from the 18 synthetic sub-runs for four synthetic values d syn = (1, 2, 5, 10) × 10 −28 e cm are plotted in Fig. 7. The averaged overall EDM uncertainty for WGN data with σ = 154 fT is 1.74 × 10 −28 e cm, which is roughly a factor of 5 larger than that obtained from the data with σ = 30.8 fT and a factor of 1.1 higher than the calculated CRLB for these 18 sub-runs, which is 1.59 × 10 −28 e cm. This mainly results from the correlation between the EDM and the parameters of the polynomials in the phase fit. The uncertainty for the real noise is 1.85 × 10 −28 e cm, being similar to that for the white noise with σ = 154 fT. Most of the 1σ confidence intervals of the derived EDM cover the added EDM values d syn , showing that the GPF method is capable of accurately obtaining d syn ≥ 1 × 10 −28 e cm independent of the realistic noise level.
Statistical uncertainty
Applying the GPF method to the same data set of 41 runs (80 sub-runs) as analyzed by the PC method [11] and using the same channel and analysis parameters, the statistical uncertainty is decreased by a factor of 2.1 from 6.6 ×10 −28 e cm to 3.1 ×10 −28 e cm. Due to fewer constraints in the GPF method, runs with the number of segments M = 4n with n ∈ N or having SQUID jumps could be included in the data analysis, leading to a total of 45 runs (87 sub-runs). Furthermore, the segments with zero high voltage are included into the analysis. For the analysis, the block length is t b = 5 s, the threshold of the F -test is set to P min = 0.6 (see Appendix B) and the minimum order of the polynomial used in the fit is set to 4 in order to adequately describe the comagnetometer phase drift. The average polynomial order used for all sub-runs is 6.4 and the maximum order 13.
The overall result using the full data set is d A ( 129 Xe) = 1.1 ± 3.1 × 10 −28 e cm with χ 2 /dof = 115.5/86. As all sub-run measurements were taken with considerable different background noise a χ 2 /dof ≥ 1 can be expected. According to the PDG guidelines [33] we accounted for these random variations by scaling the statistical uncertainty with the factor χ 2 /dof = 1.16 leading to 3.6 × 10 −28 e cm. Bootstrapping [34] the 87 EDM measurements resulted in an estimate of the statistical uncertainty of 3.14 × 10 −28 e cm. Fig. 8 shows the derived EDM results per sub-run. Sorting all EDM measurements into groups based on the experimental parameters, such as the cell geometry, B 0 field direction, number and duration of segments and the gas pressure, shows no correlation between the deduced EDM value and these parameters, as can be seen in Fig. 9. Furthermore, no correlation between the chosen polynomial order and the derived sub-run EDM values was seen.
Systematic uncertainty
The systematic uncertainties of the two experiment campaigns were extensively studied in Ref. [11]. We applied the same analysis to the full data set used here and the derived systematic uncertainties are summarized in Table 4. The correction to the comagnetometer frequency drift of order higher than 1, as it has been done in Ref. [11], becomes obsolete for the GPF method since the model of Eq. (12) considers the higher order drifts implicitly.
As mentioned above, the GPF uses the full data set, including data during the high voltage rampings. Therefore the charging current can have an impact on the result in two ways. First the charging current could magnetize parts of the experimental equipment, and change the magnetic field seen by the spins. By this mechanism a false EDM may be generated. This effect has been carefully analyzed in Ref. [11] and has been adapted for the data set used for GPF (see Charging current in Table 4). Secondly, the charging currents just as the leakage currents will generate magnetic fields, which are correlated with the electric field direction. This effect is only present during the ramping lasting for a few blocks per segment, ranging from 20 to 160 blocks. The impact of the charging current acting as a leakage current is calculated and turned out to be negligible, relative to the effect of leakage currents as given in Table 4.
We further looked for the potential effect of the comagnetometer drift and the vibrational noise with Monte-Carlo simulations and did not find observable systematic error, see Appendix A. The overall systematic uncertainty is the weighted average of the systematic uncertainties of the two measurement campaigns 2017 and 2018 using the reciprocal of its statistical variance as weights, yielding 2.0 × 10 −28 e cm. The final result, separating the statistical and systematic uncertainties, is d A ( 129 Xe) = 1.1 ± 3.6 (stat) ± 2.0 (syst) × 10 −28 ecm, from which we set an upper limit |d A ( 129 Xe)| < 8.3 × 10 −28 e cm at the 95% C.L. This reanalysis leads to a limit that is a factor of 1.7 smaller compared to the previous result [11] and a factor of 8.0 compared to the result in 2001 [16].
SUMMARY AND OUTLOOK
We proposed a global phase fitting method to analyze spin precession data. Applying the GPF method to the data set used in Ref. [11] yields a consistent result for d A ( 129 Xe) but a two times smaller statistical uncertainty compared to the PC method, as predicted by the theoretical CRLB analysis. Using additional data which had to be discarded for the PC method due to incomplete electric field patterns and optimizing the analysis parameters, the upper limit of the 129 Xe EDM improves by a factor of 1.7 to |d A ( 129 Xe)| < 8.3 × 10 −28 e cm at the 95% C.L. This enables 129 Xe to be used as a comagnetometer in future neutron EDM experiments [35] with a systematic error contribution down to |d A ( 129 Xe)| × γ n /γ Xe = 2.1 × 10 −27 e cm. Our GPF method relieves the demands on the physical model describing the comagnetometer frequency drift and could be generally used in similar spin precession experiments, such as the Lorentz-invariance test. By optimizing the experimental parameters for the GPF method (see Appendix C), the upper limit for d A ( 129 Xe) could be reduced even further, as planned for an upcoming EDM campaign with optimized high voltage pattern. To investigate the potential systematic effect caused by the comagnetometer phase drift, we altered the drift amplitude u Xe and u He in Eqs. (23) and (24) in the synthetic phase data. Fig. 12 shows the derived EDM values as a function of the scale ratio of the drift amplitude. No distinct correlation between the obtained EDM value and the drift amplitude could be observed. Therefore, we did not assign a model dependent uncertainty for the comagnetometer drift when applying the GPF method.
B THE F -TEST THRESHOLD
The F -test threshold P min affects the polynomial order used in the GPF method, as listed in Table 5. The EDM values for various P min are overlapped within the 1σ statistical uncertainty and are all consistent with zero. Additionally, the upper limit of the 129 Xe EDM is almost insensitive to the threshold. We have chosen 0.6 as F -test threshold yielding the highest upper bound.
C DESIGN OF EXPERIMENTAL PARAMETERS
The number of segments M in one sub-run has a significant impact on the estimation uncertainty derived by the GPF method. According to the ideal CRLB, a smaller number of segments results in a lower uncertainty, shown as the red line in Fig. 13. To search for the optimum segment number, we used the synthetic comagnetometer phase data with added white Gaussian noise. The phase uncertainty increases with time and starts with 0.1 mrad. The time constants T 2 for 129 Xe atoms and 3 He atoms are a random number ranged from 8000 s to 9000 s. The total measurement time length is fixed to 38400 s, while M is varied from 2 to 64. The averaged EDM value over 100 runs for each M are plotted as the blue crosses. The fit uncertainty is larger than the ideal CRLB due to the correlation between the EDM function and the phase drift. The gap is reduced with the increase of M , since the orthogonality condition is satisfied better. A relatively flat optimum is found around M = 16. Note that this optimum value also depends on the total measurement time. A sub-run with longer measurement time calls for a higher number of segments, hence the optimum number for T = 6400 s and T = 64000 s is 8 and 64, respectively. The improved understanding of the comagnetometer frequency drift behavior may reduce the requirement on the segment number, thus significantly increasing the measurement sensitivity. | 7,884.8 | 2020-08-18T00:00:00.000 | [
"Physics"
] |
The importance of COVID-19 testing to assess socioeconomic fatality drivers and true case fatality rate. Facing the pandemic or walking in the dark?
Abstract
Background
The COVID-19 outbreak has disrupted economic and social life all over the world, and its scope is not yet certain, but it is definitively deep and lasting.Governments, policymakers, politicians, physicians, medical employees, scientifics and international organisations have gathered together into a virtual space for collaboration to find answers to all the raised questions.Apart from defeating the virus by developing a vaccine and/or finding a drug largely effective for patients with COVID-19, among the most important governments concerned in the short term, the impact of COVID-19 on the health system, namely, availability of health infrastructure, as well as finding the best strategy for reducing as much as possible the effects of the pandemic in economic and social aspects.The World Health Organization (WHO) has recommended social distancing measures to slow down virus spreading and, in this way, prevent medical services from collapse.However, in the long term, the WHO expects that the virus will remain present with periods of low-level infections, perhaps with seasonal increments (WHO,2020).Therefore, governmental strategies should aim to ensure that health services are available to attend COVID-19 patients without compromising all the other health services in the medium and long terms.In the document published on 15 th April by the WHO (2020), a set of recommended actions for public policies are outlined, in which the continuous tracking of the virus is recommended to be able take regional public health and social measures, so-called lockdowns, only at high-risk regions, or places where contagions return high.At the centre of the recommendations is the importance of testing (Sanchez, 2020) and the use of serological tests in line with scientific recommendations (CDC, 2020).Likewise, the Organisation for Economic Cooperation and Development (OECD) (2020) highlights the importance of testing by presenting an analysis of a better performance observed in countries with a high number of tests per million inhabitants.It is also pointed out that the increase in tests will help gather essential information to study the virus, especially to determine whether the population is developing antibodies, whether the virus can mutate and how to deal with COVID-19 in the following months.In addition, it is particularly important to find the asymptomatic proportion in the population, first to assess the probability of contagion from such individuals to others and second, to estimate the true CFR.
There is great diversity in the public health and social measures taken by each country against the pandemic, which can be grouped into three lines of action.First, it ensures a good supply of medical equipment and vacates the hospitals as much as possible.Second, social distancing measures, from banning international travel, suspending schools, encouraging teleworking, etc. Third, economic measures are needed to guarantee the wellbeing of the population, with special support for firms and families.Naturally, not all countries have followed the same set of actions.In fact, there are wide differences in the economic and social distance measures.Some countries implemented severe restrictions once the domestic contagions increased considerably, such as Italy, France, and the United Kingdom, while Peru and the United States (US) closed the international airports shortly after the first COVID-19 case was confirmed, yet this measure was not that effective, especially for the latter.Others implemented massive testing preventing the cases from exponential increase, such as Iceland, Singapore and Korea (OECD, 2020).Additionally, among the countries with a larger number of applied tests is Luxemburg, which has recently been published to test all its population2 .
In addition, law enforcement capacity and political organisation might have also played a significant role in this regard.For instance, in Mexico and the US, sub-national governments could regulate regional social distance measures.Meanwhile, the economic organisation, informality and the limited or null presence of the welfare state hinder the social and economic lockdown (Loayza, 2020), namely, entrepreneurs and employees in the informal economy might not access economic aid3 .According to the World Labour Organization (WLO), more than 60% of employment in the world is informal, breaking into regions; in Africa, 85.8% of employment is informal, in Asia and the Pacific 68.2%, 68.6% in the Arab States, 40.0% in the Americas and 25.1% in Europe and Central Asia4 .In addition, according to Loayza (2020), in developing countries, lockdown measures are less effective for several reasons, namely, people will continue to work if their income is compromised, confinement in overcrowded dwellings with poor sanity access might increase the risk of contagion, and displacement of people from urban to rural areas would move the contagions spreads to rural areas, which frequently have less access to medical services and sanity.
It is important to note that there are 70 countries in the sample, and they concentrate 96% of confirmed cases worldwide.The distribution is shown in Figure 1.It is clear that the majority of cases are concentrated in developed countries, while developing economies only account for approximately 20% of the cases.Africa registered only 1% of worldwide cases.From the initial analysis presented with the Chinese experience, it has been stated that the health of individuals, as well as their age, are important drivers for virus fatalities (The Novel, 2020).However, there is still little evidence about the correlation between the aggregated indicators of population health and health infrastructure and fatalities.
Resuming, the effectiveness of lockdown measures has been questioned, given that it is likely that the virus will continue to spread in the long term, while there are huge economic losses.
The likely underidentification of cases in developing nations would prevent further control The paper is organised as follows: in the second section, the materials and methods are explained, the third section presents the results, the fourth section presents a discussion, and the fifth section summarises the conclusions and policy implications.
Data
The data employed were taken from different sources.For COVID-19 cases and testing, the data came from ourwoldindata.org, in combination with GitHub5 , and the data on cases, deaths and tests encompassed7 May.For health indicators, the OECD6 and WHO 7 databases were consulted.The data collected correspond to the most recent data available.
For the cross-section models, the countries included are those that reported a 3-day average of 3 new deaths in at least one day.This criterion has been made to take out of the sample the countries in which COVID-19 has not been widely spread until now.Upon this criterion, a sample of 71 was obtained, and the full list is in the additional files (see Additional file 4).
A subsample for the OECD was also built.Not all OECD members were included due to lack of information or because they do not meet the abovementioned criterion for COVID-19 deaths.For the panel data analysis, all available information was used, yet given that many countries do not report daily ciphers, or they do not change over time, the sample is smaller, reduced to 66.A full list of the countries used per model is presented in the additional files (see Additional file 4).
Ordinal Probit model specification
An ordinal probit model allows the use of an ordinal list as a dependent variable, which can be numeric or categorical.The model was estimated with Stata.The dependent variable for this model is the CFR, which takes values from 1 to N, where 1 is assigned to the countries with the lowest CFR.
The estimation of CFR is difficult for several reasons.First, the universe of confirmed cases.
Due to the very different criteria for test applications, in most countries, the tests are administered only to those presenting symptoms, at least fever, or those requiring hospitalisation.Therefore, the universe of cases is well underestimated.Nonetheless, there is still no agreement over the likely size of this underestimation; depending on the study, the asymptomatic cases are estimated to be between 5% and 80% (Heneghan, Brassey and Jefferson, 2020).For instance, Iceland is the country with more tests applied per million inhabitants due to a massive testing strategy.In this case, they identified 50% of the positive cases as asymptomatic (Heneghan, Brassey and Jefferson, 2020).In the case of the Diamond Princess cruise ship, the proportion of asymptomatic to total infected was estimated to be 17.9% (Mizumoto, K., Kagaya, K., Zarebski, A., Chowel, G., 2020).Second, differences in registers.Some countries recognize COVID-19 death as suspicious; this is that lived with a former late COVID-19 patient or was closely related; meanwhile, other countries only account for the confirmed cases.Third, the timing matters.It has been confirmed that, similar to other viruses, once a person is infected, it takes up to two weeks to develop symptoms; if that is the case, a person can develop a mild flu-like illness, which according to the first Chinese analysis, this proportion was estimated to be up to 81% (Novel Coronavirus Epidemiology Response, 2020).However, those entering severe and critical states might be hospitalised, and it takes several days until a fatality occurs.In view of that, obtaining the CFR by using the proportion of current deaths to current cases is a misleading indicator, since the actual deaths from current cases will be reported later (Battegay et al., 2020).
Following the recommendation by Battegay et al. ( 2020), the third problem has been addressed by estimating the CFR as follows: This measure is larger than a current indicator, yet it might be more accurate.Figure 2 shows three different CFRs throughout the world.It is clear that the larger the lag in the total cases, the larger the CFR will become.However, it is noticeable that they tend towards convergence.
Figure 2 CFR for the world.Source: own elaboration.
In Table 1, the values at the beginning and end of the period are shown.For the three indicators, the CFR is higher at the end of the period, and the difference among them diminished.It is also important to mention that the first reported death came on the 12 th day after the first case was registered.Therefore, it is important to use a lagged number of cases for a better estimate.The model used is as follows: where is the Case Fatality Rate ranking for country ; for the full CFR per country, see the additional file (see Additional file 1), is a vector of variables corresponding to health indicators, both on infrastructure and on population health, which could help to explain the difference in CFR across countries, such as obesity, diabetes, presence of elderly people, and The models are specified as follows: For all the models, the explanatory variables are two: the 7 th lag of new tests per million inhabitants and the square of the stringency index.The seventh lag of new tests per million is used given the claims that early testing reduces the chances or greater infections (OECD, 2020).At the same time, similar to CFR, it is considered the time for the virus to develop; for instance, a person who is asymptomatic today might develop symptoms within a week.
Mizumoto et al. ( 2020) estimated a range of 5.5 to 9.5 days for incubation, yet it is still uncertain.There are cases in which people might show symptoms and die within a few days8 .Given the difficulties determining the best lag to consider, two choices are shown, the 7 th and the 15th.Regarding quarantine measures, many countries converge to similar levels in the index at the end of the period, yet squaring the variable allows us to model the fact that the index has a maximum, and its marginal effect is smaller in the time.
Additionally, countries taking early measures should be able to content the spread to a larger extent; thus, this is modelled through the initial larger marginal effect on the dependent variables of a squared variable.
In equation 5, the model has as a dependent variable the natural logarithm of the first difference in CFR.In equation 6, the dependent variable is the natural logarithm of new COVID-19 cases per million (first difference of total COVID-19 cases per million) and, in a similar fashion, the natural logarithm of new deaths per million (first difference of total COVID-19 deaths per million).By using weighted variables per million inhabitants, the population size differences across countries are addressed.
All the variables and their summary statistics are shown in Table 2.As seen in the last table, the mean CFR is similar for both datasets (0.0683694 and 0.0633442), which implies that the CFR keeps its trend in the time period analysed.Although this is not the case for the coefficient of variation 9 , which is greater for the panel data (268.80)than for the cross section (69.15), which is explained by the different results in the period for the different countries.
It is also worth noting that the maximum CFR in the panel data can be higher than 1.The reason is that in countries with very explosive growth, the total cases confirmed one week are less than the total deaths occurring the following week, by which time the confirmed cases grew exponentially.
Results
In Table 3, the results for the ordinal probit model are presented.The infrastructure variables and the population's health indicators were not statistically significant; instead, an indicator for health expenditure was used.Since health expenditure is related to infrastructure endowments and some population health indicators are related to expenditure, the variables on infrastructure/population health and expenditure are alternatively used.Full tables with all the considered variables are shown in the additional files (see Additional files 2 and 3).Columns 1 and 3 present the results for the sample with 70 countries, while columns 2 and 4 present those for the OECD members.A negative sign is shown between CFR ranking and the total test per million; therefore, countries running more tests observed a larger probability of having a lower CFR.In contrast, countries with larger expenditures on health observed a larger probability of having a higher CFR.For the OECD subsample, only the first variable was statistically significant.Finally, the stringency index is not statistically significant in any case.
In Table 4, the results from the cross-section model are displayed.In this model, only the explanatory variables that were statistically significant in the previous model were used.
Columns 4 and 5 show that there is a positive correlation between the number of tests and the total cases, which only confirms that the countries running more tests are identifying more cases, yet this is not directly related to the number of deaths.In other words, the total tests per million did not show a significant correlation with the number of fatalities.
Health expenditure is statistically significant for all the models.This is definitively related to a problem of COVID-19 cases and deaths identification and records, rather than to causation.This is, higher health expenditure as a GDP proportion cannot be a causal factor for larger contagions and deaths related to COVID-19, but the positive correlation confirms that countries spending more on health are identifying more cases and deaths.For instance, this variable has a larger coefficient for OECD members, from which the majority are developed countries and spend more on health as a GDP proportion.Namely, for OECD countries, the average was 8.8%, while for non-OECD countries, it was 5.32, while the difference in purchasing power parity dollars is wider; on average, OECD countries spent $2547 USD vs $1088 USD in non-OECD countries.Finally, the results from the panel data analysis are shown in Table 5.Fixed effects were chosen over random effects using the Hausman test as the criterion.In column 9, new tests per thousand inhabitants show a negative correlation with first difference of CFR, which means that countries applying more tests per capita showed smaller differences on CFR across the period; that is, CFR observed a trend of reduction.Consequently, this supports that the widespread application of tests to reduce the fatality rate has been effective.In addition, it is also expected that CFR from countries identifying more positive cases converge to the real CFR, given that massive testing will give the true proportion between contagions and deaths.In the same model, the Stringency index coefficient is not statistically significant, and the trend is negative, as expected, since it should be smaller over time.It is important to note that the panel data are unbalanced, and all countries with available data are included, which are mostly from Europe, Asia, North America and South America.
In columns 10 and 11, the dependent variables showed a high positive correlation with new tests, similar to the previous models.This means that the correlation between testing the new deaths and new cases is sustained over time.Meanwhile, the stringency index showed a negative coefficient; nonetheless, it is only statistically significant in column 11, with new deaths as the dependent variable.Therefore, it is confirmed that stringency measures have helped to reduce the number of COVID-19 deaths, but there is no statistical evidence of being effective in reducing the number of new cases.The trend means that new deaths have a significantly positive trend, meaning that they are still growing.As a robustness check, a longer lag has been included, which is the 15 th lag of new tests per million, to control if there is any change over time.The results are very consistent, the variables kept the same sign, and they remained statistically significant.The value of R 2 diminished for the three models, which can be affected by the smaller number of observations and countries included.
Discussion
Our results support the WHO recommendations to increase testing and track of COVID-19 cases in all countries, given its definitive impact on reducing the CFR.In line with Stojkoski et al. (2020), we found that the countries' expenditure on health as well as their development level is positively related to CFR, cases and deaths, which cannot be interpreted as causation, but it indicates that developing countries do not track enough cases yet.Consequently, we claimed that there is an underidentification of data given the positive correlation between cases and deaths and testing, meaning that testing is still reactive and with little identification of asymptomatic, which is also highlighted by the OECD (2020) and the WHO (2020).
Furthermore, given the under identification of cases, it is still very difficult to identify the country-specific drivers for contagions and CFR.
Lockdown measures, by the Stringency index, were shown to be effective at reducing the number of new deaths, yet it was not for new cases and CFR.Therefore, the results support the propositions to stop severe lockdown measures given the heavy economic losses and burdens for governments, which in turn will not significantly reduce the number of cases and CFR.
One significant limitation of this study is the usage of aggregated national data, rather than regional data, which could have helped to identify regional socioeconomic drivers for the COVID-19 spread and CFR, given that in some countries, the cases seemed to be very concentrated within few cities or regions.
Conclusions
Testing proved to be a significant factor in decreasing CFR; thus, it should be supported as the main strategy to follow for pandemic control in the medium and long terms.The findings suggest that there is a large underidentification of COVID-19 cases, especially for developing countries, which compromises the long-term control of the pandemic.Thus, it is essential to make agreements with all nations to keep increasing the testing for further knowledge of the COVID-19 and its spreading drivers at the national level, allowing tailored public policies.
The data show a particular performance for the cross-section, in which the coefficient of variation is very low, but this trend changes when using panel data, in which the coefficient of variation shows a significant change.In this case, the panel data regression analysis captures the idiosyncratic errors in this time period, with a more precise estimation of the effects of the test per million habitants.
By means of using the Stringency Index, it was found that lockdown measures have been effective in reducing the number of new deaths, while they showed no impact on new cases and CFR reduction.This has public policy implications, since lockdown measures generate great economic losses and are already inducing economic crises all over the world, with greater affectations for developing and less developed countries (Loayza, 2020).
Another general conclusion is that the availability of data for all countries is still very limited, which hinders further analysis of COVID-19 spread and CFR drivers at the national level.This is, the question remained unanswered whether countries with large proportions of the population aged over 65 or over 80, such as Japan or Italy, are more susceptible to greater CFR.Additionally, at the aggregate level, it was not possible to link variables such as obesity and diabetes with a higher CFR or number of deaths.Likewise, there is a significant difference in infrastructure endowments across the sample used; nevertheless, the CFR or the number of deaths appeared to be statistically explained by these factors.
The pandemic is still developing, and there are countries in which the highest peak of contagions has not yet been reached; thus, further analysis for narrowed public policies will be needed.The current recommendation from the WHO, OECD, and other medical bodies to increase testing proved to be the wiser path to follow at the moment.
Figure 1
Figure 1 Proportion of cases by country by 7 th May 2020.Source: own elaboration with data from Ourworldindata.org others.It is important to mention that not all the variables are included at the same time in the models to prevent biases, especially by the correlation among health expenditure, infrastructure and population health indicators; the variables are not included in the model at the same time.The number of tests per million inhabitants is also included, since it has been claimed that the only way to decrease the CFR in the long term is to massify the applied tests(OECD, 2020).Finally, considering that quarantine measures have been considered a determinant factor for fatality rate, the Stringency index byThomas et al. (2020) is also added as an explanatory variable.This index is a wide indicator of all the different social measures taken by governments to reduce the speed of spread, such as schools closing, cancelation of public events, closing borders, etc.It is available daily for several countries.It gives a weight to each measure taken, and the highest level for any given country is 100.Cross-section model specification.These models are estimated by ordinary least squares (OLS) in Stata.The first model uses as a dependent variable the total cases per million inhabitants, and the second model uses the total of deaths per million inhabitants.The aim of this model is to show a robust statistical correlation between the cases and death and the explanatory variables that were statistically significant in the first model.The models are specified as follows: = 0 + + (3) ℎ = 0 + + (4) Panel Fixed Effects models Finally, a group of panel data estimations have been made for evaluating greater robustness for the models specified above.Panel data models can potentially include a larger number of data by combining cross-section and time-series analysis.The cross-section models were used to be able to link the dependent variables varying daily to annual variables by using one static picture at the data.Instead, for the panel analysis, only data varying daily are used, including cases, tests, deaths and the Stringency index.Given the type of data, these models allow the use of dynamic variables.Thus, first differences of the dependent variables are employed.Natural logarithms are used to find elasticities. Figures
Table 1
CFR for the Wold.Source: Own estimation with data from Oueworldindata.org
Table 2
Summary statistics.Source: Own elaboration
Table 3 Estimation results from the ordinal probit model
. Source: Own elaboration
Table 4 Estimation results for cross-sectional models.
Source: own elaboration.
Table 5
Panel data estimation results.Source.Own estimation | 5,460 | 2020-05-28T00:00:00.000 | [
"Economics",
"Medicine",
"Sociology"
] |
Late Gadolinium Enhancement Cardiovascular Magnetic Resonance Assessment of Substrate for Ventricular Tachycardia With Hemodynamic Compromise
Background: The majority of data regarding tissue substrate for post myocardial infarction (MI) VT has been collected during hemodynamically tolerated VT, which may be distinct from the substrate responsible for VT with hemodynamic compromise (VT-HC). This study aimed to characterize tissue at diastolic locations of VT-HC in a porcine model. Methods: Late Gadolinium Enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging was performed in eight pigs with healed antero-septal infarcts. Seven pigs underwent electrophysiology study with venous arterial-extra corporeal membrane oxygenation (VA-ECMO) support. Tissue thickness, scar and heterogeneous tissue (HT) transmurality were calculated at the location of the diastolic electrograms of mapped VT-HC. Results: Diastolic locations had median scar transmurality of 33.1% and a median HT transmurality 7.6%. Diastolic activation was found within areas of non-transmural scar in 80.1% of cases. Tissue activated during the diastolic component of VT circuits was thinner than healthy tissue (median thickness: 5.5 mm vs. 8.2 mm healthy tissue, p < 0.0001) and closer to HT (median distance diastolic tissue: 2.8 mm vs. 11.4 mm healthy tissue, p < 0.0001). Non-scarred regions with diastolic activation were closer to steep gradients in thickness than non-scarred locations with normal EGMs (diastolic locations distance = 1.19 mm vs. 9.67 mm for non-diastolic locations, p < 0.0001). Sites activated late in diastole were closest to steep gradients in tissue thickness. Conclusions: Non-transmural scar, mildly decreased tissue thickness, and steep gradients in tissue thickness represent the structural characteristics of the diastolic component of reentrant circuits in VT-HC in this porcine model and could form the basis for imaging criteria to define ablation targets in future trials.
INTRODUCTION
Viable myocytes within and on the border of dense scar display abnormal electrophysiological characteristics that promote reentrant arrhythmias (1)(2)(3). In most clinical VT ablation procedures, electrophysiological criteria alone are normally used to identify arrhythmogenic tissue (4)(5)(6). Even with a combination of ablation of clinical VT identified through activation and entrainment mapping and pace-mapping as well as substrate-based ablation (7), medium term outcomes remain modest with up to 50% of patients experiencing a recurrence of VT within a year of ablation (8).
When imaged ex-vivo at the near-cellular level, lategadolinium enhanced (LGE) cardiovascular magnetic resonance (CMR) accurately identifies peri-infarct heterogeneous tissue (HT) as regions of intermediate signal intensity (ISI) (9). This tissue represents the critical substrate responsible for post-MI reentrant VT (10). These observations have been extended to invivo imaging where it has been reported that regions of ISI may also display abnormal electrophysiological properties (11,12) described as the peri-infarct borderzone (BZ). This suggests that in-vivo LGE-CMR may offer a low-risk option for generating a 3D assessment of substrate that may have utility in guiding substrate-based VT ablation.
Pre-procedural cross-sectional imaging has been proposed as a non-invasive approach that may facilitate more comprehensive arrhythmogenic substrate identification and ablation (13). Effective substrate identification, either using electrophysiological or structural criteria, may be of particular importance in the context of the growing population of patients presenting with VT with hemodynamic compromise (VT-HC) (14,15), in which only a substrate-guided approach is usually possible (6).
We hypothesized that under idealized experimental conditions the quality of LGE-CMR could be optimized in order provide accurate anatomic information about the tissue substrate for VT-HC. We aimed to characterize the tissue substrate responsible for VT-HC using high-resolution in-vivo 3D LGE-CMR and acquiring high-density activation mapping under hemodynamic support to identify the location of diastolic electrograms during post-MI VT-HC in a chronic porcine infarct model.
MATERIALS AND METHODS
Animal studies complied with French law and were performed at the Institut de Chirurgie Guidée par l'image (IHU), Strasbourg, France. The experimental protocol was approved by the local and national institutional animal care and ethics committee. Eight domestic pigs underwent a 180 minute balloon occlusion of the mid left anterior descending (LAD) artery to create experimental ischemia-reperfusion myocardial infarction (MI) as previously described (16). Seven weeks following MI each pig underwent late-gadolinium enhanced (LGE) cardiovascular magnetic resonance (CMR) imaging. One week later each pig underwent electrophysiology study with prophylactic hemodynamic support during which VT was induced and assessed using high-density activation mapping.
CARDIOVASCULAR MAGNETIC RESONANCE IMAGING DATA ACQUISITION
All imaging was performed on 1.5T scanner (MAGNETOM Aera, Siemens Healthineers, Erlangen, Germany) with an 18channel body matrix coil and a 32-channel spine coil.
3D Late Gadolinium Enhanced Cardiovascular Magnetic Resonance Imaging
Isotropic navigator-gated ECG-triggered 3D IR sequence was acquired in the mid-diastolic phase as identified from cine imaging (fat saturation prepared; bSSFP; coronal orientation; linear k-space reordering; TE/TR/α: 1.58ms/3.6ms/90 • ; gating window = 7 mm; parallel imaging using GRAPPA with acceleration factor 2; resolution 1.2 x 1.2 x 1.2 mm 3 ; FOV: 400 x 257 x 96 mm 3 ; 2 R-R interval ECG triggering) with full ventricular coverage. Subjective and objective parameters of image quality were compared between 2D and 3D LGE CMR imaging (further details in Supplementary Figure 1
Tissue Thickness and Scar Transmurality Assessment
Field lines between the endocardial and epicardial surfaces were derived by solving the Laplace equation as previously described (18). Briefly, the endocardial and epicardial surfaces were tagged. To calculate the distance from endocardial to epicardial surface, the Laplace equation (∇ 2 u = 0) was solved with Dirichlet boundary conditions assigned at the endocardial (u = 0) and epicardial (u = 1) surfaces. Wall thickness was evaluated as the total length of the continuous path from the endocardium to the epicardium when moving orthogonally between adjacent isopotential surfaces. Tissue thickness gradients were assessed using a radial basis function to identify the gradient in tissue thickness (further details in Supplementary Materials). Transmurality of scar and HT was defined as the proportion of nodes along the path between endocardium and epicardium corresponding to each tissue type. The distance of each endocardial node to the nearest neighboring node assigned as heterogeneous tissue (HT) was automatically calculated.
Segmentations were processed to generate a surface mesh containing endocardial and epicardial surfaces, scar and aorta in the Digital Image Fusion (DIF) format for import into the Precision Electroanatomic Mapping System (EAMS, Abbott, St Paul, MN, USA) within which they were registered with the EAMS geometry, as described below.
Electrophysiology Study and Hemodynamic Support
Electrophysiology procedures were conducted 8 weeks after MI under general anesthetic (further details in Supplementary Materials) using the Precision electro-anatomic mapping system (EAMS, Abbott, Chicago, IL).
Prior to the EP study pigs were established on venousarterial extra-corporeal membrane oxygenation (VA-ECMO) via the left femoral artery and vein and hemodynamic support was prophylactically instituted using a Maquet Cardiohelp machine (Maquet Getinge group, Rastatt, Germany, see Supplementary Figure 1). Venous and arterial access was established for the EP study and hemodynamic monitoring/drug administration. Hemodynamic compromise (HC) was defined as a rhythm associated with a mean arterial pressure (MAP) below 50 mmHg or a pulse pressure lower than 20 mmHg. During rhythms not associated with HC, VA-ECMO circuit flow was reduced to 0.5 L/min. During rhythms associated with HC, VA-ECMO flow ranged between 0.5 and 1 L to 4 L depending on hemodynamic status to maintain a MAP of 65-70 mmHg. Further details are provided in the Supplementary Materials.
A sensor-enabled Abbott FlexAbility TM ablation catheter was advanced to the aorta via the right femoral artery and used to acquire aortic root and coronary ostia geometry prior to gaining retrograde left ventricular (LV) access across the aortic valve. A multipolar mapping catheter [HD Grid TM or LiveWire TM duo-deca (Abbott, Chicago, IL)] was advanced through an Agilis sheath via the aorta to acquire LV endocardial geometry.
LV endocardial activation maps were acquired while pacing from the RV apex at 500 ms and then 300 ms.
An attempt was made to induce ventricular tachycardia (VT) using an adapted Wellen's VT stimulation protocol (19) with up to four extra-stimuli from right and then left ventricular sites. In the event that no VT was induced, the non-selective beta agonist isoproterenol was commenced as an infusion at an initial rate of 2 µg/min and up titrated to a maximum rate of infusion of 20 µg/min during which programmed electrical stimulation was repeated.
Activation maps were acquired using the Automap function within Precision TM using the initial deflection seen on any lead of the surface ECG leads as the timing reference with strict settings applied (EAMS, Abbott, St Paul, MN, USA) (Score = 85; speed limit = 10 mm/s; distance 1 mm; enhanced noise rejection: off). Each individual bipolar and unipolar EGM from the mapped VTs and maps during pacing were subsequently reviewed offline and the activation time reassigned when necessary. Following spontaneous or pacetermination, VT induction and mapping was subsequently repeated. Final activation maps were reviewed by at least two experienced electrophysiologists.
The extent of conduction block (identified as regions with tightly spaced isochrones in which > 15 ms separated adjacent activation points; in which activation on either side of tightly spaced isochrones from wave fronts moving in different directions, and in which double potentials identified in proximity to the boundary between adjacent waves of conduction) during RV pacing and during each mapped VT was estimated using the surface distance function within the EAMS.
Co-registration of Imaging and Electrophysiological Data
After the EP study, using the re-map function within the EAMS, a surface mesh generated from the LGE-CMR scan was imported into the EAMS in the digital image fusion (DIF) format. The EAMS LV geometry was registered to the DIF model with an initial landmark-based registration with the aorta, coronary ostia and LV apex as initial fiducial landmarks, followed by surface registration using the proprietary surface registration function within the EAMS. EAMS points manually identified as having healthy EGMs during RVP 500 and those demonstrating diastolic activation during VT were exported from the EAMS. The location of these points from the EAMS mesh were automatically mapped to nodes on the CMR derived mesh within a 1 mm radius, to allow a node-wise assessment of the structural characteristics of tissue demonstrating diastolic activation during VT and healthy tissue could be made.
Episcopic Auto-Fluorescence Cryomicrotome Imaging
Following euthanasia, each heart was processed and then imaged using episcopic auto-fluorescence cryomicrotome imaging as previously described (20) (see Supplementary Figure 2).
Statistical Analysis
Normality of distribution of variables was assessed using the Shapiro-Wilks test. Normally distributed continuous variables are expressed as mean ± SD, otherwise variables are reported as medians with interquartile ranges. Normally distributed data were compared using a two tailed, paired sample t-test. Ordinal and non-parametric continuous data were compared using a Mann-Whitney U test or Kruskal-Wallis H-test as appropriate, with pairwise comparisons performed using Dunn's (1964) procedure with a Bonferroni correction for multiple comparisons where appropriate in which case adjusted p-values are presented. Two-tailed values of p<0.05 were considered significant. Statistical analysis was carried out in SPSS (v24, IBM Corporation, New York).
RESULTS
One post-MI pig died after CMR imaging and prior to EP study. The study protocol was completed in the remaining seven post-MI pigs and two control pigs who did not undergo CMR imaging. During this study all the VT that was induced was associated with hemodynamic compromise as defined above.
Late Gadolinium Enhanced Imaging Under Contrast Steady State
3D LGE-CMR imaging was acquired in eight animals (weight 62.6+/−3.7 kg) in the supine position under general anesthesia and CSS achieved in seven animals. In a single animal failure of the peripheral cannula interrupted the continuous Gd infusion, which prevented CSS during imaging. In the remaining seven animals 3D LGE-CMR imaging was acquired using a total Gd dose of <0.2 mmol/kg. 3D LGE-CMR was acquired in middiastole with a data acquisition duration of 83-104 ms, 23-42 segments, and a total imaging time of 50+/−12 min. Under conditions of CSS mean variation in TI myocardium was 5.2% (±3.1%, range 2.6-9.3%) and mean variation in TI blood was 9.8% (±2.1%, range 6.7-13.5%). Representative imaging examples and demonstration of CSS are shown in Figure 1. All 3D LGE-CMR imaging is available for review online (Supplementary Material).
A comparison of the image quality with clinical standard 2D LGE-CMR imaging was undertaken and is included in the Supplementary Materials.
Induced Ventricular Tachycardia
Twenty episodes of VT-HC demonstrating a macro reentrant pattern of activation were mapped (18 complete maps, 2 incomplete) and the locations of diastolic electrograms (EGMs) assessed to be part of the reentrant circuit were identified and Labeled. A summary of the characteristics of the induced VTs is shown in Table 1. All mapped VTs were dependent on at least one region of conduction block that was not evident during RVP 500 . All VTs with a complete activation map demonstrated at least two distinct regions of conduction block.
There was an increase in the extent of conduction block between RVP 500 and VT (mean increase = 45 mm, 95% CI 27-63 mm, p < 0.001) and between RVP 300 and VT (mean increase 23 mm, 95% CI 5-42 mm, p = 0.016), and a trend toward, an increase in conduction block between RVP 500 and RVP 300 (mean increase = 22 mm, 95% CI−1-45 mm, p = 0.059). Pearson correlation coefficient demonstrated a negative correlation between activation rate and total extent of conduction block (r 2 = 0.288, p = 0.001). These data are illustrated in Figure 1. Entrainment mapping was attempted in a subset of the induced VT-HC but was not successful due to failure to entrain or degeneration of the VT to ventricular fibrillation (VF).
Tissue Structure Assessment With in-vivo CMR at Locations With Diastolic Activation
Representative examples of diastolic EGMs recorded during VT are shown with their corresponding locations on in-vivo CMR in Figure 2. As shown in these examples, diastolic EGMs were visualized in regions with subendocardial HT (location 1), neartransmural scar (location 2) and pure HT (location 3). Tissue thinning is also seen in all locations. A representative example of the position of a diastolic EGM, corresponding in-vivo imaging and episcopic auto-fluorescence cryomicrotome imaging (EACI) data is shown in Figure 3.
Non-enhanced tissue adjacent to regions of enhancement on in-vivo CMR were frequently observed in the septal area and corresponded to regions where muscle fibers traversing the RV cavity, including the moderator band, inserted into the septum. An example of this is shown in Figure 4, in which a corridor of preserved myocardium surrounded by dense scar forms the principal path of the diastolic isthmus of one VT, which in this case is approximately aligned with the direction of the long axis of the left ventricle.
The structural characteristics of locations at which diastolic EGMs occurred during the observed VTs were systematically examined and compared with locations at which normal EGMs were recorded. The normal EGMs were all recorded at positions with 0% scar/HT transmurality. Diastolic locations had a median scar transmurality of 33.1%, [IQR 0-77.3%, mean 37.4(±36.3) %, p < 0.001] and a median HT transmurality 7.6% [IQR 0-30.3%, mean 18.3(±22.9)% p < 0.001]. The majority (80.1%) of diastolic locations were found within areas with non-transmural scar or HT. Tissue activated in the diastolic component of VT circuits was thinner than healthy tissue (median thickness tissue with diastolic activation = 5.5 mm vs. 8.2 mm healthy tissue, p < 0.001) and was closer to HT (median distance from HT diastolically activated tissue 2.8 mm vs. 11.4 mm healthy tissue, p < 0.001). Of those diastolic locations that were in regions without scar or HT, all were within 15 mm of HT and
Non-scarred Regions With Diastolic Activation During Ventricular Tachycardia
Approximately 20% of diastolic points had 0% scar/HT on in-vivo CMR. As noted, all regions activated during diastole during VT-HC were within 15 mm of HT. An example of EGMs recorded from a reentrant VT-HC is shown in Figure 6. In this example, early diastolic activation is identified within nonscarred tissue, before the wavefront continues into a region demonstrating non-transmural scar in which the remainder of the isthmus is located. On the corresponding in-vivo CMR and EACI data the EGMs are located adjacent to a sharp gradient in tissue thickness just outside the thinned scar region. When all locations without scar were considered, locations with diastolic activation were closer to steep gradients in tissue thickness than non-diastolic locations (diastolic locations distance = 1.2 mm (IQR 0-3.6 mm) vs. 9.7 mm (IQR 2.9-18.6 mm) for non-diastolic locations, p < 0.001). When diastolic locations were classified according to whether they demonstrated early, mid or late activation within the diastolic window, defined as being from the end of the QRS complex on the surface ECG, the median distance to regions of steep gradient in tissue thickness was lowest at late-diastolic sites (distance = 1.5 mm), followed by early-diastolic sites (distance = 2.8 mm) and greatest at middiastolic sites (distance = 3.9 mm, p < 0.001 for group differences and p < 0.001 for all pairwise comparisons).
DISCUSSION
In a chronic porcine infarct model high resolution 3D LGE-CMR acquired during contrast steady state can be used to define the structural characteristics of components of post-infarct reentrant VT circuits. Our data demonstrate that the majority of diastolic locations in this model of VT-HC are located in regions with nontransmural (<95%) scar/HT with lower tissue thickness than healthy tissue. In addition, ∼20% of diastolic points during VT-HC are in non-scarred tissue that is adjacent to steep gradients in tissue thickness.
Late-Gadolinium Enhanced Imaging Under Conditions of Contrast Steady State
Establishing CSS was technically feasible and resulted in stable TI myocardium for prolonged periods during image acquisition. For high-resolution 3D imaging sequences used in previous imaging-supported VT ablation studies, TI myocardium drift during acquisition routinely necessitated 30-80 ms being added to the TI myocardium prior to the start of a 3D acquisition, which may be of up to 29 min in duration (21)(22)(23). In the present data, drift in TI myocardium across the course of an extended acquisition duration was always <10% and the mean was 5.2%, corresponding to an average change in TI myocardium of approximately 12 ms. In clinical practice, establishing CSS for 3D image acquisition even during scans of routine duration would minimize the drift in TI myocardium and may be helpful for optimizing image quality.
The non-standard approach to contrast delivery has not been robustly validated in this study. However, it is noted that the strategy of continuous contrast infusion, on which the protocol for the current study was based, has been validated against histological samples during extra-cellular volume mapping for the assessment of fibrosis (24). There is no consensus regarding the optimal method for thresholding LGE-CMR for scar (25), and the optimal strategy may depend on the contrast administration protocol used during imaging. The Full Width-Half Maximum technique represents a strategy with robust histological validation (26), however, clinical studies have suggested that thresholding for dense scar at 60% of the maximum SI best identifies electrophysiologically relevant left ventricular substrate. This consideration led to the current strategy being chosen for this study (17). The current study does address the unresolved issue of the optimal thresholding strategy for LGE-CMR.
The 3D LGE-CMR imaging in this study was acquired with a FA of 90 • which likely resulted in reduced blood-scar contrast due to the higher T2 weighting that resulted (see Supplementary Figure 3). While in this study we did not feel low contrast between blood pool and scar prohibited confident identification of the blood-myocardial interface, the differentiation of endocardial scar from the blood pool could be improved in subsequent studies through using a lower flip angle. We note that this issue is encountered to some degree in all bright-blood LGE sequences and represents a motivation to the development of dark blood sequences to overcome this effect (27).
Transmurality of Scar
The calculated transmurality of scar in locations with diastolic activation reported here is lower than has been reported previously. In previous reports, mean scar transmurality at VT isthmus sites assessed using 2D 1.4 x 1.4 x 8 mm 3 LGE-CMR with scar segmented according to a Full-width at half-maximum (FWHM) threshold was 60 ± 38% (28) or using a similar 2D LGE-CMR imaging 66 ± 22%, which rose to 76 ± 16% at sites of concealed entrainment and to 70 ± 21% at termination sites (29). The use of 2D imaging, bolus contrast administration and different image analysis protocols used in previous experiments is likely to contribute to observed differences in scar transmurality. In addition, there may be mechanistic differences between the VT-HC described here and VTs studied previously, both of which included at least some hemodynamically tolerated VTs. Early reports of hemodynamically tolerated scar-mediated VT often localized the isthmus to within a thin walled LV aneurysm, which would likely be identified on CMR imaging as transmural scar (30,31). In contrast, the re-entrant VT-HC observed in this study displayed a greater dependence on functional rather than fixed conduction block, suggesting that the structural substrate of VT-HC may be distinct from slower and hemodynamically tolerated VTs. In addition, the use of high-resolution imaging and the measures taken to minimize artifact in this report may have contributed to an increased sensitivity for the identification of tissue with preserved viability and reduced the calculated scar transmurality at sites with diastolic activation.
Tissue Thickness
Tissue thickness has been demonstrated to be a sensitive structural marker for arrhythmogenic tissue in the post-MI LV when assessed using coronary computed tomographic angiography (CCTA) (32). Among patients undergoing catheter ablation of drug refractory post-MI scar-related VT, 98% have been reported to demonstrate wall thinning on CCTA (32). As well as demonstrating a strong relationship between low voltage regions and local abnormal ventricular activations (LAVA), 89% of RF termination sites were located within the imaging substrate [defined as regions of wall thinning (tissue thickness < 5 mm) or severe wall thinning (tissue thickness < 2 mm)], with the majority of these located within 10 mm of the margin. The data presented here also indicate that regions of diastolic activation during VT-HC tend to be thinner than healthy tissue. However, examination of the histograms shown in Figure demonstrates that there is significant overlap between the tissue thickness assessed with diastolic activation and healthy tissue, and therefore tissue thickness alone would not adequately differentiate regions of diastolic activation during VT from healthy tissue. In the present study, the shorter VT cycle length and dependence of the VTs on functional rather than fixed anatomical conduction block, as might be expected in an aneurysm demonstrating severe wall thinning, suggests a possible explanation for why diastolic activation during VT was observed in tissue with a wider range of thicknesses than in previous reports which have used CT imaging.
Non-enhancing Tissue Activated During Diastole in VT
Approximately 20% of diastolic locations present were found in locations with no scar/HT identified on in-vivo imaging. This tissue was located in close proximity to HT and adjacent to steep gradients in tissue thickness. The diastolic activation of such tissue has not been previously demonstrated in the reentrant paths of hemodynamically tolerated VT. This tissue may include regions of tissue with microscopic fibrosis that occurs adjacent to scar and that is not identified by in-vivo LGE-CMR imaging but is likely to demonstrate distinct electrophysiological properties that promote its participation in VT-HC. Intrinsic tissue anisotropy at the scar interface, which may be enhanced in diseased ventricular tissue (33), is also likely to affect conduction behavior in this region. In canine models of scar related re-entry, it is established that the arc of functional conduction block responsible for the initiation of reentry localizes to regions of sharp gradients in tissue thickness and previous experiments have demonstrated that conduction velocity during pacing in this model is slower in regions of steep gradients in tissue thickness (34). In addition, the lowest conduction velocity (CV) in the re-entrant circuits of VTs in the same model may be found at exit sites (35). A wave of depolarization slows when transitioning from a small to a larger body of tissue due to the dispersion of the small source transmembrane current to a large number of downstream cells (source-sink mismatch) (36) and due to wavefront curvature that is a consequence of the geometric expansion of tissue encountered by a propagating wavefront. These observations suggest a mechanistic explanation for the diastolic activation of non-scarred tissue adjacent to steep gradients in tissue thickness in the reentrant path of VT-HC defined by functional conduction block, and the greatest proximity of the late diastolic component to these areas.
Unique Anatomical Characteristics of the Isthmus
In this study, tissue with scar/HT transmurality and tissue thickness across a wide range has been identified as activated during the diastolic component of the reentrant VT-HC studied. In a previous study of porcine post-MI VT-HC (10), despite highresolution ex-vivo CMR (0.4 x 0.4 x 0.4 mm 3 ), anatomic features distinguishing HT harboring diastolic activation during VT from non-participating HT were not identified. At a histological level, structural differences exist between regions harboring a critical diastolic isthmus that distinguish this tissue from other HT (37). However, since it was impossible to identify these differences using high resolution ex-vivo CMR, it is extremely unlikely that such arrhythmogenic HT would be distinguishable from nonarrhythmogenic HT with CMR at current in-vivo resolution. The tissue in which the diastolic isthmus of VT-HC is expected to be located has been characterized in this model, however the current data does not indicate that within regions of nontransmural scar/HT, arrhythmogenic and non-arrhythmogenic substrate may be differentiated.
Limitations
Activation mapping alone was used to characterize the VT-HC circuits in this study. Entrainment mapping to confirm participation of the regions with diastolic activation was attempted but not routinely achieved and this represents a limitation of the presented data. The registration of electrophysiology and imaging data represents a major challenge when attempting to establish the structural basis for observed electrophysiological phenomena. Despite meticulous care in the registration between LGE-CMR and EAMS data, it is acknowledged that registration error has not been avoided entirely. Establishing the correct rotation around the long axis of the LV remains a significant challenge even with the use of coronary ostia and other anatomical landmarks as guidance. Change in the shape of the LV between the time of electrophysiology procedures and imaging are expected due to differences in the loading conditions, which represents an additional challenge, as do absolute limitations in the accuracy of the localization of EGM signals. The degree to which registration error affects the accuracy of results is difficult to quantify and is not accurately described by surface distance between EAMS and imaging data. There is no consensus regarding the optimal threshold to apply in order to accurately identify scar on LGE-CMR (25). The challenge of quantitatively assessing scar and fibrosis are compounded by differences in imaging parameters and contrast administration protocols, which will affect degree of hyperenhancement of tissue. The absence of standard histological data from this study due to the destructive nature of the EACI process is a further limitation of the current study and prohibits formal validation of the imaging data. Common to most large animal pre-clinical studies, this study included a relatively small number of animals, in consideration of minimizing the use of animals in experiments and cost. Despite these limitations, the comparison between LGE-CMR and high density electrophysiological mapping data demonstrate that the imaging acquired and processing strategy reliably identified electrophysiologically relevant tissue in this model.
CONCLUSIONS
The structural characteristics of the tissue demonstrating diastolic activation during reentrant VT-HC, including nontransmural scar, mildly decreased tissue thickness and steep gradients in tissue thickness, are likely to promote functional conduction block which is demonstrated to be an important mechanism underlying the VT-HC observed in this model. The late diastolic segment of activation during VT-HC is closest to steep gradients in tissue thickness which may be a contributory factor to the maximal wavefront slowing seen in this region. The characterization of the myocardial substrate for post-MI scar mediated VT-HC could form the basis for imaging criteria to define ablation targets during future trials.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by Animal studies complied with French law and were performed at the Institut de Chirurgie Guidée par l'image (IHU), Strasbourg, France. The experimental protocol was approved by the Local and National Institutional Animal Care and Ethics Committee.
AUTHOR CONTRIBUTIONS
JW: conceived and designed study, analyzed data and prepared first draft and subsequent modifications of the manuscript. RN and SR: designed and implemented CMR protocols, assisted in acquisition of imaging and contributed to analysis of data. SK: contributed to experimental design, assisted with acquisition of electroanatomic mapping data, and contributed to analysis of data. AC: key contribution to analysis of electrophysiologic data. TA: planning and acquisition of electrophysiological data. JC, MW, and JS: electrophysiologist who acquired electroanatomic mapping data. RK: developed computational tools for registration of imaging and electrophysiologic data.
ACKNOWLEDGMENTS
We are grateful to the Abbott UK, France and US teams who generously loaned the Precision and Claris systems for these experiments, provided outstanding technical support and donated all the EP mapping catheters, cables and consumables, Sagar Haval, Lynn Calvert and the Maquet, Getinge Group who generously provided a CardioHelp machine, provided technical support and donated all the ECMO consumables as well as Sophie Pernot and the staff at the Institute de Chirurgie Guidée par l'image (IHU), Strasbourg, France for their support for these experiments and the outstanding animal care they provided. We are grateful to Professor Maria Siebes for her assistance with establishing the protocol for and assistance with the processing of the cryomicrotome imaging. | 6,883.8 | 2021-10-26T00:00:00.000 | [
"Medicine",
"Biology",
"Engineering"
] |
A systematic review of RdRp of SARS-CoV-2 through artificial intelligence and machine learning utilizing structure-based drug design strategy
Since the coronavirus disease has been declared a global pandemic, it had posed a challenge among researchers and raised common awareness and collaborative efforts towards finding the solution. Caused by severe acute respiratory coronavirus syndrome-2 (SARS-CoV-2), coronavirus drug design strategy needs to be optimized. It is understandable that cognizance of the pathobiology of COVID-19 can help scientists in the development and discovery of therapeutically effective antiviral drugs by elucidating the unknown viral pathways and structures. Considering the role of artificial intelligence and machine learning with its advancements in the field of science, it is rational to use these methods which can aid in the discovery of new potent candidates in silico. Our review utilizes similar methodologies and focuses on RNA-dependent RNA polymerase (RdRp), based on its importance as an essential element for virus replication and also a promising target for COVID-19 therapeutics. Artificial neural network technique was used to shortlist articles with the support of PRISMA, from different research platforms including Scopus, PubMed, PubChem, and Web of Science, through a combination of keywords. “English language”, from the year “2000” and “published articles in journals” were selected to carry out this research. We summarized that structural details of the RdRp reviewed in this analysis will have the potential to be taken into consideration when developing therapeutic solutions and if further multidisciplinary efforts are taken in this domain then potential clinical candidates for RdRp of SARS-CoV-2 could be successfully delivered for experimental validations.
respiratory syndrome) in 2002 and 2013 respectively, causing pulmonary dysfunction and gastrointestinal problems [12].While, SARS-CoV-2 caused the third outbreak that led to pandemic; creating more damage than ever before, the symptoms that varied from cold to respiratory failure and death [13].This disease named COVID-19 has infected more than 20 million people worldwide, with death toll of about 3 million at the time of the review [14].Presently, the virus is still spreading all over the world, and world's population is hoping for particular anti-SARS-CoV-2 medicines along with vaccines [15,16].Clinical scientists are considering an increasing number of drug prospects, and many trials have begun worldwide.So far, only remdesivir, a RdRp based medicine has been approved by FDA (Food and Drug Administration, United States of America) [17][18][19].
The structure of the coronaviruses reveals that it has envelopes with a single stranded RNA genome [20].Further into this, the structure of RNA plays a crucial role in the life cycle of COVID-19.Amid other structural proteins, the intense modifications in viral spike have caused the high affinity of virus to the host receptor far more than SARS-CoV [21].
Coronaviruses have a complex of RdRp for the replication and transcription of their genes [22,23].In RNA infections, genomic replication process is controlled by RNA dependant RNA polymerase, which virus inscribes [24].After the cell is attacked by the virus, the genomic sequence of viral RNA is used as a template, and the protein synthesis for the translation of RdRp is done by host cell.As a result, RdRp completes the transcriptional process of various structural protein related mRNAs as well as viral genomic RNA.RdRp consequently can synthesize millions of nucleotides and hence facilitates virus to perform biological activities in the host cell [10].This complex targets nucleoside analogue inhibitors.RdRp of coronavirus SARS-CoV-2 consists of catalytic subunit nsp12 with axillary subunits nsp7 and nsp8 respectively [22,25,26].The structure of this RdRp is quite similar to the structure of RdRp of SARS-CoV that spread in 2002 [12,27].In the single subunit of polymerase, the RdRp domain is similar to a right hand, comprising palm, thumb and fingers, where thumb represents subunits nsp7 and nsp8, and fingers as additional units of nsp8 [27][28][29].Coronaviridae encode replicationtranscription complex RTC into two open reading frames ORF1a and ORF1b, which are translated by genomic RNA.ORF1a encodes pp1a (polyprotein 1a) and both ORFs jointly encode pp1ab.The polyproteins 1a and 1ab are proteolytically processed into various nonstructural proteins called Nsps with the contribution of Mpro, an ORF1a encoded main protease, also named as 3CLpro.These Nsps joined into large RTCs that catalyzes replication and transcription of the genomic RNA.RTC core includes RdRp, which facilitates the whole process [30].
The RdRp protein varies from 240 kD to 450 kD and considered as conserved protein that could be a potential target for the development and discovery of antiviral drugs [31][32][33][34].It can be safely concluded that targeting the RdRp site can lead to a therapeutically effective approach by restricting its region and inhibiting the viral replication [35].
Recent approaches using statistical tools such as molecular simulation and bioinformatics, by applying artificial neural network and machine learning techniques resulted in accelerated and stimulated production of active moieties and drugs that can be further investigated by clinical trials [13].For example, structures, by implying machine learning using Q-UEL (XML-like Web Effort) can anticipate huge amount of real data to access pertinent and applicable literature.This technology can provide bioinformatics resources publically, available via internet, and a vast array of amino acids and proteins of multiple coronaviruses including SARS-CoV-2's preserved sequences can be accessed in no time [2].Q-UEL is a platform that aids in data mining in pharmaceutical and biomedical fields through artificial neural network by giving access to knowledge-based tags including probabilistic statements and general wisdom from encyclopedia, thesaurus and internet surfing [36].Applying these methods may lead to the manufacturing of new synthetic anti-SARS-CoV-2 drugs and vaccines.
Bioinformatics with molecular simulation is playing indubitable role in finding treatment, diagnostic and preventive measures of COVID-19.The long haul journey in screening the active moieties have made simulation and discovery of new genetic sequencing and primers easy, economical and more precise with the help of artificial neural network.
The role of bioinformatics in combination with molecular simulation in the hunt for diagnostic, treatment and preventive methods of COVID-19 is unquestionable.Processes such as the screening of bioactive molecules, the simulation of biomacromolecule models, the discovery of primers and genetic sequencing compounds can be quicker, more precise and cheaper with the assistance of software based on artificial neural network.In view of the above, the present research aims to establish a systematic overview of structural data on all the efficient anti-SARS-CoV-2 agents based on RdRp of emerging RNA virus of SARS-CoV-2 using the approach of artificial neural networks and machine learning.
Data collection
Data is collected through an extensive literature retrieval using artificial neural network technique from different research platforms including Scopus, PubMed, PubChem and Web of Science, through a combination of keywords with the support of PRISMA."English language", from year "2000" and "published articles in journals" were selected to carry out this research.The framework methodology adopted in this research is shown in Figure 1.Articles were screened using keywords "SARS-CoV-2", "RNA dependant RNA polymerase", "Artificial intelligence", "structure based drug design" and "COVID-19" till July 2021.To make an arrangement between the articles based on these keywords ANN model was used which is explained below in detail (Figure 1).
Endnote was used to extract database files and abstract screening was done using Excel software.Selected full text articles were screened for inclusion in the systematic review.The map was constructed using VOSviewer software entailing output information from ANN (Figure 2).
After preliminary screening total of 105 articles were found, removing duplicates made them to 85. Abstract and title screening led to a total of 62 articles remaining.Full text screening excluded 24 articles and a total of 38 articles were included in this review.The PRISMA flow diagram is shown in Figure 3.
Result and discussion
The ANN data application has shown that while the number of RNA viruses that have triggered outbreaks in recent years is comparatively high.The emphasis in this study is on those viruses that we agree upon to have the highest therapeutic significance, social influence and scientific importance.The new coronavirus pandemic is disrupting global health services, and the research community is making an ongoing multidisciplinary attempt to resolve the immediate need for both care and prevention of serious acute coronavirus 2 (SARS-CoV-2) infections.SARS-CoV-2 case studies presented in this research may encourage the development of anti-RNA virus drugs.A detailed RNA based overview of coronaviruses is discussed below in detail.
Sequence of RNA based targeted coronaviruses
Data retrieved through machine learning emphasized that since RNA viruses are highly polymorphic so continuous surveillance is needed for new variants to be identified and distributed to new geographical areas or populations of patients.Therefore, it is also a challenge to update the awareness of RNA viruses such as SARS-CoV, SARS-CoV-2, and MERS-CoV.Furthermore, veterinary virology is an important part of this activity, in order to promote the ability to characterize zoonotic viruses, breaching the species boundary and entering the human population.When a novel human pathogen is suspected or confirmed, it is a significant step towards the control of the disease and outbreak to acquire partial or full genome information.
Data collected and analyzed utilizing ANN confirmed more than 14,000 nucleotide sequences of SARS-CoV-2 in NCBI GenBank, mostly collected from cities in USA and China.The full sequence of the gene of SARS-CoV-2 virus was first published by scientists from Adolf Lutz Institute and University of Sao Paulo in collaboration with Oxford University in February 2020 in GenBank accession number MT126808 [37].One of the first cases of respiratory infection in China, caused by a new virus led to the series of experiments that established the identity of this new virus by analyzing its RNA sequences collected from the fluid samples of lungs, that connected it from Coronaviridae RNA family.The genetic sequencing technique is helpful in understanding the genome of a population, and can be declared as a starting point to understand the role and structure of its genes.ANN acts as a house of genome of microorganisms such as SARS-CoV-2, it not only gathers data from a specific region, but also collects the sequencing from patients from all over the world, and it also had made it possible to track the profile of infectious disease, its spread in different nations, and eventually helping in finding disease-fighting techniques and mapping the mutation rate [13].
Another method for the determination of the key structural models of SARS-CoV-2's RNA translated polyproteins is by acquisition of crystallographic particulars using x-ray diffraction technique [38].These structural models were then assessed by already available structural model of SARS-CoV in Protein Data Bank (PDB), which calculated 96% sequential identification.Furthermore, the structural data of SARS-CoV-2's RNA protein, obtained by cryogenic electron microscopy, helped in the development of models by using SARS-CoV glycoproteins as a reference [39].The structural details obtained by these methods can act as a guide to understand the actual configuration under the predefined experimental conditions.The study of the viral RNA structure with the assistance of computational methods and machine leaning, the main sockets of interaction between the enzyme and the inhibitor were possible to visualize at molecular level.This could be related to the direct improvements and amendments in the structures of RNA, and inhibitory function of the drugs can be easily modified [38].
The urgency of successful anti-COVID-19 therapy led to the virtual screening methods of selecting bioactive moieties by using artificial intelligence.This ended up improving this road of discovery, by promoting the initial selection of molecules with similar steric and electronic properties, which is also known as ligand-based drug planning.On the basis of these, it has been made possible to pick potential inhibitory targets, agonists and antagonists from virtual databases of compounds against a single strand of RNA this will lead to impact compounds with targeted structural changes [39].
In the lifecycle of coronaviruses, conserved structured elements play important functional roles [40].These structural elements add complexity to the regulatory functions that encode viral RNA by directly interacting with helicases and RNAbinding proteins.If the functions of these structured elements are disrupted, this can lead to unexplored strategy in which decreased viral loads will have minimum to no effect on normal biological cells [41].Despite the existence of this idea around 6 years ago, the advances in computational modeling and artificial intelligence has aided it in overcoming the critical barriers by a quantum leap in high throughput RNA structure analysis [42].
Many functionally validated viral families, including coronaviruses, have identified highly conserved RNA structured elements such as 5'UTRs and 3UTRs that highly impact the viral replication [40,[43][44][45].A total of 106 regions in these RNA elements have already been reported that can be a potential target for novel antiviral drugs [46].
RdRp and SARS-CoV-2 in light of ANN
Digital libraries of compounds such as ZINC provide a huge variety of structures and sources, which helps in finding new lead compounds via high throughput screening, which ultimately increases the quality of the work and eases search process.The databases can be explored to classify potential anti-RdRp molecules.Researchers selected more than 300 compound from these digital libraries, which were then docked against proteases (PDB ID 6LU7), and their interaction energy of target-ligand bond was evaluated [47].
The evolution in virtual screening has led to the addition of three-dimensional bimacromolecular structural models in the databases which are now publically available to analyze data using molecular simulation tools.PDB now has more than 160,000 three-dimensional structures in its library (Table ).SARS-CoV-2, discovered by a metagenomic method, sequenced and identified as a new member of the Coronaviridae family based on sequence homology, is a notable example of this kind [48].Undoubtedly, the discovery of new mutations and varieties of viruses has been encouraged by the rapid progress of whole-genome sequencing by introducing a variety of next-generation sequencing methods, i.e. machine learning and artificial intelligence, allowing full-length sequencing at a fraction of the time for the discovery [49][50][51][52].The genetic study of the RNA viruses considered in this review is computed in a formal description of the RdRp sequence of the various species.RdRp sequences can then be split into three clusters, which are especially linked to the family and genus to which they belong (Figure 4).SARS-CoV-2's RdRp shares the highest amino acid identity (96 percent) with SARS-CoV's RdRp among coronaviruses, while MERS-CoV's RdRp homology is only 70 percent.While the main method for monitoring outbreaks and virus evolution is whole-genome sequencing, genome processing is just the first step in developing and evaluating antiviral drugs.Indeed, to determine the drug capacity of a possible target protein, structural specifics are important and may be provided by structural biology attempts or, to a less precise but more immediate extent, by homology modeling [53,54].
Using a machine learning method, we analyzed that ClustalX version 2.1 with the pair wise alignment algorithm, the sequence alignment retained the default amino acid color scheme, i.e. blue = hydrophobic; red = positive charge; magenta = negative charge; green = polar; pink = cysteine; orange = glycine; yellow = proline; cyan = aromatic; white unconserved or gaps [34].In this study, the black bar below the alignment corresponds to the degree of conservation: at any position, the higher the bar, the higher and the conservation.We conclude that the amino acids that are retained in the aligned sequences are indicated.In particular, this contributed to a deeper understanding of the structural characteristics of the emerging coronavirus RdRp, which is discussed below.
Structure of coronaviruses RdRp in silico
The ultimate shape of SARS-CoV-2 RdRp resembles a closed right hand with the sub domains of the palm, thumb, and finger, much like all other polymerases.The fingers are further classified into index finger, middle finger, ring finger and pinky finger [55].The palm area conserves catalytic site while fingers and thumb are subdomains that makes two tunnels and meet at the catalytic site.It has a nidovirus-specific domain on the N-terminal tail with nucleotidyltransferase activity [53].Unfortunately, only a few SARS-CoV RdRp crystallographic structures have been solved to date, while no MERS-CoV RdRp structures are available.In comparison, Gao et al. (2020) recently provided detailed details on the apo RdRp structure as well as the elucidation of the conformational modifications of the protein by binding to RNA and a nucleoside analogue inhibitor.Within less than a year of the SARS-CoV-2 outbreak, the Cryo-EM technique solved nine threedimensional structures of its RdRp (Figure 5) [44].
Drugs against RdRp-ANN approach
In the ongoing pandemic, scientists from all over the world proposed the use of already marketed antiviral drugs against COVID-19.The efficacy of these drugs turned out to be limited.Other reports showed that the use of preexisting antiviral drugs, suggested by health service providers, is a cost effective and time saving initiative as the de novo drug discovery takes years while the mortality rate is increasing day by day [35].The following few drugs have tested against COVID-19 in recent times including remdesivir, ribavirin, corticosteroids, lopinavir-ritonavir and interferons [21,56].
Using ANN and molecular modeling, the anti SARS-CoV-2 agent PubChem CID 444745 compound (Figure 6), has demonstrated the enzyme inhibition potential with protease forming the most stable complex.Digitoxin is a cardiac glycoside; however, currently it has been delineated as antiviral agent that is active against coronavirus [57].Structure of digitoxin explains it has a steroidal nucleus which shows involvement of glycosidic linkage through oxygen atom.Glycoside's aglycone type not only reveals its physicochemical characteristics but also shows various therapeutic uses [58].Numerous studies have shown digitoxin's anti-COVID-19 potential using structure-based screening of different databases [59][60][61].
The ZINC databases of drugs were virtually examined for their interaction with various possible molecular targets of the virus, emphasizing the promise of zorubicin against SARS-CoV-2's glycoprotein, ribavirin against PLpro; lymecycline against 3CLpro; and valganciclovir against RNA-dependent RNA polymerase (Figure 6) [62].Studies have revealed that spike proteins are new hotspot for viral mutations, and they help in either transmission or in enhanced binding by altering the receptor-binding domain RBD.Zorubicin have virtually shown promising anti-COVID-19 results by specifically binding to RBD and inhibiting the binding of S-protein [63].Likewise, the docking model of PLpro with ribavirin showed the inhibition of PLpro by forming hydrogen bond between Gln270, Gly164, Tyr274 and Asp303 [64].Only the possible inhibitor of drugs or substances in clinical trials with anti-HCV activity has been tested in silico against SARS-CoV-2 RdRp is ribavirin.On the other hand, lymecycline has shown remarkable affinity to 3CLpro.3CLpro is also known as Nsp5, which produces mature enzymes to cleave downstream Nsps on 11 sites which releases Nsp14-Nsp16.3CLpro conciliates maturation of Nsps, which is vital for virus life cycle, hence lymecycline is an attractive target in the development of SARS-CoV-2 drugs [62].Valganciclovir has also exhibited good binding affinity with PLpro as well as RdRp [62,65].While docking studied on pemirolast revealed that they have tremendous binding affinity with S-protein as well as ACE2 inhibitors [66].
RdRp and remdesivir
Remdesivir has been using against Ebola virus and it has clinically been proven that it targets RdRp by inhibiting the viral RNA synthesis [74,76,77].In a recent study, it has been concluded that remdesivir can bind to RNA-binding channels of SARS-CoV-2 [62].Recent research on structural studies of RdRp has revealed promising use of remdesivir by inhibiting the nucleotide analogue and provided a structural template to further investigate potential antiviral drugs against COVID-19 [18,78].
Anti-RdRp drugs using molecular docking
Ahmad et al. [79] screened all the marketed drugs and the drugs that are in clinical trial and reported that all the FDA approved drugs have stable interaction and lowest binding energies with the key residues.This concluded that these drugs have high potential of inhibiting the activity of RdRp.The most promising out of these drugs that interacted with single core RdRp are Argiprestocin, ornipressin, carbetocin, otosiban, demoxytocin, lypressin, examorelin and polymyxin B1.While the compounds that interacted with the complex of RdRp are cistinexine, pegamotecan, nacartocin, cisatracurium, ebiratide, diagastrin and benzquercin.Among these, the top candidates that showed strong structural binding efficacy with both single core and complex RdRp are lypressin, polymyxin B1 and ornipressin.
Another report predicted that small molecule inhibitors can impressively target coronavirus's RdRp by using molecular docking technique.The study reported that the guanosine triphosphate GTP site has the close proximity to RNA primer and RNA template as well as in the active region between palm and thumb.As the RdRp functions both primer independently and dependently, thus, depicts the dual functionality of GTP sites [80].The initiation of primer independent RNA replication process, this site incorporated GTP as second nucleotide [81].While in case of primer dependant initiation process, prior to the summation to the RNA chain, this site can be processed as a potential nucleotide binding site for an upcoming nucleotide.These two modes of primer assisted and unassisted RNA replication chains can be impeded by use of small molecules, hence, making RdRp dysfunctional [80].
Conclusion
Based on the evidence provided in this study, it is highly plausible that if multidisciplinary and concentrated efforts are committed to this task, drugs operating on the RdRp of coronaviruses could be successfully produced.The in silico research for small molecules is also intended to boost the resilience of health services and foreign organizations for potential future pandemics.In this respect, the lack of structural information on catalytic-competent or ligand-bound RdRps distinguishes SARS-CoV-2 and could hinder the effective use of structure-based drug design in both repositioning and traditional approaches.Overall, in this review, we summarized the recent findings in targeting the RdRp of RNA viruses through machine learning and found the existing studies fail to resolve a unified approach.In this regard, our study successfully produced an aggregated response in the past to aid the process further.We found that there is no general agreement among the researchers in this domain.Together with structural hints, the aggregated literature discussed here should motivate the design of additional small molecules and set the foundation for advanced structure-based approaches.However, to improve the strategies and methodologies for novel research, there is still a need of sophisticated AI tools and data visualization for not only decision making but also for future global outbreaks.
Figure 1 .
Figure 1.Framework of the research.
Figure 2 .
Figure 2. Keywords found in retrieved data related to RdRp sequences of coronaviruses RNA.
Figure 3 .
Figure 3. PRISMA flow diagram depicting criteria used for analysis of RdRp of SARS-COV-2 using structure-based drug design and AI.
Figure 4 .
Figure 4. Schematic representation of the RdRp identity percentages shared by RNA viruses of the same and different clusters.
Table .
AI tools used to access different hence sequences for SARS-CoV-2. | 5,128.2 | 2021-12-27T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
An Organic Solvent-free Approach towards PDI/Carbon Cloth Composites as Flexible Lithium Ion Battery Cathodes
An acidic solution based method towards flexible lithium ion battery (LIB) cathodes is developed in this work with perylene diimide (PDI) as the electroactive component and carbon cloth (CC) as the current collector. In this approach, PDI is firstly dispersed in concentrated sulfuric acid (H2SO4) and then deposited on CC substrate after the dilution of H2S04, which provides an organic solvent-free strategy to construct integrated LIB cathodes. The acdic solution based fabrication process also allows the facile adjusting of loading amounts of PDI in the cathodes, which can effectively influence the battery performances of the PDI/CC cathodes. As the result, the acidic solution processed PDI/CC cathode can deliver a high specific capacity of ~ 36mAh.g-1 at the current density of 50 m A.g-1 in both half cell with lithium foil as anode and full cell with pre-lithiated CC as anode. In both types of the batteries, the PDI/CC cathodes show good cycling stabilities by retaining ~ 84% of the initial capacities after 300 charge-discharge cycles at 500 mA.g-1. Additionally, the excellent mechanical stability of the PDI/CC cathode enables the LIBs in pouch cell to maintain the electrochemical performances under various bending states, demonstrating their potentials for flexible LIBs.
INTRODUCTION
Lithium ion batteries (LIBs) have been deeply involved in modern life by powering our cell-phones, laptops, cars, drones, and even toothbrushes. [1−3] However, the foreseeable threat caused by people's increasing reliance on LIBs is not just the depletion of finite mineral resources for LIB electrodes such as lithium and cobalt, which could be relieved by developing the alternative electrodes [4−6] or batteries [7−11] with naturally abundant materials. In contrast, there is still no clear answer on the disposal of LIBs when they wear out. [12,13] Not only do the waste LIBs have a high risk of fire or explosion if damaged, but the extraction of the inorganic ingredients in the batteries requires harsh conditions, which would lead to hazardous environmental impacts such as water and air pollutions. [14−16] In commercial LIBs, ~ 90% of the material cost and ~ 25% of the battery weight are contributed by their cathodes. Replacing the inorganic species in present LIB cathodes with electroactive organic compounds is an appealing solution to-wards the aforementioned problems. [17,18] The major elements of organic molecules are carbon, hydrogen, oxygen, and nitrogen, which are neither scarce nor expensive. Compared with their inorganic counterparts, the decommissioning and recycling of organic materials are more simple and efficient. [19−21] Additionally, organic LIB materials also have intriguing features including molecular-level tuneable electrochemical properties and high mechanical flexibility, rendering the design and fabrication of flexible or stretchable LIBs. [22,23] However, the processing of the organic materials is often associated with the usage of large amounts of volatile solvent, which may become a new pollution source in the industrial-scale production. [22,24−26] Considering the environmental impacts of both inorganic and organic materials in LIBs, the fabrication strategy of organic cathodes without organic solvents would hold a high potential for practical applications.
Herein, we report an acidic solution based processing method for fabricating the integrated organic LIB cathodes with 3,4,9,10-perylenetetracarboxylic diimide (perylene diimide or PDI) as the electroactive component, carbon cloth (CC) as the current collector, and concentrated sulfuric acid (H 2 SO 4 , 98%) as the solvent. All the ingredients in this method are commercially available and metal free, and the whole fabrication process does not require any organic solvent. The concentrated H 2 SO 4 allows the effective dispersion of PDI and the subsequent dilution of the acidic suspension renders the deposition of PDI with adjustable ratios on the CC substrates. The resultant PDI/CC cathodes manifest excellent performances in half cell with lithium foil anode and full cell with prelithiated carbon cloth anode including a high specific capacity of ~ 136 mAh·g −1 at the current density of 50 mA·g −1 and a high cycling stability with ~ 84% retention rate of the initial capacity after 300 charge-discharge cycles at 500 mA·g −1 . More importantly, the outstanding lithium storage behavior of PDI/CC cathode can also be reserved in pouch cells under different bending states, confirming the advantages of the acidic solution processed organic cathodes in flexible LIBs. EXPERIMENTAL Materials 3,4,9,10-Perylenetetracarboxylic diimide (perylene diimide or PDI, 95%) was obtained from Shanghai J&K Chemical Technology Co., Ltd. Carbon black (Super P) was provided by Shanghai Xiaoyuan Energy Technology Co., Ltd. Concentrated sulfuric acid (H 2 SO 4 , 98%) and N-methyl-2-pyrrolidone (NMP) were bought from Shanghai Tansoole Scientific Co., Ltd. Carbon cloth (CC, W0S1009) was purchased from Suzhou Yilongshen Energy Technology Co., Ltd. All the chemicals were of analytical grades and used as received without further purification.
Preparation of the PDI/CC Cathodes
Carbon cloth was washed with ethanol and acetone and then dried under vacuum at 100 °C for 12 h. On the other hand, the mixture of PDI (90 wt%) and carbon black (10 wt%) was dispersed in concentrated H 2 SO 4 with magnetic stirring to form the homogeneous suspensions. To optimize the loading amount of PDI in the PDI/CC cathodes, three suspensions with different concentrations of the solid contents (PDI + carbon black = 20, 30, and 40 mg·mL −1 ) were prepared. Subsequently, the CC substrates were dipped in the suspensions for 5 min and then immersed in deionized water for several times to remove the residue H 2 SO 4 and allow the deposition of PDI. Next, the samples were vacuum dried at 100 °C for 12 h to generate the integrated PDI/CC cathodes. According to the concentrations of the suspensions, the cathodes were named as PDI/CC-1 (20 mg·mL −1 ), PDI/CC-2 (30 mg·mL −1 ), and PDI/CC-3 (40 mg·mL −1 ), respectively.
In controlled experiments, an N-methyl-2-pyrrolidone (NMP) suspension of PDI (90 wt%) and carbon black (10 wt%) with a total solid content of 20 mg·mL −1 was prepared by ultra-sonication for 30 min. The CCs were immersed in the suspension for 5 min and then washed with deionized water for several times. After dried under vacuum at 100 °C for 12 h, the sample noted as PDI/CC-N was obtained.
Preparation of PDI-A and PDI-N
PDI was dissolved in concentrated H 2 SO 4 to form a homogeneous suspension with the concentration of 20 mg·mL −1 . After stirred for 30 min, the PDI suspension was poured into ice water (100 mL), which caused the precipitation of PDI from the diluted H 2 SO 4 . Subsequently, the precipitate was filtered, washed with deionized water for several times, and vacuum dried at 100 °C for 12 h to produce PDI-A.
Similarly, PDI was dispersed in NMP with the concentration of 20 mg·mL −1 by ultra-sonication for 30 min. After that, PDI was filtered from the dispersion and vacuum dried at 100 °C for 12 h to produce PDI-N.
Structure Determinations
Attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectra of the PDI/CC cathodes were measured on a NEXUS 670 spectrometer. The Fourier transform infrared spectra (FTIR) of the powder-like samples were recorded with a Nicolet 6700 spectrometer by pressing into pallets with KBr. The crystallization degree of the samples was investigated by powder X-Ray diffraction (XRD, Rigaku D/Max 2500) with Cu Kα irradiation (λ = 1.54 Å) at a scanning rate of 5 (°)·min −1 . The morphologies of the samples were characterized by scanning electron microscopy (SEM, JEOL JSM-7800F Prime, 5 kV). The Raman spectroscopy was performed on a Dispersive Raman Microscope (Senterra R200-L) with an Ar-ion laser excitation at 785 nm with a power of about 100 mW.
Electrochemical Measurements
The electrochemical performances of the PDI/CC cathodes in half cells were measured in 2016 coin-type cells with PDI/CC as the cathode and lithium foil as the anode. The electrolyte was LiPF 6 (1 mol·L −1 ) in a mixture of ethylene carbonate (EC), dimethyl carbonate (DMC), and ethyl methyl carbonate (EMC) (1:1:1, V:V:V), and the separator was polypropylene (Celgard 2400). The batteries were assembled in an argon-filled glove box with the moisture and oxygen concentrations below 1 ppm. The cyclic voltammetry (CV) profiles of the samples were measured on a CHI760e electrochemical workstation at a scan rate of 0.5 mV·s −1 . The galvanostatic charge-discharge (GCD) tests were conducted on Land CT2001A testing system in the potential range of 1.5 V to 3.5 V. The specific capacities were calculated with the mass of PDI in the cathode. Electrochemical impedance spectroscopy (EIS) was performed on the CHI760e electrochemical workstation with an amplitude of 5 mV over the frequency range from 0.01 Hz to 100000 Hz.
The electrochemical performances of the full cell were measured in both 2032 type coin cells and aluminum laminated film wrapped pouch cells with PDI/CC-1 as the cathode, pre-lithiated carbon cloth (CC) as the anode, LiPF 6 (1 mol·L −1 ) in EC:DMC:EMC (1:1:1, V:V:V) as the electrolyte, and polypropylene (Celgard 2400) as the separator. The pre-lithiation of the CC anodes was performed through an internal short approach. Typically, electrolyte was dropped on the CC anode, and then a lithium foil was put on it. The CC and lithium foil were pressed by glass plates for 120 min to complete the prelithiation process. The sizes of the electrodes in pouch cells were 2 cm × 3 cm. And the pouch cells were sealed by a vacuum heat sealing machine. The pre-lithiation process and the assembling of the batteries were all performed in an argonfilled glove box with the moisture and oxygen concentrations below 1 ppm. The galvanostatic charge-discharge (GCD) tests of the full cell were carried out with a Land CT2001A testing system in the potential range of 1.5 V to 3.0 V. The cell capacity was calculated based on the mass of PDI in the cathode. The CV profiles of the full cell were measured by a CHI760e electrochemical workstation at a scan rate of 0.5 mV·s −1 .
RESULTS AND DISCUSSION
The fabrication process for the PDI/CC cathodes is illustrated in Fig. 1. Firstly, the mixture of PDI and carbon black with a mass ratio of 9:1 was dispersed in concentrated H 2 SO 4 . In this step, the protonation of PDI renders the good dispersibility of positively charged PDI in H 2 SO 4 , [27−30] thus leading to the homogeneous suspensions. Herein, PDI is selected as the electroactive component due to its high electrochemical activity and good stability. [19] Many researchers have managed to develop PDI based cathode materials for LIBs. [29−33] For instance, Krishnamoorthy et al. treated carboxylic acid containing PDI with hydrazine. The reduced PDI manifested a high capacity of 100 mAh·g −1 at the charging rate of 20 C. [31] Seferos' group synthesized a three-dimensional framework of triptycene and PDI. As the organic cathode of LIBs, the PDI-triptycene framework could deliver a capacity of ~ 76 mAh·g −1 at 0.05 C. [32] Recently, Ramanujam and co-worker examined the electrochemical performance of glycinyl substituted PDI as LIB cathode, which retained 70% of the theoretical capacity (110 mAh·g −1 ) after 2000 charge-discharge cycles. [33] On the other hand, the addition of carbon black is to reduce the resistance within the resulting PDI/CC cathodes. Subsequently, a piece of CC was dipped in the suspensions for 5 min to allow the sufficient adsorption of PDI and carbon black on the surface of CC via the aromatic and hydrophobic interactions between them. After that, the CC was transferred to deionized water to remove the residue H 2 SO 4 . With the dilution and departure of the acid, deprotonated PDI would form a stable layer on the CC substrate, which therefore generated the PDI/CC cathodes. In the whole fabrication process, concentrated H 2 SO 4 only acted as the solvent and no reaction happened except its dilution. Without high temperature, high pressure or volatile solvents, our method provides a low cost and environmentally friendly strategy to construct organic LIB cathodes. To find the optimized loading amount of PDI in the cathodes, three suspensions with different concentrations of the solid contents (PDI + carbon black = 20, 30 and 40 mg·mL −1 ) were prepared in this work. Accordingly, the resulting cathodes were noted as PDI/CC-1 (from the suspension with the concentration of 20 mg·mL −1 ), PDI/CC-2 (30 mg·mL −1 ), and PDI/CC-3 (40 mg·mL −1 ). In controlled experiments, a reference sample named as PDI/CC-N was fabricated by soaking the CC substrate in the NMP dispersion of PDI and carbon black with the concentration of 20 mg·mL −1 (PDI:carbon black = 9:1). Additionally, to elucidate the influence of solvents on the morphology of PDI, two PDI samples were further prepared by dispersing PDI in concentrated H 2 SO 4 and NMP, which were then named as PDI-A and PDI-N, respectively (the electronic supplementary information, ESI).
The amount of PDI in the resulting PDI/CC cathodes can be easily determined by checking the weight variations of the CCs after the acid assisted fabrication process. As summarized in Table S1 (in ESI), the loading ratios of PDI in PDI/CC-1, PDI/CC-2, and PDI/CC-3 show an obvious upward trend with the increased concentrations of H 2 SO 4 suspensions. In contrast, the amount of PDI in PDI/CC-N is almost 2.5 times higher than that of PDI/CC-1 and close to the value in PDI/CC-2. The PDI concentrations and solvents of the suspensions do not only influence the loading ratios of PDI in the PDI/CC cathodes. The surface morphology of these cathodes also has obvious relevance with the processing conditions, which can be evidenced by their scanning electron microscopy (SEM) images. As shown in Fig. S1 (in ESI), the pristine CC substrates are composed by bundles of carbon fibers with the diameter of ~10 μm and each fiber has a relatively smooth surface with a few thread-like shallow grooves along the axial direction. Differently, a layer of PDI can be found in all of the PDI/CC cathodes ( Fig. 2 and Fig. S2 in ESI). In the case of PDI/CC-1, the PDI layer is homogenously deposited on the surface of CC substrates (Figs. 2a and 2b). Some particles with the sizes of 100 nm are decorated within the PDI film, which should be the carbon black. Not like the typical rod-like pristine PDI (Fig. S3a in ESI), the PDI particles in PDI/CC-1 are fused together with vague boundaries, which are similar to those in PDI-A ( Fig. S3b in ESI). This kind of morphology can be attributed to the dissolution of PDI in H 2 SO 4 , [30] which is not observable in the samples treated with NMP. As displayed by Fig. S2 (in ESI), PDI on the surface of PDI/CC-N forms a rough film and still reserves the rod-like structure as the pristine PDI. A similar situation can also be found in the SEM image of PDI-N ( Fig. S3c in ESI), implying the different roles of H 2 SO 4 and NMP in the fabrication processes. Except for the solvent, the PDI concentrations of the acidic dispersions can also influence the morphology of the PDI/CC cathodes. As shown in the SEM images of PDI/CC-2, a condense PDI film with crumbles is wrapped around the CC substrates (Figs. 2c and 2d). And large agglomerations of PDI with disordered shapes can be found in PDI/CC-3 (Figs. 2e and 2f), which is due to the highest loading ratio of PDI in the cathode.
To further explore the composition and structure of the PDI/CC cathodes, the Fourier transform infrared (FTIR) spectra and X-ray diffraction (XRD) patterns of the samples are compared in this work (Fig. 3). To avoid the interference of CC substrates in the cathodes, the FTIR and XRD spectra of pristine PDI, PDI-A, and PDI-N are also recorded. In the FTIR spectra of pristine PDI (Fig. 3a), the absorption bands at 1687, 1362, and 3154 cm −1 can be ascribed to the stretching vibrations of the C=O, C-N, and N-H bonds in the imide groups of PDI. And the bands at 3044, 2919, 2856, and 1588 cm −1 can be assigned to the stretching vibrations of aromatic C-H bonds and perylene ring. [34,35] The pristine PDI, PDI-A, and PDI-N manifest almost identical FTIR spectra, suggesting that
Fig. 1
The schematic illustration on the acidic solution based processing method towards the integrated PDI/CC cathodes in this work. the different solution processing has no influence on the chemical structure of PDI. Similarly, the PDI/CC cathodes also have very closed FTIR profiles with only differences in the intensities of the absorptions (Fig. 3b), which all belong to the characterized absorption bands of PDI, confirming that the acidic solution based fabrication procedures are simple physical processes and no chemical reactions are involved. On the other hand, the XRD patterns of pristine PDI have distinct diffraction peaks at 10°, 12°, 20°, 25°, 27°, and 30° (Fig. 3c), which can be ascribed to the (011), (021), (002), (11 ), (12 ), and (140) planes of the monoclinic P21/n space group, respectively. [35] The similar diffractions can still be found in the XRD profiles of PDI-N, only with a slight shift to the smaller 2θ values, implying larger intermolecular distances. [36,37] In the case of PDI-A, the intensities of many diffraction peaks are greatly depressed and their positions shift to larger 2θ values, suggesting that the PDI molecules in PDI-A have a reduced crystalline degree but enhanced intermolecular distances. Different from their FTIR results, the XRD patterns of PDI-A and PDI-N have obvious differences, attributable to the different effects of the solvents. [29,30] In contrast, the PDI/CC cathodes have similar XRD patterns with one distinct peak at ~ 25° and multiple weak peaks in the range of 12°-30° (Fig. 3d). The strong and broad peak is ascribable to the (002) reflections of the graphitic carbon, [38] which should be derived from the CC substrate and the aromatic interactions between PDI and CC.
To evaluate the electrochemical performances of the PDI/CC cathodes, their cyclic voltammetry (CV) profiles were recorded in coin-type half cells with lithium foil as the anode in the voltage range of 1.5 V to 3.5 V at a scan rate of 0.5 mV·s −1 (Fig. 4a). In the cathodic scan, PDI/CC-1 exhibits a distinct reduction peak at 2.26 V and a much weaker peak at 2.50 V. Correspondingly, two oxidation peaks at 2.71 and 2.76 V can be found in its anodic scan. The similar redox couples at ~ 2.26/2.50 V and ~ 2.71/2.76 V can be found in the CV curves of PDI/CC-2 and PDI/CC-3, respectively. The current densities of the three PDI/CC cathodes are gradually increased, which should be due to the different loading amounts of PDI in them. [22,39] In contrast, the CV profile of PDI/CC-N only contains one pair of redox peaks at 2.25 and 2.74 V without fine structure. Moreover, the intensities of these peaks are much lower than those in PDI/CC-2 although they have similar amount of PDI, which might be caused by the difference in their conductivities. [40,41] According to previous literature, the electrochemical reaction of PDI with lithium ions in this voltage range involves the transportation of two electrons (Scheme S1 in ESI), corresponding to a theoretical specific capacity of ~ 137 mAh·g −1 . [42,43] On the other hand, the galvanostatic charge-discharge (GCD) profiles of the PDI/CC cathodes at a current density of 50 mA·g −1 indicate that all of the samples have an obvious charge/discharge plateau at about 2.60/2.40 V versus Li/Li + (Fig. 4b), which are in good agreement with their CV curves.
Based on the results from the GCD tests at varied chargedischarge current densities from 50 mA·g −1 to 2000 mA·g −1 , the rate capabilities of the PDI/CC cathodes are summarized in Fig. 4(c). It is notable that PDI/CC-1 delivers a high specific capacity of 136 mAh·g −1 at 50 mA·g −1 , closing to the theoretical value of PDI, while the specific capacities of PDI/CC-2 and PDI/CC-3 are only 113 and 66 mAh·g −1 , respectively. Even at a high current density of 2000 mA·g −1 , the capacity of PDI/CC-1 still can be retained at 118 mAh·g −1 , much higher than those of PDI/CC-2 (92 mAh·g −1 ) and PDI/CC-3 (44 mAh·g −1 ). Obviously, with the increased loading amounts of PDI, the capacities of the acidic solution processed PDI/CC cathodes have a downward trend along with the increasing loading amount of PDI, which can be correlated with their morphology from the SEM characterization. With the moderate ratio of PDI in PDI/ CC-1, the PDI component can form a homogenous layer over the CC substrate without aggregation, which allows efficient transportation of charge carriers within the whole cathode.
In the case of PDI/CC-2 and PDI/CC-3, the electrolyte cannot fully access the aggregated PDI, thus leading to the drastically decreased specific capacity. [44] Except the amount of PDI, the fabrication methods also show profound impacts on the electrochemical performances of the PDI/CC cathodes. As shown in Fig. 4(c), PDI/CC-N manifests a discharge capacity of 66 mAh·g −1 at 50 mA·g −1 , which drops to 39 mAh·g −1 at 2000 mA·g −1 . Given that PDI/CC-N has a similar loading amount of PDI to PDI/CC-2, its much reduced performances should be caused by the distinct morphology of the PDI layers in these cathodes. Besides the rate capabilities, the PDI/CC cathodes also exhibit different cycling stabilities (Fig. 4d). With the coulombic efficiencies (CEs) of 100%, the capacities of PDI/CC-1 vary from 126 mAh·g −1 to 106 mAh·g −1 after 300 charge-discharge cycles at a current density of 500 mA·g −1 , corresponding to a high retention rate of 83%, which is similar to that of PDI/CC-2 (84%) and higher than those of PDI/CC-3 (75%) and PDI/CC-N (75%). The differences in the electrochemical performances of the PDI/CC cathodes should be due to the distinct combination effects of PDI and CC caused by the processing method as well as the amount of PDI.
An evidence about the different solvent effects can be found in the optical behavior of PDI in H 2 SO 4 and NMP (Fig. S4 in ESI). PDI dispersed in H 2 SO 4 exhibits strong red fluorescence under the irradiation of 365 nm UV light, while its NMP dispersion shows no fluorescence at all. The UV-Vis spectra of PDI in H 2 SO 4 have four distinct absorption bands at 478, 513, 551, and 596 nm (Fig. S4b in ESI), which can be ascribed to the 0→3, 0→2, 0→1, and 0→0 vibronic π-π * transitions in PDI, respectively. [45] Generally, the self-aggregation of PDI will cause the shift of absorption bands gradually from 0→0 to 0→1 and 0→2 transitions, [46,47] which can perfectly explain the unresolved broad UV-Vis spectra of PDI in NMP with low intensity. In the fluorescence spectra (Fig. S4c in ESI), PDI in H 2 SO 4 exhibits a maximum emission wavelength of 631 nm, which is typical for the well-dispersed PDI in solution. Under the same conditions, the emission wavelength of PDI in NMP blue-shifts to 420−550 nm, which should also be caused by the strong aggregation of PDI. [45] Therefore, the acidic solution based processing is not a simple physical mixing of PDI with CC in the presence of solvent, but actually a dissolutionprecipitation processes. The Raman spectra of the PDI/CC cathodes were further recorded to explore the local structures of PDI in them (Fig. S5 in ESI). In the case of pristine PDI, the bands at 242, 546, and 1066 cm −1 can be ascribed to the bending vibration of the C-N-C group, ring radial vibrations of perylene and C-H bending vibrations, respectiv- ely. [48] And the bands at 1302, 1375, and 1444 cm −1 are from the stretch of the conjugated ring, while the ones at 1571, 1612, and 1586 cm −1 correspond to the C=C stretch vibrations. [49,50] It is worth noting that the absorption bands from PDI in the Raman spectra of PDI/CC-N perfectly coincide with those of the pristine PDI, while the bands in the three acidprocessed PDI/CC cathodes exhibit slight red-shifts. Basically, the Raman bands of a sample will shift to lower frequency when doped by electron donors. [51] Since PDI is a typical electron acceptor, its deposition on the surface of electron-rich CC may result in the charge transfer between them. Therefore, the red-shifts in the Raman spectra of the acid processed PDI/CC cathodes suggest the improved combination effects of PDI and CC.
To further examine the contact between PDI and CC, electrochemical impedance spectra (EIS) of the PDI/CC cathodes were employed to reveal the reaction kinetics for the insertion/de-insertion of lithium ions in them (Fig. 5a). According to the inset equivalent circuit, the charge transfer resistances in the electrolyte/electrode interface (R ct ) of PDI/CC-1, PDI/CC-2, PDI/CC-3, and PDI/CC-N are 116.9, 169.6, 234.9 and 230.0 Ω, respectively. The values of R ct from the acidic solution processed PDI/CC cathodes gradually increase in the sequence of PDI/CC-1, PDI/CC-2, and PDI/CC-3, and the vari-ation trend is in good accordance with the amount of PDI, which explains their different rate capabilities. [52,53] On the other hand, PDI-N has a high R ct of 230.0 Ω, which is close to the value of PDI/CC-3 and much higher than that of PDI/CC-2, attributable to the different influences of the solvents. The galvanostatic intermittent titration technique (GITT) was further applied to compare the PDI/CC cathodes by using a current density of 10 mA·g −1 with a charge-discharge intermittent of 1 h and a consequent relaxation time of 2 h (Fig. 5b). Under the conditions of the GITT test, PDI/CC-N, PDI/CC-1, PDI/CC-2, and PDI/CC-3 show discharged specific capacities of 61, 136, 111, and 96 mAh·g −1 , respectively. Compared with the values from GCD profiles (Fig. 4b), the capacities of PDI/ CC-N, PDI/CC-1, and PDI/CC-2 are nearly unchanged, while that of PDI/CC-3 improves dramatically from 66 mAh·g −1 to 96 mAh·g −1 . Therefore, the low capacity of PDI/CC-3 is mainly caused by the slow lithiation kinetics of the aggregated PDI layer, [54,55] which is still accessible to the electrolyte. In contrast, large proportion of PDI in PDI/CC-N is inactive even under the conditions of GITT, [56] suggesting the superiority of the acidic solution based processing method. Since PDI/CC-2 and PDI/CC-N have similar loading ratio of PDI, their GITT curves are further compared to elucidate the difference between H 2 SO 4 and NMP. As shown in Fig. 5(c), PDI/CC-2 has a more depressed cell polarization than PDI/CC-N with the normalized capacity. To quantify the cell polarization of both cathodes, the internal cell resistances of them were calculated in Fig. 5(d) as a function depth of discharge, which indicates the internal resistances of PDI/CC-2 are much lower than those of PDI/CC-N at any discharge states. Considering the results from the structural characterization, the acidic solution processing allows the fusion of PDI nanorods on the surface of CC, which can lead to the efficient combination of PDI and CC in the resulting cathodes, thus resulting in a better charge carrier mobility within PDI/CC-2 than PDI/CC-N. With excellent electrochemical performances, the integrated PDI/CC cathodes were obtained in this work without tedious synthesis processes or post treatments, [31−33] which is attributed to the advantages of the solution based fabrication method. Encouraged by the excellent performances of PDI/CC-1 in half cell, we further evaluated its lithium storage behavior as the cathode in full cell with pre-lithiated carbon cloth as the anode (Fig. 6a and Fig. S6 in ESI). Named as PDI/CC-1||CC, the full cell can have a capacity close to its theoretical value (136 mAh·g −1 ) at the current density of 50 mA·g −1 (Fig. 6b), which is almost the same as the result from the half cell. More impressively, PDI/CC-1||CC still holds a high capacity of 102 mAh·g −1 even at an ultrahigh current density of 5000 mA·g −1 , revealing its excellent rate capability. On the other hand, the good cycling stability of PDI/CC-1 can also be retained in the full cell. As indicated in Fig. 6(c), PDI/CC-1||CC manifests a capacity of 101 mAh·g −1 after 300 charge-discharge cycles at 500 mA·g −1 , corresponding to a high retention rate of 83% for its initial value (120 mAh·g −1 ). Benefiting from the good mechanical stability of its electrodes, PDI/CC-1||CC can reserve its capacities at bending states, indicating another feature of the acidic solution processed cathodes. With the gradual variation of the bending angles of 0°, 90°, 180°, and Re-0° (restored to flat state), the capacities of PDI/CC-1||CC at 250 mA·g −1 are 133, 132, 130, and 129 mAh·g −1 , respectively (Fig. 6d), which show no obvious changes at the different bending states. In addition, the GCD profiles of PDI/CC-1||CC under bending states further confirm the stable output voltages of the full cell (Fig. 6e). With a static bending angle of 90°, PDI/CC-1||CC can reserve 89% of the initial capacity with a quantitative CE after 200 charge-discharge cycles at 250 mA·g −1 (Fig. 6f). As summarized in Table S2 (in ESI), the electrochemical performances of PDI/CC-1||CC are comparable to recently reported full cells with either inorganic or organic electrodes, further demonstrating its appealing potential for the practical applications in flexible energy storage.
CONCLUSIONS
In this work, PDI and CC based LIB cathodes were prepared by an acidic solution assisted processing method, which can easily produce integrated cathodes with good electrochemical performances and high mechanical stability. The fabrication process needs neither organic solvents nor high temperatures, thus providing a simple and environmentally friendly avenue for organic LIB cathodes. Compared with the inorganic LIB cathodes, the PDI/CCs with low cost and high sustainability are more attractive considering the potential economic and environmental problems caused by the mineral components in present LIBs. Moreover, the general applicability of organic electrodes will also enable this method to be applied in the manufacturing of organic secondary battery cathodes with diversified charge carriers including sodium, magnesium, and potassium.
Electronic Supplementary Information
Electronic supplementary information (ESI) is available free of charge in the online version of this article at https://doi.org/10.1007/s10118-020-2388-8.
ACKNOWLEDGMENTS
This work was financially supported by the National Natural Science Foundation of China (Nos. 61575121, 51772189, 21772120, 21774072, and 21720102002). We also thank the | 7,028 | 2020-03-19T00:00:00.000 | [
"Materials Science",
"Engineering",
"Chemistry"
] |
Information technology and class action notification
. The article discusses the use of information technology for class action notification. The author analyzes the foreign experience of using information technology to notify potential group members about a class action field. In international practice, there are two ways of notification: public and individual. In both cases, the capabilities of the Internet can be used. So, a public notice involves non-personalized notification of the alleged group members. A public notice can be implemented, for example, by posting information about a class action on various sites, banners. It is also possible to use the social networks of the defendant or the plaintiff-representative. Individual notification in this context can be implemented via e-mail and social networks, and their messengers. As a result of the analysis of the Russian legislation on class actions regarding the notification of potential participants, the author notes the heterogeneity of procedural mechanisms for sending notifications and the limited use of information technologies.
Introduction
Class actions have long become elements of modern procedural systems and services to protect not only "traditional" subjective rights. Class actions have become a means of defense for numerous groups of individuals whose rights and interests have been violated on the Internet. Examples of such claims are claims brought against Facebook and Google in connection with the acquisition and dissemination of users' personal data.
At the same time, even though class actions are designed to protect "new" rights and legitimate interests, for a long time, this procedural institution did not use new information technologies to ensure the most efficient and fast work. At the present stage, information technologies are actively being introduced into the field of legal proceedings. The institution of class action in this regard does not stand aside. Information technology is generally used when considering class actions, including filing a claim, notifying potential participants in a class action, and forming a group (maintaining a register of participants). The most important task of group production is the task of forming a group, the solution of which is impossible without notifying the group members. That is why in this article, we focus on the possibility of using information technology to notify potential members of a large group of people.
Group claim notification models
Notice in class actions plays an important role. They allow notifying all potentially interested persons to ensure the formation of a group. Consequently, notices provide protection for multiple individuals and avoid multiple separate proceedings by individuals who were members of the group but did not know that the class action was being filed.
The procedural legislation of various countries to notify potential participants in a class action provides the opportunity to use both individual and public notices. Traditionally, the mechanism of sending notifications by mail with a return receipt was used as a particular notification method, publication in various media as a public method of notification. Various combinations of these methods (several individual and public) are also used [1]. It is the combination of methods that is recognized as the most correct and ensures compliance with the doctrine of due process of law and jurisprudence.
US and Australian experience
Naturally, new methods of notification are associated with the use of the Internet. Notification over the Internet can be made in a variety of ways, both individually and publicly. Banners can be placed on the desired site. The notices themselves can be published on sites that the group members are likely to visit. Also, notifications can be sent to the e-mail of the group members directly. As is the case with traditional notification methods, a combination of different methods can be used. The Internet is rapidly becoming the mainstay of class action notices in terms of a custom delivery mechanism. As a result, more potential group members may find out about the class action [2].
Individual notifications
Since the Internet is now ubiquitous and almost everyone has their e-mail address, individual class action filing can be more effective and less costly by sending notices via e-mail.
While researching the use of e-mail to send notices, American researchers questioned whether the e-mail addresses of group members could be obtained through "reasonable effort." This issue is primarily a practical problem. One of the reasons why e-mail notification may not be practical is that not all people have e-mail addresses yet. Another problem is that the e-mail address does not automatically include the citizen's full name, which makes it potentially more difficult to find a specific e-mail address than a street address. The third reason is that there is no single directory of e-mail addresses [3].
As the use of e-mail becomes ubiquitous, class action defendants may have lists of group members' e-mail addresses that, like street addresses, the court can order to provide.
One of the most compelling reasons an e-mail notice meets the due process criterion is that it is comparable to return postage, the most common way of sending notices. The name "e-mail" alone proves the analogy. While e-mail and return-of-delivery mail are naturally not the same, the similarities are so evident that e-mail must comply with due process requirements as a returnof-delivery mail.
E-mail and traditional mailing are so similar because the procedures by which they are created and sent are parallel to one another. In both cases, the sending procedure begins with the creation of a text document by the sender. The sender then "sends" this document to a person at a specific address. A legacy mail service and an online e-mail service deliver the letter to the recipient's mailbox. Both letters "wait" in the recipient's mailbox until he checks it, then reads it [4].
Over time, American courts have come to understand that e-mail and Internet notification are acceptable means of delivering notices in a group proceeding. Moreover, courts are beginning to accept the contention that Internet notification may be preferable to traditional notification publishing methods.
This situation is typical not only of the countries of Anglo-Saxon law. The use of e-mail to notify group members is also allowed in Japan and the Netherlands [5].
One of the exciting ways to send individual notifications that have appeared in recent years is through social networks: Facebook, Twitter, and LinkedIn [6]. For example, the Australian Capital Territory's Supreme Court allowed participants in the proceedings, including class members, to be notified via Facebook messenger [7]. The authors point out that Facebook post notification would be helpful in addition to and cannot replace other direct forms of individual notification such as email or postal mail at this time. The reason is that some people do not use Facebook, and of those who do, not all use their official name as it appears on the class list; some people, for example, use fictitious names to protect their privacy. Another problem is that a person may not be serious about a notification received through Facebook. The above is similar to the problems of this method of notification with email notification.
Public Notice
The second method of notification is public. First of all, it involves the placement of relevant information on Internet sites.
For the United States, posting notices on dedicated Internet sites, which may be visited by group members and combine more detailed information, is a helpful addition to the individual notice [8] but can also be used separately. This posting of notices can be done at a relatively low cost and has recently become quite effective as the percentage of the population that regularly uses the Internet is constantly growing.
A myriad of sites provides information and links to class action lawsuits. On the one hand, these websites serve the same purpose as individual notices: to alert group members to a class action that will determine their rights and to include those group members in either a proceeding or settlement. On the other hand, these websites are educational portals that provide information on everything from the most basic rules to the most complex group production issues. These websites can be categorized into "independent websites containing neutral information," "independent websites containing motivational information," and "websites owned by law firms and containing incentive information" [2].
The first is that independent sites containing neutral information have no connection with any law firms or referral services. They provide information about class actions on their own. These sites act as clearinghouses through which absent group members and others can obtain information without fear of bias or distortion. These websites include FindLaw Class Action Center (http://classaction.fmdlaw.com), Class Action Lawsuits (http://www.web-access.net/--aclarklframes45.htm.), and Class Action Litigation Information (http.//www.classactionlitigation.com).
Independent websites that contain motivational information like the ones above provide class action information on their own. However, this information is not neutral. Although these sites do not provide services, they "sell ideas" [2]. This type of website is a blog, and both well-known scientists and legal practitioners maintain these blogs. Blogs address various group production issues (e.g., Mass Tort Litigation Blog, http://lawprofessors.typepad.com/mass-tort-litigation; Products Liability Prof Blog, http-./lawprofessors.typepad.com/products liability; Class Action Defense Blog, http://classactiondefense.jmbm.com).
The last type of sites are websites that are owned by law firms and contain motivating information. These sites provide potential plaintiffs with the opportunity to become members of class action lawsuits. These sites do not provide class action information per se. Instead, these sites engage customers by providing information on pending or potential class claims. Such websites are either moderated by law firms that specialize in representing plaintiffs or defendants in a group proceeding or by other interested companies. Returning to the possibility of using social media for class action notices, researchers and US courts also point out the permissibility of using Facebook as a public notification method [6].
Specifically, the ad may be posted on the defendant's Facebook profile page (Kelly v. Phiten USA, Inc., 277 F.R.D. (S.D. Iowa 2011)) or a class-action law firm. The court approved the notification plan by posting a message on the defendant Phiten's Facebook page in the trial Kelly v. Phiten USA. While as follows from the judicial act, the notification on the social network was not the only one -potential group members were notified by sending them individual notices via e-mail. Moreover, in this case, the traditional methods of notification were not used for notification. This case shows how people can successfully combine public and private notices using the resources of the Internet.
Thus, American science and practice determine a fairly large number of ways to notify potential members of a large group about a class action filed, including using the Internet. The development of ideas about the possibility of notification through information technology and the conclusion that such methods comply with the doctrine of due process [3] led to the fact that in 2018, Rule 23 of the Federal Civil Procedure Rules were amended, which currently allows the use of "electronic means" for class action notification [9].
Russian experience
Initially, we note that the rules for handling class actions are not contained in just one piece of legislation. The Arbitration Procedure Code of the Russian Federation The Arbitration Procedural Code of the Russian Federation (Article 225.14) and the Civil Procedure Code of the Russian Federation (Article 244.26) establish as the primary method of notification in a class action -public notification by publishing a message in the media. Until November 1, 2019, the Arbitration Procedure Code of the Russian Federation allowed, along with the public and individual notice, by sending a registered letter with acknowledgment of receipt. It is also possible to consider a public method of notification posting relevant information about the filed class action on the website of the relevant court on the Internet. Only in exceptional cases is it allowed to request information about the group members from the defendant and send them the appropriate notifications [10].
In the Code of Administrative Procedure of the Russian Federation, there are no special rules for notification of class actions at all. Accordingly, notifications are allowed according to the general rules enshrined in Chapter 9 of the Code of Administrative Procedure of the Russian Federation. This circumstance means that only individual notices can be used in the form of registered letters and subpoenas with acknowledgment of receipt, telegrams and telephone messages, and facsimile communications. Unfortunately, it is impossible to use electronic communication means [11] (e-mail or SMS notification). The purpose is that these means do not allow "the court to make sure that the addressee has received a judicial notice." Even more so, they are allowed only with the person's consent indicated in the corresponding receipt. However, another important thing, in this case, is that often, potential group members cannot be identified. This circumstance may be especially relevant in cases of challenging regulatory legal acts, where the circle of potential participants is obviously not defined.
Analyzing the provisions of the Arbitration Procedural Code of the Russian Federation, the Civil Procedure Code of the Russian Federation, and the Code of Administrative Procedure of the Russian Federation concerning the methods of notification, it can be noted that the use of only one method of notification is insufficient. Why the plaintiff-representative, knowing the composition of the large group (or part of it), cannot use an individual notice in addition to the public one? This idea contrasts the US law's "best practice in appropriate circumstances" notices rule. It would be correct to fix both options in the procedural codes [12].
The problem with class action notification in Russia is the lack of use of information technology. Indeed, the Arbitration Procedure Code of the Russian Federation and the Civil Procedure Code of the Russian Federation indicates the possibility of posting a class action notice on the website of the respective court. However, how often do participants in civil and arbitration proceedings, administrative proceedings use the websites of the respective courts? The answer is rather negative. Even professional participants in the legal services market using only the services "My Arbiter" or the State Automated System "Justice." The possibility of placing a public notice on the court's website specified in the law is supplemented by the possibility of disputes on bringing the controlling person to subsidiary liability in the framework of bankruptcy cases, including the corresponding message in the Unified Federal Register of Bankruptcy Information (https://bankrot.fedresurs.ru). Unfortunately, for some reason, the experience of using this resource is not extended to the consideration of all cases of group proceedings.
As we said above, the use of information technology for public notice is not limited only to official Internet resources -in world practice, other resources are also used -various kinds of sites. In Russian jurisprudence, there are cases in which the issue of the admissibility of using various Internet resources for posting notices of a class action filed is considered. Thus, in case No. 2-729 / 2020, the Yegoryevsky City Court of the Moscow Region stated: "A publication on a website that is not a popular media outlet and has a different coverage area for potentially interested parties in the lawsuit under consideration should be considered inappropriate since it is not aimed at bringing information to an unlimited number of people. In addition, the publication of notices on websites does not provide an opportunity for the unambiguous perception and unimpeded recording of information by potential plaintiffs" (Determination of the Yegoryevsk City Court of the Moscow Region dated May 28, 2020, in case No. 2-729 / 2020). From the above, we can conclude that the court, in this case, considered the notice posted on the website inappropriate since it does not provide "broad involvement of the plaintiffs interested in the claim." Although, of course, the conclusion of the court is somewhat controversial. It seems that the opposite is true -traditional media (newspapers, television, and radio) are gradually losing their positions, new media, the platform of which is the Internet, are gaining more and more importance. And not using such a site is a significant omission.
A rule has appeared in the Arbitration Procedure Code of the Russian Federation and the Civil Procedure Code of the Russian Federation, according to which group members (primarily a person who has applied to protect the rights and legitimate interests of a group of persons) can agree on the procedure for incurring legal costs in a class action. Relevant information about class actions and the possibility of joining an already filed lawsuit can be posted. This rule should facilitate the involvement of the legal business in the scope of the class action. This circumstance would prompt be creating websites (or placement on existing ones) of law firms containing motivating information, similar to those described in clause 1.1.2 of this article. Another option is the emergence of specialized funds for financing class actions. For example, the PLATFORMA website (https://platforma-online.ru/) already contains information about the filed class actions. At the same time, as noted, the Russian procedural legislation does not consider such notification is appropriate.
It is even more challenging to talk about the possibility of using social media to notify potential class members than about using websites. There is only one reason -the publication of information on social networks is an inappropriate notification method. In Russia, the most frequently used social networks are Vkontakte, Odnoklassniki, and the same Facebook. Thus, in one of the cases, the court indicated that posting information on the Vkontakte social network could not correctly publish a proposal to join the claim. The purpose is such a proposal must be made by publishing a message in the media "(Determination of the Kirovsky District Court of 20.08.2020 in case No. 2-3435 / 2020). From the above, we can conclude, that formally, social networks are not mass media. Therefore, the choice of such a notification method does not comply with the law. At the same time, judicial practice does not offer convincing reasons why such a notification method is "inappropriate." This approach can also be found in the literature. Thus, it is noted that "some plaintiffs, however, show unnecessary creativity and post an offer to join in social networks. However, such a format, of course, is inappropriate" [13].
However, it seems that, as in the case of foreign experience, the use of several methods of notification, including the use of particular methods (e-mail), public methods (various kinds of sites, social networks), would give a better result.
At the same time, the reform of the Russian class action system was widely covered on the Internet on various platforms: from media sites (Izvestia, Kommersant) to sites of various law firms or even social networks for lawyers. In this regard, it would be helpful for Russian legal reality to use the information portals for lawyers "Zakon.ru" (https://zakon.ru) and "Pravo.ru" (https://pravo.ru), which have already been operating for several years. On these sites, it would be possible to create appropriate sections dedicated to group production. They would contain information about the class actions filed with a proposal to join, information about the person who claimed in defense of the group of persons, and about his representatives (lawyers). In addition, many well-known scholars and practicing lawyers blog on these sites, from where it will also be possible to obtain information about class claims and their practice.
Conclusions
Russian procedural legislation in terms of the rules for notification of class actions is heterogeneous: in one case, public notice is allowed; in the other, only the individual. In the context of the use of information technology, the rules for notifying potential group members are very limited. Thus, procedural legislation indicates the possibility of public notification (in the media). However, among other methods, it notes only the posting of information on the official websites of the respective courts. At the same time, the use of individual notifications using electronic almost and/or various kinds of messengers is not allowed. It should be admitted that the use of information technology in this area has been postponed for several years. | 4,618.6 | 2022-01-01T00:00:00.000 | [
"Computer Science",
"Law"
] |
Proteasome dysfunction triggers activation of SKN-1A/Nrf1 by the aspartic protease DDI-1
Proteasomes are essential for protein homeostasis in eukaryotes. To preserve cellular function, transcription of proteasome subunit genes is induced in response to proteasome dysfunction caused by pathogen attacks or proteasome inhibitor drugs. In Caenorhabditis elegans, this response requires SKN-1, a transcription factor related to mammalian Nrf1/2. Here, we use comprehensive genetic analyses to identify the pathway required for C. elegans to detect proteasome dysfunction and activate SKN-1. Genes required for SKN-1 activation encode regulators of ER traffic, a peptide N-glycanase, and DDI-1, a conserved aspartic protease. DDI-1 expression is induced by proteasome dysfunction, and we show that DDI-1 is required to cleave and activate an ER-associated isoform of SKN-1. Mammalian Nrf1 is also ER-associated and subject to proteolytic cleavage, suggesting a conserved mechanism of proteasome surveillance. Targeting mammalian DDI1 protease could mitigate effects of proteasome dysfunction in aging and protein aggregation disorders, or increase effectiveness of proteasome inhibitor cancer chemotherapies. DOI: http://dx.doi.org/10.7554/eLife.17721.001
Introduction
The proteasome is a multi-protein complex responsible for the majority of protein degradation in eukaryotic cells (Tomko and Hochstrasser, 2013). The essential function of the proteasome, and its highly conserved structure and mechanism of proteolysis renders it an attractive target for bacteria and other competitors. Production of small molecule inhibitors and protein virulence factors that target the proteasome by some bacteria and fungi exploits this vulnerability to gain a growth advantage (Fenteany et al., 1995;Groll et al., 2008;Meng et al., 1999). In addition, environmental stresses antagonize the proteasome by causing accumulation of unfolded and aggregated proteins that can form a non-productive inhibitory interaction with proteasomes (Ayyadevara et al., 2015;Deriziotis et al., 2011;Kristiansen et al., 2007;Snyder et al., 2003). Human diseases in which proteasome dysfunction is implicated highlight the importance of maintaining proteasome function in the face of these challenges (Ciechanover and Kwon, 2015;Paul, 2008;Tomko and Hochstrasser, 2013), and it follows that animal cells possess mechanisms to monitor and defend proteasome function.
A conserved response to proteasome disruption is the transcriptional up-regulation of proteasome subunit genes (Fleming, 2002;Meiners et al., 2003;Wó jcik and DeMartino, 2002). In mammalian cells members of the Cap' n' Collar basic leucine zipper (CnC-bZip) family of stress responsive transcription factors mediate this transcriptional response. Two CnC-bZip franscription factors, Nrf1/NFE2L1 and Nrf2, have similar DNA-binding domains and may regulate an overlapping set of downstream targets. However, only Nrf1 is required for upregulation of proteasome subunits following proteasome disruption, whereas Nrf2 may activate proteasome expression under other circumstances (Arlt et al., 2009;Radhakrishnan et al., 2010;Steffen et al., 2010). The events leading to Nrf1 activation in response to proteasome disruption are complex. In vitro analyses in human and mouse cells indicate that Nrf1 is an endoplasmic reticulum (ER) membrane associated glycoprotein that is constitutively targeted for proteasomal degradation by the ER-associated degradation (ERAD) pathway. Upon proteasome inhibition Nrf1 is stabilized, undergoes deglycosylation and proteolytic cleavage, and localizes to the nucleus (Radhakrishnan et al., 2014;Sha and Goldberg, 2014;Wang, 2006;Zhang and Hayes, 2013;Zhang et al., 2015Zhang et al., , 2007Zhang et al., , 2014. How processing of Nrf1 is orchestrated, and its significance in responses to proteasome disruption in vivo are not understood. Upon proteasome disruption, C. elegans induces transcription of proteasome subunit, detoxification, and immune response genes, and animals alter their behavior to avoid their bacterial food source (Li et al., 2011;Melo and Ruvkun, 2012). The transcriptional response to proteasome disruption involves skn-1, which encodes multiple isoforms of a transcription factor with similarities to both Nrf1 and Nrf2 Li et al., 2011). skn-1 was originally identified for its essential role in embryonic development (Bowerman et al., 1992), but is also required after these early stages for stress responses in a manner analogous to mammalian Nrf1/2 (An and Blackwell, 2003;Oliveira et al., 2009;Paek et al., 2012). SKN-1 binds to the promoters of proteasome subunit genes and mediates their upregulation in response to proteasome disruption, and is required for survival of a mutant with attenuated proteasome function (Keith et al., 2016;Li et al., 2011;Niu et al., 2011). The molecular mechanism that links SKN-1 activation to the detection of proteasome dysfunction has not been established.
Here, we use genetic analysis to uncover the mechanism that couples detection of proteasome defects to these transcriptional responses in C. elegans. We find that an ER-associated isoform of SKN-1 (SKN-1A), is essential for this response. Our genetic data show that the ER-association of this transcription factor normally targets it for poteasomal degradation via ERAD, but is also required for its correct post-translational processing and activation during proteasome dysfunction. After ER-trafficking, our data argues that the PNG-1 peptide N-glycanase removes glycosylation modifications that occur in the ER, and then the DDI-1 aspartic protease cleaves SKN-1A. Each of these steps in SKN-1A processing is essential for the normal response to proteasomal dysfunction. This pathway is essential for compensation of proteasome function under conditions that partially disrupt the proteasome; when compensation is disabled, mild inhibition of the proteasome causes lethal arrest of eLife digest Proteins perform many important roles in cells, but these molecules can become toxic if they are damaged or are no longer needed. A molecular machine called the proteasome destroys 'unwanted' proteins in animal and other eukaryotic cells. If the proteasome stops working properly, unwanted proteins start to accumulate and cells respond by increasing the activity of genes that make proteasomes. A protein called SKN-1 is involved in this response and activates the genes that encode proteasome proteins, but it is not understood how SKN-1 "senses" that proteasomes are not working properly.
Here, Lehrbach and Ruvkun used a roundworm called Caenorhabditis elegans to search for new genes that activate SKN-1 when the proteasome's activity is impaired. The roundworms were genetically engineered to produce a fluorescent protein that indicates when a particular gene needed to make proteasomes is active. Lehrbach and Ruvkun identified some roundworms with mutations that cause the levels of fluorescence to be lower, indicating that SKN-1 was less active in these animals. Further experiments showed that some of these mutations are in genes that encode enzymes called DDI-1 and PNG-1. DDI-1 is able to cut certain proteins, while PNG-1 can remove sugars that are attached to proteins. Therefore, it is likely that these enzymes directly interact with SKN-1 and alter it to activate the genes that produce the proteasome.
More work is now needed to understand the details of how modifying SKN-1 changes its activity in cells. In the future, drugs that target DDI-1 or PNG-1 might be used to treat diseases in which proteasome activity is too high or low, including certain cancers and neurodegenerative diseases. development. Thus we reveal a vital mechanism of proteasome surveillance and homeostasis in animals.
Results
The aspartic protease DDI-1 and ERAD factors are required for transcriptional responses to proteasome disruption The proteasome subunit gene rpt-3 is upregulated in a skn-1-dependent manner in response to proteasome disruption (Li et al., 2011). We generated a chromosomally integrated transcriptional reporter in which the rpt-3 promoter drives expression of GFP (rpt-3::gfp). This reporter gene is upregulated in response to drugs such as bortezomib or mutations that cause proteasome dysfunction. To identify the genetic pathways that sense proteasome dysfunction and trigger the activation of SKN-1, we took advantage of a regulatory allele affecting the pbs-5 locus. pbs-5 encodes the C. elegans ortholog of the beta 5 subunit of the 20S proteasome. The pbs-5(mg502) mutation causes constitutive skn-1-dependent activation of rpt-3::gfp expression, but does not otherwise alter fertility or viability (Figure 1-figure supplement 1). Following EMS mutagenesis, we isolated a collection of recessive mutations that suppress the activation of rpt-3::gfp caused by pbs-5(mg502), and identified the causative mutations by whole genome sequencing ( Table 1). The collection includes multiple alleles of genes encoding factors required for ERAD. In ERAD, misfolded glycoproteins are retrotranslocated from the ER lumen to the cytoplasm, where they are degraded by the proteasome . We isolated 3 alleles of sel-1, a gene that encodes the C. elegans orthologue of HRD3/SEL1. HRD3/SEL1 localizes to the ER membrane and recognizes ERAD substrates in the ER (Carvalho et al., 2006;Denic et al., 2006;Gauss et al., 2006), and a single allele of sel-9, which encodes the C. elegans orthologue of TMED2/EMP24, which is also ER-localized and implicated in ER quality control (Copic et al., 2009;Wen and Greenwald, 1999). We also found mutations in png-1, which encodes the C. elegans orthologue of PNG1/NGLY1. After ERAD substrates have been retrotranslocated to the cytoplasm, PNG1/NGLY1 removes N-linked glycans to allow their degradation by the proteasome (Kim et al., 2006;Suzuki et al., 2016). Most strikingly, we isolated six alleles of C01G5.6 (hereafter ddi-1), which encodes the C. elegans orthologue of DDI1 (DNA damage inducible 1). DDI-1 is an aspartic protease, highly conserved throughout eukaryotes (Sirkis et al., 2006). DD1's function is poorly understood, but it has been implicated in regulation of proteasome function and protein secretion (Kaplun et al., 2005;White et al., 2011).
We examined activation of rpt-3::gfp in ERAD and ddi-1 mutant animals following disruption of proteasome function by RNAi of the essential proteasome subunit rpt-5. rpt-5(RNAi) caused larval arrest confirming that all genotypes are similarly susceptible to RNAi. While rpt-5(RNAi) causes robust activation of rpt-3::gfp in wild-type animals, mutants lacking ERAD factors or ddi-1 failed to fully activate rpt-3::gfp ( Figure 1a). The requirement for sel-11, which encodes an ER-resident ubiquitin ligase required for ERAD , supports a general requirement for ERAD in activation of rpt-3::gfp expression. These genes are also required for upregulation of rpt-3::gfp following proteasome disruption by bortezomib (data not shown). Unlike the wild type, mutants defective in ERAD, or lacking DDI-1, arrested or delayed larval development in the presence of low doses of bortezomib ( Figure 1b). We analyzed independently derived alleles of png-1 and ddi-1, indicating that hypersensitivity to proteasome inhibition is unlikely to be a consequence of linked background mutations. png-1 animals consistently showed The following figure supplement is available for figure 1: the most severe defect, and were unable to grow in the presence of very low concentrations of bortezomib. Consistent with their drug sensitivity, mild disruption of proteasome function by RNAimediated depletion of the non-essential proteasome subunit RPN-10 causes a synthetic larval lethal phenotype in animals mutant for png-1. The bortezomib sensitivity of ddi-1; sel-11 double mutants was not enhanced compared to that of ddi-1 single mutants, suggesting that ddi-1 and ERAD factors act in the same genetic pathway. We conclude that ERAD and DDI-1 are required for transcriptional upregulation of proteasome subunits and survival during proteasome dysfunction. Given the defective activation of rpt-3::gfp, a direct target of SKN-1, it is likely that upon proteasome disruption, these factors are required to activate SKN-1.
We used CRISPR/Cas9 to generate an isoform-specific genetic disruption of SKN-1A, by introducing premature stop codons to the skn-1a specific exons of the skn-1 locus (hereafter referred to as skn-1a mutants). Homozygous skn-1a mutant animals are viable, and under standard conditions show a growth rate and fertility indistinguishable from the wild type. However, skn-1a mutant animals fail to activate rpt-3::gfp in the pbs-5(mg502) mutant background, or upon RNAi of essential proteasome subunit genes, or exposure to bortezomib (Figure 2c,d, data not shown). We note that in these experiments skn-1a mutants failed to activate rpt-3::gfp in all tissues, including the intestine, where SKN-1C is expressed. Consistent with the failure to upregulate rpt-3::gfp, skn-1a mutants show larval lethality when proteasome dysfunction is induced by rpn-10(RNAi) or treatment with a low dose of bortezomib (Figure 2e,h). These skn-1a mutations specifically affect SKN-1A, but leave SKN-1B and SKN-1C unaltered, indicating that SKN-1A is essential for normal responses to proteasome disruption and in the absence of SKN-1A, the other isoforms are not sufficient. A number of stimuli that trigger stabilization and nuclear accumulation of a transgenic SKN-1C::GFP fusion protein are known, but relatively little is known about whether these stimuli also affect SKN-1A . We used miniMos transgenesis (Frøkjaer-Jensen et al., 2014) to generate genomically integrated single-copy transgenes that expresses C-terminally GFP-tagged full length SKN-1A (SKN-1A::GFP), and a second C-terminally GFP tagged truncated SKN-1A that lacks the DNA binding domain ( . We generated similar transgenes to express tagged full length and truncated SKN-1C, but did not observe any effect of proteasome disruption (data not shown). These data suggest proteasome dysfunction triggers activation of SKN-1A, but not SKN-1C.
We introduced the SKN-1A::GFP transgene into the skn-1a(mg570) and skn-1(zu67) mutant backgrounds. SKN-1A::GFP rescued the maternal effect lethal phenotype of skn-1(zu67). SKN-1A::GFP also restored wild-type resistance to proteasome disruption, as assayed by growth on rpn-10(RNAi) (Figure 3), or growth in the presence of low concentrations of bortezomib (data not shown). This indicates that the SKN-1A::GFP fusion protein is functional, and that SKN-1A::GFP is sufficient for normal responses to proteasome dysfunction even in the absence of SKN-1C (which is disrupted by the zu67 allele). As such, the transmembrane-domain-bearing SKN-1A isoform is necessary and sufficient for responses to proteasome dysfunction.
Mutation of ddi-1 does not enhance the sensitivity of skn-1a mutants to bortezomib, suggesting that DDI-1 acts through SKN-1A to promote resistance to proteasome inhibitors ( Figure 2h). Additionally removing SEL-11 weakly enhanced the bortezomib sensitivity of ddi-1 skn-1a double mutants, and also caused occasional growth defects even in the absence of proteasome disruption, suggesting that ERAD promotes resistance to proteasome inhibitors largely, but not solely, through regulation of SKN-1A. We examined how ERAD factors regulate SKN-1A using the SKN-1A::GFP transgenes. sel-1 and sel-11 mutants accumulate high levels of SKN-1A[DDBD]::GFP even in the absence of proteasome inhibitors, showing that SKN-1A is constitutively targeted for proteasomal degradation via ERAD (Figure 3a). Upon proteasome disruption, sel-1 and sel-11 mutants show defects in SKN-1A::GFP nuclear localization consistent with defective release from the ER (Figure 3b). Following proteasome disruption in png-1 mutants SKN-1A::GFP localizes to the (e) Developmental arrest of isoformspecific skn-1 mutants exposed to mild proteasome disruption by rpn-10(RNAi) but not on control RNAi. Scale bar 1 mm. (f) Expression and localization of functional SKN-1A::GFP fusion protein after proteasome disruption by rpt-5(RNAi). Apparent GFP signal in control treated animals is background auto-fluorescence. Scale bar 10 mm. (g) No developmental arrest of skn-1 mutants carrying an isoform-specific skn-1a::gfp transgene, and exposed to mild proteasome disruption by rpn-10(RNAi). Scale bar 1 mm. (h) Table showing growth vs. arrest phenotypes of skn-1a mutants in in the presence of bortezomib. All skn-1a alleles are identical in their effect on skn-1a coding sequence (G2STOP). Experiments performed identically to those shown in nucleus, indicating PNG-1 acts downstream of release from the ER (Figure 3b). Lower levels of SKN-1A::GFP accumulate in the nuclei of png-1 mutants than in the wild type, but this mild effect is unlikely to fully account for the severely defective responses to proteasome inhibition in png-1 mutant animals, suggesting retention of glycosylation modifications normally removed by PNG-1 likely disrupts SKN-1A's nuclear function. These data suggest that activation of ER-associated and N-glycosylated SKN-1A is required for responses to proteasome dysfunction.
DDI-1 aspartic protease localizes to both nucleus and cytoplasm, and is upregulated upon proteasome disruption
To examine the expression and subcellular localization of the DDI-1 protease, we used miniMos to generate a single copy integrated transgene expressing full length DDI-1 fused to GFP at the N-terminus, under the control of the ddi-1 promoter. The GFP::DDI-1 fusion protein is expressed in most tissues and shows diffuse cytoplasmic and nuclear localization under control conditions, and can rescue a ddi-1 mutant (see below). Following disruption of proteasome function by rpt-5(RNAi), GFP:: DDI-1 expression is dramatically induced, and GFP::DDI-1 is enriched in nuclei (Figure 4a). We used CRISPR/Cas9 to modify the ddi-1 locus to incorporate an HA epitope tag near the N-terminus of endogenous DDI-1. Following bortezomib treatment of ddi-1(mg573[HA::ddi-1]) animals, we observed strong upregulation (greater than 10-fold, based on blotting of diluted samples) of the HA-tagged endogenous DDI-1 (Figure 4b). The ddi-1 promoter contains a SKN-1 binding site (Niu et al., 2011). Upregulation of GFP::DDI-1 by rpt-5(RNAi) is greatly reduced in skn-1a(mg570) ::GFP is only detected in wild-type animals upon bortezomib exposure; a major band at~70 kD and a minor band at~90 kD are detected. In ERAD defective mutants, the~90 kD band is strongly detected under all conditions, and the~70 kD band appears only following bortezomib treatment. Actin is used as a loading control. (b) Expression and localization of SKN-1A::GFP in wild type and sel-1 and sel-11 ERAD defective mutants after proteasome disruption by rpt-5(RNAi). In ERAD defective mutants, SKN-1A::GFP fails to localize to the nucleus. Scale bar 10 mm. (c) Expression and localization of SKN-1A::GFP in wild type and png-1 mutants after proteasome disruption by rpt-5(RNAi). In png-1 mutants, SKN-1A:: GFP is able to localize to the nucleus, although at reduced levels compared to the wild-type. Scale bar 10 mm. DOI: 10.7554/eLife.17721.008 mutants (Figure 4c), suggesting that DDI-1 upregulation is mostly mediated by SKN-1A. The remaining weaker ddi-1 upregulation in the skn-1a mutant may represent a second skn-1a-independent mechanism that couples DDI-1 levels to proteasome function.
DDI-1 is required for proteolytic cleavage of SKN-1A downstream of ER trafficking
The EMS-induced ddi-1 missense alleles that cause failure to activate rpt-3::gfp are clustered within the aspartic protease domain of DDI-1, and affect conserved residues that are thought to form the substrate-binding pocket of the enzyme (Figure 5a,b), suggesting that the protease activity of DDI-1 is required (Sirkis et al., 2006). We used CRISPR/Cas9 mutagenesis to generate a protease dead mutant containing two amino acid substitutions at conserved residues of the catalytic motif, including the aspartic acid residue that forms the active site (D261N, G263A). We additionally isolated a CRISPR-induced deletion that deletes most of the aspartic protease domain and introduces a frameshift, which we presume to be a null allele. Both mutations cause a similar, strong defect in rpt-3:: gfp activation by the pbs-5(mg502) mutant, or upon proteasome RNAi, and cause a similar sensitivity to bortezomib (Figure 5c, data not shown).
S. cervisiae Ddi1 contains an N-terminal ubiquitin-like (UBL) domain and a C-terminal ubiquitinassociated (UBA) domain, but these domains are not detected by standard protein sequence comparisons with C. elegans DDI-1. To address the possibility that UBL or UBA domains with highly divergent sequence may be present in DDI-1, we generated N-terminally truncated (DN), and C-terminally truncated (DC) gfp::ddi-1 transgenes. We tested their ability to rescue the bortezomib sensitivity phenotype of ddi-1(mg571) alongside wild-type gfp::ddi-1 and an aspartic protease active site (D261N) mutant. The active site mutation abolished rescue by the gfp::ddi-1 transgene, whereas the DN and DC truncated transgenes restored bortezomib sensitivity to near wild-type levels ( Figure 5- In animals lacking DDI-1, SKN-1A::GFP localizes at normal levels to the nucleus upon proteasome disruption by rpt-5(RNAi), suggesting that DDI-1 regulates SKN-1A function after nuclear localization of the transcription factor ( Figure 5d). We noticed that SKN-1A::GFP occasionally showed abnormal localization within gut nuclei of ddi-1 mutants, accumulating in highly fluorescent puncta. We observed this defect for both SKN-1A::GFP and SKN-1A[DDBD]::GFP, indicating that the DBD of SKN-1A is not required for this mis-localization ( Figure 5e).
As in the wild type, SKN-1A[DDBD]::GFP does not accumulate in the absence of proteasome disruption in ddi-1 mutants, indicating that the DDI-1 peptidase does not participate in constitutive degradation of SKN-1A by the proteasome (Figure 5f). SKN-1A[DDBD]::GFP accumulates to similar levels upon proteasome disruption by bortezomib in wild-type and ddi-1 mutants, but in ddi-1 mutants is~20 kD larger than in the wild type, and approximates the expected size of SKN-1A [DDBD]::GFP. To test whether these differences reflect DDI-1-dependent proteolytic processing of SKN-1A, we generated a transgene that expresses full length SKN-1A with an N-terminal HA tag and a C-terminal GFP tag (HA::SKN-1A::GFP). The expression, localization and rescue activity of the dually tagged fusion protein is indistinguishable from that of the full length SKN-1A::GFP transgene. In wild-type animals carrying the HA::SKN-1A::GFP transgene, Western blotting for the HA tag reveals a~20 kD band that accumulates specifically upon proteasome disruption by bortezomib treatment. In ddi-1 deletion or active site mutants, a~110 kD protein accumulates upon proteasome disruption, equivalent in size to full-length HA::SKN-1::GFP ( Figure 5g). As such, SKN-1A is cleaved at a position approximately 20 kD from the N-terminus, and the protease active site of DDI-1 is required for this cleavage.
Discussion
Our genetic screen for mutants that fail to activate SKN-1 and dissection of the isoform-specific role of SKN-1A reveals the molecular details of proteasome surveillance. We show that SKN-1A is an ER ::GFP in ddi-1 mutant animals, treated with either solvent control (DMSO) or 5 ug/ml bortezomib, and blotted for GFP. In the ddi-1 mutant animals, the major band detected is 30 kD larger than in the wild type. (g) Western blot showing expression and processing of HA::SKN-1A:GFP in ddi-1 mutants animals, treated with either solvent control (DMSO) or 5 ug/ml bortezomib, and blotted for HA. In the wild type, a~20 kD band is detected in animals exposed to bortezomib. In ddi-1 mutants this low molecular weight fragment is absent, and a~110 kD band is detected. In (f) and (g) ddi-1 mutations were ddi-1 (mg571) [deletion] or ddi-1(mg572)[active site] and actin is used as a loading control. DOI: 10.7554/eLife.17721.010 The following figure supplement is available for figure 5: associated protein that is normally targeted by the ERAD pathway for proteasomal degradation in the cytoplasm. Mutations affecting ERAD genes sel-1/SEL1/HRD3 and sel-11/HRD1, stabilize SKN-1A, but also disrupt localization of SKN-1A following proteasome disruption, due to a failure to efficiently release SKN-1A from the ER. Our data argues that following release from the ER, SKN-1A must be deglycosylated by PNG-1 and cleaved by DDI-1 to become fully active Figure 6-figure supplement 1.
The bortezomib sensitivity defect of png-1 mutants is similar in strength to skn-1a mutants, and both png-1 and skn-1a mutations are synthetic lethal with rpn-10(RNAi). These similarities suggest that SKN-1A activity may be completely abolished in the absence of PNG-1; likely as a result of failure to deglycosylate SKN-1A following its retrotranslocation from the ER, although we cannot rule out the possibility that deglycosylation of other proteins contributes indirectly to SKN-1A function. Surprisingly, we were unable to generate png-1; skn-1a double mutants, apparently due to lethality of double mutant embryos (NL unpublished). This indicates that in png-1 mutant animals, SKN-1A (likely in a glycosylated state) retains a function that is essential for development. This suggests that SKN-1A is not only important for proteasome homeostasis, but is also important for cellular homeostasis upon disruption of glycoprotein metabolism. Consistently, skn-1 mutants are hypersensitive to tunicamycin, an inhibitor of protein glycosylation (Glover-Cutter et al., 2013).
NGLY1, the human PNG-1 orthologue, plays important roles in human development; NGLY1 deficiency, a recently described genetic disorder of protein deglycosylation, is caused by mutations at the NGLY1 locus (Enns et al., 2014). Failure to deglycosylate Nrf1, and consequent defects in proteasome homeostasis, likely contributes to the symptoms associated with NGLY1 deficiency. As such our work identifies a pathway that may be targeted for treatment of NGLY1 deficiency -genetic screens for suppressors of the defective proteasome gene regulation of png-1 mutants could indicate targets for drug development.
The expression of DDI-1 is dramatically responsive to proteasome inhibition, indicating that its synthesis or stability is coupled to surveillance of proteasome dysfunction. ChIP analysis of SKN-1 shows binding at the promoter element of the DDI-1 operon (Niu et al., 2011), suggesting that DDI-1 upregulation may occur via SKN-1 mediated transcriptional regulation, and we observed a defect in the upregulation of GFP::DDI-1 following proteasome disruption in skn-1a mutants. This suggests a positive feedback loop, wherein SKN-1A upregulates DDI-1, and DDI-1 promotes SKN-1A activation. Positive feedback may ensure a timely and robust response to proteasome inhibition.
Mutation of ddi-1 causes defective regulation of rpt-3::gfp and increases sensitivity of C. elegans to proteasome inhibitors. SKN-1A is cleaved at a site~20 kD from the N-terminus, and ddi-1 is required for this cleavage. The requirement for its catalytic function strongly suggests that DDI-1 is the enzyme directly responsible for cleavage of SKN-1A, although we cannot rule out the possibility that DDI-1 acts upstream of an as-yet unidentified second protease. There are precedents for cascades of proteases, for example caspases in apoptosis, complement cascades in immunology, and thrombin cascades in blood clotting. Our genetic screen for proteasome surveillance defective mutants isolated six independent ddi-1 alleles, but as yet no alleles of any other genes that encode proteases. This argues that either DDI-1 is the only protease in the pathway, or that any other proteases function redundantly or have other essential functions.
Uncleaved SKN-1A localizes to the nucleus in ddi-1 mutants, so cleavage of SKN-1A is not essential for its nuclear localization, and SKN-1A cleavage may occur either in the nucleus or cytoplasm. Given that GFP::DDI-1 is largely (but not exclusively) nuclear under conditions of proteasome disruption, we speculate that DDI-1 cleavage of SKN-1A is nuclear. In either case, unusually for a membrane-associated transcription factor, SKN-1A is released from the ER by a mechanism that does not require proteolytic cleavage. DDI-1-dependent cleavage therefore activates SKN-1A by some other mechanism(s) downstream of ER release. Cleavage of SKN-1A may be required to remove domains in the N-terminus that interfere with its normal function in the nucleus. For example, retention of the hydrophobic transmembrane domain may be disruptive once the protein has been extracted from the ER membrane.
Mutant SKN-1A lacking the transmembrane domain is not subject to proteasomal degradation, and is not cleaved by DDI-1. So, in addition to serving to link SKN-1A levels to proteasome function via ERAD, ER trafficking of SKN-1A is important for subsequent DDI-1-dependent activation. The bortezomib sensitivity of skn-1a mutants (or skn-1a mutants carrying the transmembrane domainlacking transgene) is more severe than that of ddi-1 and ERAD mutants, so ER-association must also promote SKN-1A activation by additional mechanisms. As well as proteasome disruption, skn-1 is implicated in responses to several endocrine and environmental stimuli . Modifications such as glycosylation that SKN-1A acquires in the ER may tailor its activity to respond to proteasome dysfunction, identifying these modifications and how they are regulated will be of interest.
S. cerevisiae Ddi1 contains an N-terminal UBL domain and a C-terminal UBA domain, This domain architecture is typical of extraproteasomal ubiquitin receptors, which play a role in recruiting ubiquinated proteins to the proteasome (Tomko and Hochstrasser, 2013). S. cerevisiae Ddi1 binds to both ubiquitin and the proteasome, and participates in the degradation of some proteasome substrates (Bertolaet et al., 2001;Gomez et al., 2011;Kaplun et al., 2005;Nowicka et al., 2015), and synthetic genetic interactions with extraproteasomal ubiquitin receptor and proteasome subunit mutants supports a role for Ddi1 in proteasome function (Costanzo et al., 2010;Díaz-Martínez et al., 2006). Although the aspartic protease domain is highly conserved, DDI-1 in C. elegans and related nematodes apparently lacks both UBL and UBA domains, and the UBA domain is not found in mammalian Ddi1 orthologues, so it remains unclear whether Ddi1 orthologues function as extraproteasomal ubiquitin receptors in animals (Nowicka et al., 2015). The effect of ddi-1 on development upon proteasome disruption by bortezomib is entirely dependent on skn-1a, indicating that DDI-1 promotes resistance to proteasome disruption via SKN-1A, rather than a general effect on proteasome function. Regardless, it will be of interest to determine whether DDI-1 binds to proteasomes and/or ubiquitin, and whether this affects its function in SKN-1A activation.
Activation of the mammalian SKN-1 homologue Nrf1 involves both deglycosylation and proteolytic cleavage, but the enzymes responsible are not known (Radhakrishnan et al., 2014;Zhang et al., 2015). However, a large scale screen for gene inactivations that render cells more sensitive to proteasome inhibitors supports a model that human DDI1 protease also processes Nrf1: DDI2 (one of two human orthologues of DDI-1) and Nrf1 were highly ranked hits in this screen that identified hundreds of gene inactivations that increase sensitivity of multiple myeloma cells to proteasome inhibitors (Acosta-Alvear et al., 2015). This suggests that DDI2 is required to cleave and activate Nrf1 in human cells. The site at which Nrf1 is cleaved during proteasome dysfunction has been identified (Radhakrishnan et al., 2014), but the primary sequence of this site is not conserved in SKN-1A. Comparisons of SKN-1A with its nematode orthologues reveals conservation at positions consistent with the~20 kD cleavage product we have observed (NL unpublished). It is possible that DDI-1 and its substrate(s) have divergently co-evolved in different lineages. Thus, we suggest that DDI-1 and SKN-1A are core components of a conserved mechanism of proteasome surveillance in animals.
Here we have shown that correct post-translational processing of SKN-1A is essential for development if proteasome function is disrupted. Deregulated proteasome function a feature of aging and age-related disease (Saez and Vilchez, 2014;Taylor and Dillin, 2011). skn-1 is a critical genetic regulator of longevity, and controls lifespan in part through regulation of proteasome function Steinbaugh et al., 2015) As such, the SKN-1A processing pathway described here suggests the mechanism that links SKN-1/Nrf to proteasome function and longevity.
Proteasome inhibitors are important drugs in the treatment of multiple myeloma, but relapse and emergence of drug resistant tumors remains a challenge (Dou and Zonder, 2014). Nrf1 promotes survival of cancerous cells treated with proteasome inhibitors, and activation of this pathway might mediate resistance (Acosta-Alvear et al., 2015;Radhakrishnan et al., 2010;Steffen et al., 2010). Blocking the activation of proteasome subunit gene expression by Nrf1 has been proposed as a potential strategy to improve effectiveness of proteasome inhibitors in cancer treatment. The conserved SKN-1A/Nrf1 processing factors we have identified, particularly DDI-1, are ideal targets for such an approach.
Materials and methods
C. elegans maintenance and genetics C. elegans were maintained on standard media at 20˚C and fed E. coli OP50. A list of strains used in this study is provided in Supplementary Table 1. Mutagenesis was performed by treatment of L4 animals in 47 mM EMS for 4 hr at 20˚C. RNAi was performed as described in Kamath and Ahringer (2003). The mgIs72 [rpt-3::gfp] integrated transgene was generated from sEx15003 (Hunt-Newbury et al., 2007), using EMS mutagenesis to induce integration of the extrachromosomal array. Some strains were provided by the CGC, which is funded by NIH Office of Research Infrastructure Programs (P40 OD010440). sel-11(tm1743) was kindly provided by Shohei Mitani. png-1 (ok1654) was generated by the C. elegans Gene Knockout Project at the Oklahoma Medical Research Foundation, part of the International C. elegans Gene Knockout Consortium.
Identification of EMS induced mutations by whole genome sequencing
Genomic DNA was prepared using the Gentra Puregene Tissue kit (Qiagen, #158689) according to the manufacturer's instructions. Genomic DNA libraries were prepared using the NEBNext genomic DNA library construction kit (New England Biololabs, #E6040), and sequenced on a Illumina Hiseq instrument. Deep sequencing reads were analyzed using Cloudmap (Minevich et al., 2012).
Following deep sequencing analysis, a number of criteria were taken into account to identify the causative alleles, as shown in Supplemental file 1. In many cases, the causative alleles were strongly suggested by the identification of multiple independent alleles for a given gene. Even for those genes only identified by a single allele, the strong functional connection with other independently mutated genes suggests that they are causative (e.g. isolation of multiple alleles of the sel gene class). We also obtained genetic linkage data supporting these assignments for most alleles. For most of the mutants considered, deep sequencing was performed using a DNA from a pool of 20-50 mutant F2s generated by outcrossing the original mutant strain to the parental (non-mutagenised) background, which allowed us to use Cloudmap variant discovery mapping to identify the genetic linkage of the causative allele, or linkage was confirmed by testing linkage in crosses with strains carrying precisely mapped miniMos insertions. For ddi-1 and png-1, we confirmed that disruption by an independent means (with an independently derived allele) has the same effect on rpt-3::gfp expression as the EMS-induced mutation.
Identification of the pbs-5(mg502) mutation
The mg502 allele was isolated in an EMS mutagenesis screen in which mgIs72[rpt-3::gfp] animals were screened for recessive mutations causing constitutive activation of GFP expression. The mutation was identified as described above. The pbs-5(mg502) lesion is a 122bp deletion in the promoter of CEOP1752. This operon consists of K05C4.2 and pbs-5. Animals carrying this mutation show constitutive activation of rpt-3::gfp, but have normal growth and fertility under control conditions.
Genome modification by CRISPR/Cas9
Guide RNAs were selected by searching the desired genomic interval for 'NNNNNNNNNNNNNNNNNNRRNGG', using Ape DNA sequence editing software (http://biologylabs.utah.edu/jorgensen/wayned/ape/). All guide RNA constructs were generated by Q5 site directed mutagenesis as described (Dickinson et al., 2013). Repair template oligos were designed as described (Paix et al., 2014;Ward, 2015). Injections were performed using the editing of pha-1 (to restore e2123ts) or dpy-10 (to generate cn64 rollers) as phenotypic co-CRISPR markers (Arribere et al., 2014;Ward, 2015). Injection mixes contained 60 ng/ul each of the co-CRISPR and gene of interest targeting Guide RNA/Cas9 construct, and 50 ng/ul each of the co-CRISPR and gene of interest repair oligos. Guide RNA and homologous repair template sequences are listed in Supplemental file 1.
Transgenesis
Cloning was performed by isothermal/Gibson assembly (Gibson et al., 2009). All plasmids used for transgenesis are listed in Supplemental file 1. All miniMos constructs were assembled in pNL43, a modified version of pCFJ909 containing the pBluescript MCS, and are described in more detail below. MiniMos transgenic animals were isolated as described, using unc-119 rescue to select transformants (Frøkjaer-Jensen et al., 2014).
ddi-1 constructs
The genomic C01G5.6/ddi-1 coding sequence was fused in frame with GFP at the N-terminus. The gfp::ddi-1 fragment was inserted into pNL43 with the ddi-1 promoter (803 bp immediately upstream of the start codon), and the tbb-2 3'UTR. The gfp::ddi-1[D261N] construct was generated by site-directed mutagenesis, and the N-and C-terminal truncation constructs were generated by isothermal assembly using appropriate fragments of the ddi-1 genomic coding sequence.
Microscopy
Low magnification bright field and GFP fluorescence images (those showing larval growth and rpt-3:: gfp expression) were collected using a Zeiss AxioZoom V16, equipped with a Hammamatsu Orca flash 4.0 digital camera camera, and using Axiovision software. High magnification differential interference contrast (DIC) and GFP fluorescence images (those showing SKN-1A::GFP and GFP::DDI-1 expression) were collected using a Zeiss Axio Image Z1 microscope, equipped with a Zeiss AxioCam HRc digital camera, and using Axiovision software. Images were processed using ImageJ software. For all fluorescence images, any images shown within the same figure panel were collected together using the same exposure time and then processed identically in ImageJ.
Bortezomib sensitivity assays
Bortezomib sensitivity was assessed by the ability of L1 animals to develop in the presence of a range of concentrations of bortezomib (LC Laboratories, #B1408). The assays were carried out in liquid culture in ½ area 96 well plates (Corning, #3882). Each well contained a total volume of 35uL. We mixed~15 L1 larvae with concentrated E. coli OP50 suspended in S-basal (equivalent to bacteria from~200 uL of saturated LB culture), supplemented with 50 ug/ml Carbenicillin, and the desired concentration of bortezomib. All treatment conditions contained 0.01% DMSO. The plates were sealed with Breathe-Easy adhesive film (Diversified Biotech, #9123-6100). The liquid cultures were incubated for 4 days at 20˚C and then C. elegans growth was manually scored under a dissecting microscope. Growth was scored into three categories: (1) Normal -indistinguishable from wild type grown in DMSO control, most animals reached adulthood; (2) Delayed development -most animals are L3 or L4 larvae; (3) Larval arrest/lethal -all animals are L1 or L2 larvae. For each genotype all conditions were tested in at least 2 replicate experiments.
Western blot following bortezomib treatment
Drug treatments were performed in liquid culture in 6-well tissue culture plates. In each well we mixed C. elegans suspended in S-basal (~1000-2000 worms collected from a mixed stage culture grown at 20˚C on NGM agar plates) and E. coli OP50 in S-Basal (equivalent to E. coli from~4 mL saturated LB culture), supplemented with 50 ug/ml Carbenicillin, and the desired concentration of bortezomib, and made the mixture up to a final volume of 700 ul. All wells contained 0.01% DMSO. The tissue culture plates were sealed with Breathe-Easy adhesive film and incubated at 20˚C for 7-9 hr. After the treatment the animals were collected to 1.5 ml microcentrifuge tubes, washed twice in PBS to remove bacteria and the worm pellet was snap frozen in liquid nitrogen and stored at À80˚C. The worm pellet was briefly thawed on ice, mixed with an equal volume of 2x Sample buffer (20% glycerol, 120 mM Tris pH 6.8, 4% SDS, 0.1 mg/ml bromophenol blue, 5% beta mercaptoethanol), heated to 95˚C for 10 min, and centrifuged at 16,000g for 10 min to pellet debris. SDS-PAGE and western blotting was performed using NuPAGE apparatus, 4-12% polyacrylamide Bis-Tris pre-cast gels (Invitrogen, #NP0321) and nitrocellulose membranes (Invitrogen, #LC2000) according to the manufacturer's instructions. The following antibodies were used: mouse anti-GFP (Roche; #11814460001); HRP-conjugated mouse anti-HA (Roche, # 12013819001), mouse anti-Actin (Abcam; #3280).
Multiple alignment of protein sequences
Multiple alignment was performed using Clustal Omega (www.ebi.ac.uk/tools/clustalo). | 8,545.6 | 2016-05-12T00:00:00.000 | [
"Biology"
] |
Incorporating Community Partner Perspectives on eHealth Technology Data Sharing Practices for the California Early Psychosis Intervention Network: Qualitative Focus Group Study With a User-Centered Design Approach
Background Increased use of eHealth technology and user data to drive early identification and intervention algorithms in early psychosis (EP) necessitates the implementation of ethical data use practices to increase user acceptability and trust. Objective First, the study explored EP community partner perspectives on data sharing best practices, including beliefs, attitudes, and preferences for ethical data sharing and how best to present end-user license agreements (EULAs). Second, we present a test case of adopting a user-centered design approach to develop a EULA protocol consistent with community partner perspectives and priorities. Methods We conducted an exploratory, qualitative, and focus group–based study exploring mental health data sharing and privacy preferences among individuals involved in delivering or receiving EP care within the California Early Psychosis Intervention Network. Key themes were identified through a content analysis of focus group transcripts. Additionally, we conducted workshops using a user-centered design approach to develop a EULA that addresses participant priorities. Results In total, 24 participants took part in the study (14 EP providers, 6 clients, and 4 family members). Participants reported being receptive to data sharing despite being acutely aware of widespread third-party sharing across digital domains, the risk of breaches, and motives hidden in the legal language of EULAs. Consequently, they reported feeling a loss of control and a lack of protection over their data. Participants indicated these concerns could be mitigated through user-level control for data sharing with third parties and an understandable, transparent EULA, including multiple presentation modalities, text at no more than an eighth-grade reading level, and a clear definition of key terms. These findings were successfully integrated into the development of a EULA and data opt-in process that resulted in 88.1% (421/478) of clients who reviewed the video agreeing to share data. Conclusions Many of the factors considered pertinent to informing data sharing practices in a mental health setting are consistent among clients, family members, and providers delivering or receiving EP care. These community partners’ priorities can be successfully incorporated into developing EULA practices that can lead to high voluntary data sharing rates.
Introduction
The past decade has seen a rapid expansion in the availability of eHealth technology (eg, smartphone and tablet applications and web-based portals) to support individuals with psychosis [1].Individuals with psychosis are willing and interested in using eHealth technology as part of their care [2][3][4][5].eHealth tools promote treatment engagement [6], symptom monitoring [7,8], relapse prediction [9] and enhance quality of life [10] and functioning [11].Consequently, industry developers and academics are racing to implement eHealth technology at scale to improve outcomes for those experiencing serious mental illness.
As eHealth technology advances and we leverage user data to drive early identification and intervention algorithms [12], it is imperative that we implement ethical data use standards.Typical software has long end-user license agreements (EULAs) replete with legal jargon detailing the myriad ways user data are used and shared [13] with little or no user control.Therefore, users frequently report that they rarely read the EULA and may not understand what they are agreeing to [14,15].Such concerns have led some to question whether the EULA should be considered an effective tool for informed consent, with concerns that the agreement typically serves to protect the company but not the user [16].As a result, technology users may unknowingly have their data shared or sold to third parties, sometimes without encryption, rendering data vulnerable to privacy breaches [13,[17][18][19][20][21].These issues may be particularly relevant in psychosis, given that cognitive impairments associated with psychotic disorder could impact EULA comprehension-data breaches of sensitive and highly stigmatized psychosis diagnoses could be especially harmful.
Users have varied attitudes about risk: some report skepticism of eHealth data [13,16]; others feel cognitive dissonance around risks as a reality of using digital platforms, especially those that are "free" in return for data use [19,22,23].However, health data are personal and private-researchers, providers, and industry partners alike have a duty to protect vulnerable individuals from data misuse.Moreover, an outcomes-driven health care system (an agreed goal in the health care industry [24]) relies on large, interagency data sharing.To do this, we must implement ethical data use practices to increase user acceptability and trust in eHealth platforms.
One such effort to build an outcomes-driven health care system is the California Early Psychosis Intervention Network (EPI-CAL).EPI-CAL is a multiyear project that connects early psychosis (EP) programs across California through an eHealth application, Beehive, in a learning health care network [25].Beehive facilitates client-, family-, and clinic-level outcomes data collection as part of regular care across EP programs using a battery of validated measures.Adopting a learning health care network approach to psychosis care has the potential to support innovation, improve efficiency, and improve care delivery and outcomes [26].EPI-CAL's design relies on clients with EP "choosing" to share their data for analysis outside of standard clinical care by agreeing to a EULA that allows the software to be used to collect, transfer, and present client data.To create an adequate EULA in this setting, previous research suggests that EULAs should be relevant and understandable [27], use video explanations [28,29], set the reading level to sixth to eighth grade [27,30], include comprehension checks [31,32], offer explicit "opt-in" selections [16,30,33,34], and include options to request ending data collection or delete data entirely [30].Unfortunately, such proposals are rarely implemented in practice [35], and therefore, our team sought to elicit feedback from relevant community partners to inform the design of a EULA that incorporates best practices for informed data sharing in an EP setting.
In the first phase of the study, the aim was to explore family members, clients, and EP care providers' beliefs, attitudes, and perspectives on ethical data sharing in EP settings.These findings were then used to develop a EULA for our eHealth data collection platform, appropriate for use in an EP treatment setting.In the second phase, we presented our EULA materials to family members, clients, and EP care providers with the aim of understanding (1) to what extent these materials addressed their concerns and priorities and ( 2) what features could be amended to better meet the goal of developing an accessible, transparent, and flexible EULA.Therefore, the first phase serves to explore generalizable principles of ethical data sharing practices relevant to an EP setting.The second phase represents a case example of using a user-centered design approach to developing eHealth data sharing practices [10,36,37], informed by the perspectives of participants provided during phase 1.
Design
We used a two-phase approach: (1) an exploratory, qualitative, and focus group-based study design to explore participants' mental health data sharing and (2) a privacy preferences and a user-centered design workshop design to evaluate implementation of the perspectives shared by participants in the first phase of the study.We used the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist to guide the design and implementation of the study [38] (Multimedia Appendix 1).
Recruitment
We recruited participants from three EP community partner groups: (1) clinical staff and providers, (2) clients, and (3) family members of clients.Eligible participants were (1) actively or formerly affiliated with an EPI-CAL EP clinic, (2) English-speaking, and (3) able to provide written informed consent and assent (minors).EP provider participants were recruited through research team contact with the team lead of the 12 active EPI-CAL EP programs, asking if at least 1 provider or staff could participate.We used this approach to ensure a maximal number of EPI-CAL programs were represented and to minimize overrepresentation from a small number of clinics.Client and family participants were invited either through clinician referral or by the research team directly contacting individuals who had previously given permission to be contacted for future research opportunities.
Data Collection and Analysis
The development of the phase 1 focus group interview guide was grounded in (1) the authors' previous clinical and research experience implementing eHealth in EP care [7,8], (2) the authors' own questions regarding how to best inform individuals about how their data would be used in clinical care and research as part of the impending implementation of Beehive within EPI-CAL, and (3) a brief review of the relevant literature [16,21,39].The developed focus group guide extends the work of Shen et al [21], who created an interview guide to assess the privacy and data sharing experiences and perspectives of individuals with mood, anxiety, and substance use issues.Additionally, our guide incorporates ideas from Stopczynski [39], who suggested that best practice should emphasize the end user over the research, allowing the "end user" to feel empowered to exercise control over their data.Some specific user-centered design elements include having data sharing access options, having the ability to change one's mind, using simple language, and understanding content through multimedia inputs.Finally, the work of Torous et al [16] was incorporated, which recommends the involvement of community partners from the beginning of any eHealth application development, ensuring the inclusion of EULA comprehension checks and including explicit agreement sharing options.
The phase 1 focus group guide (Multimedia Appendix 2) began with defining key concepts relevant to sharing and using health information collected through an eHealth platform, including privacy, confidentiality, and the distinction between deidentified and anonymous information.The remaining questions prompted participants to share their understanding and perspectives on (1) data sharing, (2) changing sharing options, and (3) sharing different types of data (eg, identifiable vs deidentified) at different levels (eg, individual-and group-levels).Descriptive ice-breaker questions (Multimedia Appendix 3) were administered as a poll at points throughout the group to generate discussion, allow private reflection, and increase engagement.
During phase 1, we conducted three 90-minute focus groups, including 1 client, 1 family member, and 1 provider group.These focus groups were conducted during August 2020 through videoconferencing to comply with COVID-19 restrictions at the time.Each group included a facilitator (LMT or SE), cofacilitator (SE or KEN), and note taker (KEN or CKH).There were no other individuals present other than researchers and participants.The positionality of each researcher is detailed in Table 1.Each group began with the introduction of the research team, including their occupation and the role they would have in the focus group.After each group, the research team met to discuss any salient points and preliminary themes.These reflections were used to refine the focus group guide before conducting a subsequent group.Each group was audio recorded.Upon the completion of each phase, these recordings were transcribed, cleaned, and hand-coded using directed content analysis [40].In this approach, the coding team (KEN, SE, VLT, and LMT) first reviewed the transcripts, highlighting identified ethical data sharing themes.Next, the coding team developed a preliminary coding framework based on the examined text, informed by preexisting literature concerning ethical behavioral health data sharing principles [16,21,39].Next, 2 authors (KEN and SE) independently coded each transcript using the developed coding framework, compared their responses, and resolved any disagreements through discussion.Where appropriate, this coding framework was iteratively revised as new codes emerged.From these codes, a set of categories was developed, and then major and minor themes were established.All analysis was conducted using NVivo qualitative analysis software (QSR International).
In phase 2, using the findings from the phase 1 focus group, the research team created an informational whiteboard Beehive EULA video (Multimedia Appendix 4) explaining data sharing in the application, the choices that each user would have to share their data for research, and a visualized Beehive data sharing screen, which presented opt-in choices of data sharing levels to users after watching the EULA video.Next, the guide for the phase 2 workshop (Multimedia Appendix 5) was developed; it focused on reviewing the developed materials and eliciting feedback on the approach, the user interface, and the information presented.In the workshops, all participants watched the EULA video twice before reviewing the opt-in data sharing screen.
RenderX
The phase 2 workshop transcripts were coded by 2 authors (KEN and VLT) and analyzed using an approach consistent with the phase 1 focus groups.Once the research team completed a preliminary draft of the coding framework, participants were contacted 1 final time and emailed the major and minor themes, supported by key quotations, from their research participation activities.Participants could provide feedback through a survey (Multimedia Appendix 6) or through videoconference discussion with researchers (KEN, SE, and VLT).This feedback then informed the structure of the coding framework.Once analysis was completed, based on the data, a series of modifications were made to both the EULA video and the user interface for the data sharing screen.
Ethical Considerations
The institutional review board of the University of California, Davis, approved the study (1403828-21, California Collaborative Network to Promote Data-Driven Care and Improve Outcomes in Early Psychosis [CORE]).Additionally, several of the EP program participating counties and universities in EPI-CAL required a separate review of the project by their institutional review board, which provided their approval.All study participants provided written informed consent and assent (as appropriate).Participants received US $30 compensation for each focus group (they could participate in both).
Participants
At least 1 provider participant from 12 EPI-CAL programs participated in the study.The clinical roles of these participants included clinicians, case managers, supported employment and education specialists, clinic coordinators, clinical supervisors, and program directors.These roles are not specified with quotations in order to protect the identities of participants.
Regarding client and family recruitment, 30 individuals were contacted directly by the research team.An unknown number of clients and family members were introduced to the study by their respective providers in the 12 EPI-CAL programs.Of all the clients and family members introduced to the study, 10 (6 clients and 4 family members) agreed to participate.Of the 20 who were directly contacted by the research team and did not participate, most (n=12, 60%) did not respond to recruitment attempts; a few (n=3, 15%) stated they were not available; and 5, who initially agreed to participate, ultimately did not attend the research activity.Therefore, the final sample included 24 participants (14 providers, 6 clients, and 4 family members).Participant demographics are presented in Table 2.
Following the completion of the preliminary coding framework, attempts to contact all participants were made, and 8 participants in total (3 clients, 4 providers, and 1 family member) agreed to provide feedback: 6 through a survey and 2 through a videoconference.Overall, participants agreed with the identified themes, and as a result, no significant changes were made to the coding frameworks.Some researchers had existing professional relationships with some participants due to previous research or contact at EPI-CAL focus groups.50) 0 (0) 3 ( 13) Other a EP: early psychosis.
b Participants can select more than 1 race; therefore, percentages might not sum to 100.
c Some participants changed their responses to this question between group 1 and group 2.
d Possible responses for sexual orientation that were not endorsed by any participants were "gay or lesbian," "bisexual," and "asexual."
Overview
In the phase 1 focus groups, participants started by providing their perspectives on sharing their mental health data and factors that would affect their comfort with sharing.Overall, clients and family members reported feeling comfortable with sharing mental health data in a clinical setting.While we presumed mental health data to be more sensitive and thus have distinct considerations for sharing, many participants considered mental health data equivalent to physical health data; instead, they were more concerned with sharing personal information overall.Indeed, participants appeared to be very mindful of potential risks concerning data sharing.
I don't have any distinction. I'm very open about my mental health as well as my physical. [Client 3, group 2]
I feel like no data is safe.Once you release it onto the internet especially because of all the articles saying that there was a breach with this site, and they have your credit card information.[Provider 1, group 1] Participants indicated that multiple factors informed their decision-making process with regard to mental health data sharing.While some were specific as to what could be addressed by a EULA, it was notable that many other considerations that were nonspecific to the EULA process were also highlighted.A summary of these EULA-specific and more general factors is discussed below.Additional quotes supporting the main themes are presented in Multimedia Appendix 7.
Overview
Factors that informed decision-making regarding data sharing that could be specifically addressed by a EULA and subsequent data sharing practices corresponded to four broad themes: (1) the importance of the EULA providing the necessary information required to make an informed decision and transparency around when and how the data will be used; (2) the degree to which clients have control and agency over the data they provide; (3) the degree to which appropriate data security practices are implemented and an explanation of how security would be maintained; and (4) clearly defined benefits derived from the sharing of personal data.A summary of each theme is presented below.
Transparency and Provision of Relevant Information
Transparency was considered foundational in participants' data sharing calculus-paramount to this was knowing what, when, with whom, how, and why data are shared, including the disclosure of conflicts of interest and using layperson's and culturally appropriate terms.The opportunity to review research results was 1 example of transparency that improved participants' understanding of how data are used.Clinic participants suggested explaining current data protection laws may increase willingness to share data.
Control and Agency of Data
Participants emphasized the importance of having control over their data, including sharing the minimum data necessary, restricting access, having access to the data themselves, having the ability to change one's mind to facilitate no regrets (including being able to opt-in later), and deleting data to give peace of mind.All participants noted that the limitations of deleting deidentified data should be clear, especially if data have been shared with outside parties.
I think [the ability to delete your data] is a fairly important option. If at the very least for the peace of mind it can give. [Client 3, group 2]
There's so many protections on my information that even I can't access it, which I find really ridiculous... Why would I want you to share that information to other people if you won't even share it to me? [Client 4, group 2]
Data Security and Protections
Individuals want to know that the institution or entity to which they are entrusting their data is competent in upholding legal protections and that their information is protected and not sold to third parties.Clients emphasized that extra protections should be in place when individuals are in a vulnerable state (eg, a mental health crisis).Participants noted that clarity on the data only being presented in the aggregate was also important.It was notable that clients and family members were aware of at least some of the existing laws concerning data sharing, including that the Health Insurance Portability and Accountability Act (HIPAA) protects against the improper sharing of medical information.
Clarity Regarding Potential Benefits of Data Sharing
Clients, providers, and family participants all highlighted that a clear explanation of the benefits of data collection is an important consideration in agreeing to share data.Some focused on the personal benefits of data collection, such as supporting continuity of care or having data integrated into care delivery.However, others also highlighted the value of knowing how the data can support program sustainability and advance the field of EP care more broadly.This concept highlights a need for those collecting data to clearly define the benefits for users-for those who are providing their data-and those benefits should be clearly communicated or accessible before using that data.
Previous Data Sharing Experiences
Previous experience, both positive and negative, influenced understanding and willingness to share data.Participants' past experiences of data being held securely and appropriately increased comfort in sharing data in the future.Conversely, experiences where data were shared without their knowledge or ability to control it resulted in individuals feeling less comfortable about data sharing in the future.This underscores the importance of integrity in the use of data and how unethical practices can lead to a diminished willingness to share data in the future.
Rapport Developed With Clinical Program
When researchers cannot be in direct contact with participants, they rely on established rapport between client and clinic staff, as staff are often the individuals who relay information about research opportunities.One clinician stated that "understanding what the purpose of the research is and how it's helpful" can be a conduit for transparency.A clinical research coordinator noted that rapport alone is insufficient; clinicians must be able to explain the study.
I think rapport with our patients is really important...I think there was something about the rapport building up front from the phone line to actually consenting that was much more comfortable compared to just someone new coming in and explaining the consent that they had never had contact with or any relationship with prior.[Provider 3, group 1]
Development of EULA Materials
After completing the phase 1 focus groups, we (1) developed a whiteboard-style informational EULA video and (2) designed the user interface in Beehive on which users review the text of the EULA and make decisions about how they want their data to be used.This happened concurrently with the coding of phase 1 groups, with themes from these groups informing the development of these EULA materials.
While it was notable that multiple factors distinct from the EULA were considered important to decision-making regarding data sharing, issues concerning transparency, data protection and security, potential benefits, and control were considered important and something that could be specifically addressed by a EULA.In response to these findings, our informational video and text EULA were designed to include information in plain language regarding the purpose of data collection, the funders sponsoring the project, the entities who would have access to data and at what levels (identified vs deidentified), and how their data were secured and stored.We also provided information about how their participation in this project and sharing their data could benefit them and the population with EP in California more generally.Both formats of the EULA included information regarding opting into and changing data sharing permissions (ie, "control").A detailed summary of how these themes were incorporated into the development of the EULA materials is presented in Table 3.
Participant Perspectives on How the EULA Addresses Issues Related to Transparency, Control, Data Protection, and Potential Benefits of Data Sharing
Following the preliminary development of the EULA materials, we conducted user-centered workshops with the aim of soliciting feedback on the materials and focusing on potential areas for improvement.During these workshops, we presented Beehive EULA materials to participants through a whiteboard video and the application's user interface, where users could indicate their data sharing choices.
Overall, the feedback from the participants was positive.Most considered the EULA to be highly transparent, although some clinicians were concerned with the relevance of particular visualizations, while a client participant suggested the term "deletion" of data may be misleading in this context.Others appreciated how the EULA provided agency and control back to the client, which is particularly important in this setting, given that individuals with psychosis can frequently feel that their agency is being taken away.Others reported that a key takeaway message from the EULA video was that they felt their data were secure, which was considered an important factor in agreeing to data sharing.Finally, feedback regarding the benefits of data collection was somewhat mixed.Some participants appreciated the fact that the EULA made clear how these data linked to the larger EPI-CAL research project centered on improving and evaluating outcomes.On the other hand, others were less clear on how data collection may lead to localized benefits, which raised concerns about the utility of the data being requested.
They feel like they don't have a lot of self-control over things, or even their life, and this gives them control over at least this portion of it.And asking the questions beforehand to get permission before you put in any data, I think is an awesome idea.Based on the feedback from participants during the phase 2 workshops, a series of modifications were made to the EULA.
Examples include clarifying the research team's access to
XSL • FO
RenderX deidentified data for quality management purposes, highlighting potential benefits to clients, further simplifying the text, and slowing the rate of speech.Additionally, we updated the user interface by changing the "opt-in" data sharing choices to a forced response (yes or no) regarding data sharing (Figures 1 and 2).
Figure 1.The Beehive end-user license agreement screen as presented in phase 2 focus groups was designed with feedback from phase 1 focus groups.
Item "a" was from client input (phase 2-impact on transparency), item "b" was from client input (phase 2-impact on transparency), and item "c" was from support person input (phase 2-impact on transparency).
Discussion
This study explored EP community partner perspectives on ethical data sharing practices and what impacted their willingness to share data on eHealth platforms.Then, using these data, we developed a user-centered, accessible, transparent, and flexible EULA that aimed to incorporate EP community partner priorities.In the second phase of this study, we piloted the newly developed EULA materials with EP community partners in a user-centered design workshop format to evaluate if our EULA approach addressed the most critical elements needed for ethical data sharing practices.Community partners expressed overall positive attitudes toward the EULA materials and reported that the EULA would likely increase EP program participants willingness to engage in data sharing if they were using Beehive.This theoretical engagement with Beehive mentioned by participants is supported in practice by the high proportion of clients that have agreed to share their data after reviewing the Beehive EULA as part of their regular care.These findings, therefore, present 1 possible ethical framework for eHealth platforms adopting user-centered approaches.eHealth platforms developed with ethical data sharing practices can address client and family member priorities, which can also lead to a high proportion of clients with EP agreeing to share data.
In the focus group phase of the study, we elicited feedback from participants around sharing and using health information collected through an eHealth platform.We found major themes centered on data sharing practices that could be addressed by a well-designed EULA, as well as factors that were related to data sharing practices more generally.Regarding EULA relevant factors that would increase willingness to share data, four main findings emerged: focus group participants endorsed the core themes of (1) transparency, (2) data protections and limitations, (3) control and agency over the use of their data, and (4) clarity around the potential benefits of data sharing.Factors that influenced decisions around data sharing that could not be addressed by a EULA included past experiences with data sharing and rapport developed with clinical service providers facilitating data collection activities.These findings build on previous research highlighting a range of privacy-adjacent concerns [22,27,29,33], including transparency [27,41], relevancy [27], user-level control [41,42], and comprehension [13,20,43].This demonstrates users' desire to know the "what," "when," "how," "why," and "with whom" to make informed data sharing decisions.eHealth platforms need to equip users with enough information in their EULAs to objectively assess the benefits and risks of sharing their sensitive personal information.
EULAs typically have low readership [44], and profit-oriented applications aim to collect massive amounts of data [22].Thus, there are minimal, if any, safeguards in place for vulnerable individuals.Even when deidentified data are used, they are often exempt from regulatory review [45].As such, the EULA does not parallel the clinical or research-informed consent framework, and there is much that can be applied regarding the ethical use of eHealth technology.Though informed consent is required to cover aspects of privacy, risks, and ethical use of data, it still falls short in similar ways to the typical EULA, such as falling into the trap of long, technical, and difficult-to-understand language (ie, above recommended reading levels), and often requiring supplemental scripts describing the process in more granular steps, using plain layperson's terms, and requiring comprehension checks [20,28,31,32,34,[46][47][48], though this has historically not been a standardized process [49].The goal of this project was to respond to previous EULA and consent framework limitations and address the concerns that users had.These closely aligned with the themes of transparency and comprehension, protections, control, and explanation of potential benefits observed in our focus groups.
Our results demonstrate the value of partnering with community members to develop eHealth technology and related EULA materials.Participants' wide range of experiences and perspectives emphasized their desire for control and protection over their data.Workshop participants upheld the importance of allowing users to change their data sharing preferences at any time; they viewed such a feature as a way to support vulnerable individuals who may wish to modify data sharing decisions they made during times of sedation from psychotropic medications, for example.Similarly, participants highlighted the impact of trust and rapport between client and provider on data sharing decisions; they suggested that providers review the EULA video with clients and families to answer questions and provide encouragement and assurance as they consider their data sharing options.This indicates that person-to-person discussion of the EULA also impacts comprehension, comfort using eHealth technology, and whether the user chooses to share their data.By centering the voices of users, we gained valuable insight into how best to balance user control over data and researchers' need for data.The potential benefits of adopting a user-centered design approach to EULA development are reflected in the high proportion of clients that agreed to share data following completion of the process (421/478, 88/1%).This is noteworthy, given it has been argued that the length and complexity of EULAs have been used as an obfuscation strategy to increase the likelihood that people agree to terms that benefit those that receive materials [35,50].However, our findings are consistent with previous research, suggesting clearer EULAs can lead to a greater number of consumers reading and understanding the terms, which can in turn increase the likelihood they accept them [51].
This study has significant strengths, including centering community partner perspectives, using a multiphase approach to incorporate participant feedback, and developing actionable steps to ensure ethical data sharing in eHealth technology.Limitations include the possibility of bias inherent to qualitative methods: facilitator age, social status, race, and participant involvement in the development of the EULA materials reviewed could bias their responses.Participants may have felt pressure to please facilitators (social desirability bias) and may have limited contributions due to discomfort (sensitivity bias).Another important limitation to note is the relatively small sample size, particularly in the client and family subgroups, which limited the ability to make subgroup comparisons.However, among the subgroups, the findings appeared broadly consistent, mitigating this as an issue.While there was high consistency at the participant level, indicating saturation, this may be partly attributable to group dynamics; data from additional focus groups would be informative, including from more diverse service users and their families with different language preferences and needs.Future work is already underway to include collaborating with partners who speak languages other than English to determine the best approaches for translating EULA materials in a culturally accessible and linguistically appropriate manner.
Limitations were minimized where possible: to lessen dominant respondent bias, facilitators promoted fewer vocal participants; to avoid reference bias, questions were ordered logically, minimizing swaying participants' perspectives; to mitigate social desirability bias and sensitivity bias, facilitators positioned participants as the experts in their experiences and encouraged them to provide honest feedback and frame negative feedback as crucial to addressing potential issues; and to minimize reporting bias, we used codebooks, multiple coders, and participant feedback before finalizing themes.COVID- In a period of rapid expansion of eHealth technology availability, the contrast between community partners wishes for transparent, accessible data sharing agreements and the convention of EULAs being complex, convoluted, and centered on the needs of the developer presents a significant issue in the field.This study highlights the value of using community-informed research to identify community partners' needs, values, and priorities around data sharing.Furthermore, when needs and values are incorporated into the EULA design process, this study demonstrates that the approach can lead to high rates of data sharing.This suggests that adopting a more ethical approach to data sharing can have the dual benefit of addressing community partner needs while simultaneously supporting researchers' efforts to collect eHealth data.
[Parent 2 , group 6 ][
The message I came away with was] That my health information would be protected.[Provider4, group 4]
Figure 2 .
Figure 2. The Beehive end-user license agreement screen was updated based on feedback from phase 2 focus groups.Item "a" was from early psychosis team input (phase 2-impact on transparency).
Table 1 .
Positionality of the research team that conducted groups and analyzed the qualitative transcripts.
Table 2 .
Demographic and clinical characteristics of participants.
just feel like I should be able to know who's accessing what, when, and why.You know?[Parent 1, group 3] I need to know what is the formula [to deidentify data] like.You've described it to me, but that doesn't give me the confidence to really give you a thumbs up.[Parent 4, group 3]
Table 3 .
Implementation of phase 1 themes into the Beehive end-user license agreement (EULA) video.
a EPI-CAL: California Early Psychosis Intervention Network.bNot addressed in the user interface.
19 logistical barriers likely impacted provider recruitment among consumers.Relatedly, COVID-19 safety precautions necessitated videoconference meetings, excluding participants without adequate internet access or electronic devices and those uncomfortable with internet-based participation.Although cross-clinic videoconferencing likely increased the breadth of voices included in the discussion, this selection bias may be particularly relevant given the technology-oriented subject matter.Future research should examine eHealth technology and data sharing attitudes with individuals with low comfort with technology and who prefer in-person participation. | 7,768.6 | 2022-11-10T00:00:00.000 | [
"Medicine",
"Psychology",
"Computer Science"
] |
A Physical Layer for Low Power Optical Wireless Communications
Energy consumption is one of the critical issues in optical wireless communications transmitter design and a limiting factor to miniaturization and deployment in mobile devices. In order to reduce energy requirements, we assess a physical layer based on high-bandwidth on-off keying modulation. The use of on-off keying allows for highly efficient transmitter frontend designs that avoid operation of amplifier stages in a resistive mode, which has the potential of reducing their energy usage by an order of magnitude. Link-level simulations show that this physical layer can deal with typical frontend limitations and can operate in challenging non-line-of-sight channels. For these reasons, we believe that the solution evaluated here can deliver a significant contribution to optical wireless communications technology in a wide range of use cases.
I. INTRODUCTION
M OST of the existing literature on the physical layer (PHY) for optical wireless communications (OWC) considers downlink transmission from the lighting infrastructure to the mobile device, focusing on high data rates achieved through spectrally efficient modulation. However, systems designed for the Internet of Things (IoT), for example in industrial use cases, need moderate data rates and satisfactory link reliability while operating on battery power. Therefore, a low power design is needed, especially for the uplink. A viable approach to realize link reliability is to use wide-beam transmitters and large-field-of-view receivers that create overlapping areas of service, so that a mobile device always can be reached through multiple access points and a single blocked line of sight does not lead to a link interruption [1], [2].
However, such wide beams cause a distribution of optical power over a larger area, reducing the signal power and with that the signal-to-noise ratio (SNR) at the receiver. Thus, an energy efficient system design is required that is able to work at low SNR levels.
Early investigations of OWC have already considered pulse based and single-carrier modulation techniques, such as variants of pulse position modulation (PPM), on-off keying (OOK), and phase shift keying (PSK) [3]- [5]. Later research then moved on to multi-carrier modulation and has focused on the development of orthogonal frequencydivision multiplexing (OFDM)-based concepts, often combined with adaptive bit loading [6]- [10]. OFDM schemes provide some significant advantages, such as high robustness against multipath dispersion, high spectral efficiency, and the ability to adapt to varying channel conditions. Notwithstanding that, with the goal of meeting the demand for low-power IoT solutions, we return to simpler modulation formats and evaluate in this article a complete PHY based on OOK. While OOK enables a near-optimal utilization of available optical power, the core reason for this choice is in the simplified transmitter frontend design, where energy savings are expected to be significant enough to justify the much-increased need for bandwidth due to the low spectral efficiency. This choice will be justified in Section II, starting by pointing out energy efficiency issues of OFDM in OWC and then showing how OOK enables substantial energy savings in transmitter frontends.
The main contribution of this work is to evaluate the complete design of a PHY built on OOK, named the Pulsed Modulation PHY (PM-PHY). This includes assessments of frame synchronization, header and payload detection, and a detailed analysis of the PHY's capability to deal with different channel conditions due to multipath distortion and common analog frontend impairments. These findings stem from two previous publications: The evaluations of the PM-PHY under different channel conditions served as a basis for its definition in IEEE P802.15.13 standardization and were previously presented in [11]. The results considering analog frontend impairments were previously published in [12]. Here we are combining these results into a comprehensive evaluation of the PM-PHY and give a classification of the findings with respect to requirements of future system designs.
The remainder of this article is structured as follows: The expected energy savings through the use of OOK are deduced in Section II. Section III introduces the concepts of the PM-PHY in detail. Models and methods used for the performance evaluation are described in Section IV. Section V discusses simulation results and proposes an efficient parametrization for the PM-PHY, and conclusions are drawn in Section VI.
II. ENERGY SAVINGS IN TRANSMITTER FRONTENDS
As stated in Section I, the main energy savings expected from using OOK are achieved in the transmitter frontends, especially in comparison with OFDM. In this section, a simple model of a transmitter frontend is used to quantify these savings.
A challenge for applying OFDM in OWC is that waveforms need to be purely non-negative. This is commonly realized through so-called DC biased optical OFDM (DCO-OFDM), which simply adds a constant bias to the bipolar OFDM signal. While this preserves all the mentioned advantages of OFDM, it introduces a penalty in energy efficiency. In the case of 4-QAM modulated subcarriers, this amounts to 6-7 dB [13], for example. Another issue of OFDM is its high peak-to-average power ratio (PAPR), which increases the required dynamic range for transmitter amplifiers. It also means that a higher peak transmitter power is needed to achieve a certain average power, and accordingly SNR, at the receiver.
Some modifications have been introduced to OFDM, mitigating the need for a constant bias and improving its power efficiency: asymmetrically clipped optical OFDM (ACO-OFDM) [14] and unipolar OFDM (U-OFDM) [15] remove or invert and interleave all negative waveform components at the cost of halving the spectral efficiency. Layered ACO-OFDM (LACO-OFDM) [16] and enhanced U-OFDM (eU-OFDM) [13] build on those concepts and introduce symbol repetitions and addition of multiple signal layers, respectively, to restore the lost spectral efficiency while preserving good power efficiency. However, these layered modulation formats increase receiver complexity and work best at high SNR, which also limits their deployment in networks with a demand for low energy consumption.
A. Linear Transmitter for OFDM Signals
A core issue regarding power efficiency in OWC systems is in the design of linear physical transmitters, especially when using DCO-OFDM. In order to better approach this issue, we regard a simplified transmitter frontend model, shown in Fig. 1a. It consists only of a light-emitting diode (LED) and a transistor, where the LED is connected to the drain connection of the transistor and a signal source is connected to the gate. A supply voltage V + is applied between the upper port of the LED and the source port of the transistor is connected to the ground. In this model, the transistor is used as an amplifier modulating the LED through its drain-source current I DS , which changes in dependence on the gate-source voltage V GS , here equal to the signal voltage V sig . Fig. 1b shows the output characteristics for an exemplary metal oxide semiconductor field-effect transistor (MOSFET) used in OWC frontends, driven in the ohmic region at a drain-source voltage V DS = 10 V. Notably, there is a threshold for V GS , below which no current flows through the drain-source connection, and accordingly the LED. This threshold is here around 3.5 V. The region below it is defined as the "cutoff region".
In order to transmit an OFDM waveform, for which Fig. 1c shows an example, a biasing of V GS is mandatory, in order to exceed the threshold for the cutoff region at all signal levels and achieve a satisfactory linearity. This bias (V bias,OFDM in Figs. 1b and 1c) leads to a constant current flow through the LED and the transistor's drain-source path and thereby to a constant power consumption in the OWC frontend. Note that in this state power is converted both in the LED and in the transistor, as the transistor is not fully switched on and acts as an ohmic load itself. In this mode, the used power is described by (1). Since the MOSFET acts as a resistor that limits the current through the LED at I DS = I bias,OFDM , the voltage V + is relatively constant and the power consumed by the frontend in Fig. 1a, P OFDM , is given by: Only the contribution caused by the biasing is considered here. Signal variations are not regarded, as most of the signal power is closely distributed around the average (in this case, the bias), assuming OFDM waveforms with high PAPR. A V bias,OFDM = 4.5 V, which is 1 V above the cutoff threshold, leads to a drain-to source current I bias,OFDM = 0.5 A. With a supply voltage V + = 12 V, an average power consumption P OFDM = 6 W is estimated for the OFDM case.
B. Switching Transmitter for OOK Signals
For transmission of OOK signals, for which an example is shown in Fig. 1d, linearity is not critical and the bias voltage V bias,OOK can be placed just at the border of the cutoff region. In this mode, the modulation amplitudes for V GS are selected so that the drain-source current through the transistor is practically switched off for modulation of a "0" (i.e., V GS = V bias,OOK ) and fully on for a "1" (V GS V bias,OOK ). In the "fully on" state, the transistor is not in the ohmic region, so the characteristics in Fig. 1b do not apply. Instead, the resistance between drain and source becomes very small (parameter R DS,on , usually in the order of mΩ). This means that in both of these states very little energy is dissipated in the transistor itself, as it is either blocking or transmitting with minimal resistance most of the time. Practically, the transistor will still operate in the ohmic region during each transition between states, since state changes are not immediate, but the occurrence of this state is minimized through OOK.
The used power in this switching mode is now dominated by the LED energy consumption and given by (2) for the "on" case. Since the transistor resistance is very low, the maximum current needs to be limited outside the shown circuit, e.g., by the voltage source, so that V + is not necessarily constant. We now regard the power as the sum of the LED and transistor power. For modulating a "1", we assume that the current I DS is externally limited at I LED , the LED's nominal current. The voltage over the LED is then the forward voltage of the LED V f,LED . As stated above, the transistor in this state has a drainsource resistance of R DS,on . The overall power is then: When transmitting a "0", i.e., I DS ≈ 0, practically no current flows and accordingly P OOK,0 ≈ 0. With equal distribution of 0s and 1s, the average used power is then P OOK = P OOK,1 /2. Assuming I LED = 300 mA, V f,LED = 3 V, and R DS,on = 10 mΩ, an average power consumption P OOK = 450 mW results.
C. Power Efficiency Comparison
The static energy usage estimated above differs by a factor >12 between the two models. For putting this value in relation to throughput, the effective modulation amplitude needs to be considered, since it determines the achievable SNR at the receiver. As shown in [17], the SNR level needed for a reliable transmission does not vary significantly between single-carrier and multi-carrier modulation if an adequate equalizer is used, but is rather determined by its spectral efficiency. This means that while it could be argued that the higher achievable throughput of OFDM reduces energy usage by shortening the time needed for a given transmission, that advantage fades when taking SNR requirements into account.
In our example, the modulation amplitude for the OFDM signal is derived from the peak-to-peak value of 2I bias,OFDM and the PAPR, for which we have observed 10 as a typical value. The average modulation amplitude without bias is then 2 I bias,OFDM / √ 10 = 316 mA. The modulation amplitude for OOK is given by its peak current I LED = 300 mA, as the balanced OOK signal has a PAPR of 1. The difference between the two transmitters is approx. 5%, which is negligible in the context of this estimate. Thus, both systems can be expected to reach a similar SNR at the receiver and, accordingly, equivalent throughputs. This conversely indicates that in a scenario where the power margin is sufficient for OFDM to achieve a higher spectral efficiency, the OOK system will be able to operate with a lower modulation amplitude, again reducing its energy usage.
These considerations show that the previous comparison of static energy usage serves as an estimate for overall energy efficiency, suggesting that the power used in analog frontends can be significantly reduced by using OOK with transmitter frontends operating in switching mode instead of OFDM. In reality, this gain will be reduced by various parasitic effects of the analog components, but nevertheless we expect a considerable benefit in energy consumption.
III. PULSED MODULATION PHYSICAL LAYER (PM-PHY)
This section presents the Pulsed Modulation PHY (PM-PHY) as used for our simulations, representing an early version of the corresponding PHY in the IEEE P802.15.13 task group [18]. The components frame synchronization, coding, and frequency domain equalization are separately described.
A. Frame Synchronization
The PM-PHY uses a synchronization preamble structure designed for a detector according to Minn [19], which follows an autocorrelation approach similar to the widely used Schmidl-Cox algorithm [20], but is able to produce a sharper output. This structure is created by repeating a subsequence multiple times with varying signs. The sign pattern determines the shape of the correlator output. At the receiver, this pattern needs to be known, but not the content of the subsequences. Here, a subsequence A N of variable length is repeated six times and inverted by the pattern ( + + − + − − ) to form the vector A prb = [A N A N A N A N A N A N ], where A N stands for the inverse of A N . This preamble structure was recommended in [21]. Binary Gold sequences [22] are used as a base for the subsequences A N . These are sets of pseudorandom noise (PN) sequences with low bounded cross-correlations between them, designed specifically for spread-spectrum multiplexing systems. They were selected for this particular property, as in the early stages of PHY development a parallel use of several different synchronization sequences was considered for separate purposes. Subsequences with lengths N seq = 8, 16, 32, 64 are considered, so that overall preamble lengths of N prb = 48, 96, 192, 384 result. For each N seq a maximum length PN-sequence of length N seq − 1, which forms the base for a set of Gold sequences, is appended with a balancing symbol and used as A N .
B. Coding
Header and payload are evaluated separately by means of the coding schemes defined for the respective frame parts. In the PM-PHY, 8B10B line coding [23] and Reed-Solomon (RS) forward error correction (FEC) [24] are used for both schemes. The header and payload coding schemes differ only by the selected word lengths and redundancies of the RS code, which are described below.
The central component of the coding schemes is the 8B10B line coding. It serves the purpose of ensuring a DC-free transmit signal, which is crucial for modulation in usual highbandwidth frontends, as their components often exhibit a high-pass characteristic [25].
In order to achieve the desired effect of strictly balancing the analogue signal, the line coding needs to be applied after FEC. Otherwise, the parity bits inserted by the FEC would be transmitted without line coding and introduce a DC-offset again. Applying line coding after FEC, though, deteriorates FEC performance, as even single bit errors can cause up to 5 erroneous output bits in 8B10B decoding. To circumvent this problem, following method is used [18]: Line coding is applied to the data symbols before FEC, and again to the parity symbols after FEC encoding. Since the RS code is a systematic code, the data symbols pass through the encoder unchanged and their line coding remains intact. At the receiver, this procedure is reversed. First, the line coding of the parity bits is removed, then FEC is decoded, and finally the (now error corrected) line coding of the data symbols is removed. A visualization of this method is given later on in Fig. 5b. To match the length of line-coded data to the word length of the FEC code, zeros are inserted before encoding and removed directly after. This is equivalent to shortening the FEC code, as is also done implicitly by the en-and decoder due to the chosen RS word and parity lengths.
The described method does not completely avoid the drawback of applying 8B10B line coding after FEC encoding, as the parity bits are still line-coded without any error correction, so that bit errors are potentially multiplied on the parity bits. Still, the deteriorative effect is alleviated, as it now only affects the fraction of the transmitted bit stream containing parity symbols. This is confirmed by the results shown in Section V-A2, specifically in Fig. 8.
For header coding, an RS code word with 24 data symbols and 12 parity symbols is defined (RS(36,24)), for payload coding with 248 data symbols and 8 parity symbols (RS(256,248)). Per definition of the Reed-Solomon code, the symbol length for the former configuration is 6 bits and for the latter 9 bits; so the respective code words have lengths of 216 and 2304 bits and contain 144 and 2232 bits of data. 1
C. Frequency Domain Equalization
As the PM-PHY is intended to work at high bandwidths up to 200 MHz, channel equalization is required to mitigate frequency selective impairments such as multi-path spreading and band-pass characteristics of physical components. In OFDM-based communication systems, frequency domain equalization (FDE) is commonly applied [6], [10]. This is the obvious choice, as the received signal is transformed to the frequency domain anyway, and each symbol is prepended with a cyclic prefix (CP) to avoid inter-symbol interference. Using reference symbols, the channel can be estimated in the frequency domain by simply dividing the received signal by the known reference symbols. For equalization, the data parts are then multiplied with the inverse of the estimated channel.
For single carrier, and especially pulse amplitude modulation (PAM) based systems such as the PM-PHY, this method requires the artificial creation of a block structure similar to OFDM symbols by inserting CPs in the time domain signals, reducing the throughput [25]. In addition, a dedicated transformation to frequency domain and back to time domain at the receiver is needed, which increases computational complexity. Despite that, the PM-PHY provisions a block structure oriented at the OFDM-based High Bandwidth PHY (HB-PHY), which is also defined in IEEE P802.15.13. Both PHYs use the same block and CP lengths. Eventually, the intention is to create synergies between the PHYs to make them as interoperable as possible and enable combined deployment for up-and downlink, for example. Here, the shortest CP setting from the HB-PHY with a length of 0.16 µs is used. Later versions of the PM-PHY also include a long CP of 1.28 µs, corresponding to the longest HB-PHY setting, which is then mandatory for the header and optional for the payload. The block length without CP is 5.12 µs in all cases. The block and CP lengths in symbols are thus proportional to the symbol rate and the overhead through CP insertion is 1/32 for all symbol rates at the short setting.
IV. SIMULATION ENVIRONMENT
In this section, the models and methods used for the simulations in this work are presented. First, the models for the optical channel and frontend impairments are introduced. Then, an SNR range for classification of the PM-PHY evaluation results is deduced from a virtual manufacturing hall scenario. Finally, the simulation setups used for the evaluation of frame detection and header/payload coding of the PM-PHY are presented.
These setups were applied both in a wideband and in a narrowband mode. Two optical channel models were used for the evaluation of the PM-PHY in the wideband mode at symbol rates R sym of 50, 100, and 200 MHz, considering FDE and an enhanced frame detection method. The narrowband mode evaluated the PM-PHY at R sym = 25 MHz and focused on the impact of high-pass characteristics in analog optical frontends on low bandwidth PM-PHY signals. Optical channel models and equalization were not regarded in this mode.
All simulations were carried out in MathWorks MATLAB.
A. Optical Channel Models
This section describes the two optical channel models that were used for the evaluation of the PM-PHY in the wideband mode. Corresponding channel impulse responses (CIRs) are shown in Fig. 2. The first channel model (Fig. 2a) reflects a line-of-sight (LOS) scenario, the second one a multi-path nonline-of-sight (NLOS) scenario [26]. These models represent exemplary conditions with very different characteristics. For this reason, they also serve as an official evaluation framework in the IEEE P802.15.13 task group [18].
The logarithmic plots in the insets in Fig. 2 show the frequency responses of the regarded channels. The LOS channel has a relatively flat frequency response, while the multi-path channel has a low-pass characteristic with deep fading notches at higher frequencies. This demonstrates the requirement for an FDE especially with the second model.
B. Frontend Models
In physical systems, the signal is influenced by the transmission characteristics of the analog optical frontends (OFEs) on both the transmitter and receiver side. Especially the highpass characteristic is critical for transmission of the pulsed waveforms of the PM-PHY, since it causes baseline wander and distortion of individual symbols. This effect is most pronounced at low symbol rates, since there are more signal components present at lower frequencies and longer runs of continuous signal states. Accordingly, the PM-PHY is evaluated with the frontend models in the narrowband mode at a symbol rate of 25 MHz. The OFEs are represented by band-pass filters that were modelled based on actual high-bandwidth prototypes. These models are also used as a reference in IEEE 802.11bb standardization [27]. Their frequency responses are displayed in Fig. 3. The transmitter OFE is modelled using 2th order highand 8th order IIR low-pass filters (Fig. 3a) and the receiver OFE using 4th order high-and low-pass filters (Fig. 3b). As the models were used to assess the PM-PHY's ability to cope with high-pass filtering, the cut-off frequency f c-hp of both the transmitter and receiver high-pass components was varied between 100 kHz, 500 kHz, and 1 kHz, corresponding to 0.4, 2, and 4% of the symbol rate. The cut-off frequency of the low-pass was left at 200 MHz, as it is out of the relevant frequency range at the symbol rate regarded here.
C. SNR Range from Multi-Point Transmission Model
In order to evaluate the SNR range in a typical industrial application scenario, a manufacturing hall, described by a geometrical multi-point transmission model, was considered (see Fig. 4). An exemplary arrangement of 12 equally spaced ceiling-mounted transceivers, under which a mobile unit is moving on a specified trajectory, was implemented. In the downlink, the transmitters on the ceiling simultaneously send identical signals to a single receiver on the mobile unit. In the uplink, the signal from the single transmitter on the mobile unit is received by the receivers on the ceiling using equal gain combining. This setup was used to simulate signal strength variations that were then converted to an SNR distribution. Note that this model is unrelated to the optical channel models presented in Section IV-A. At this stage, ideal LOS connections are regarded that only serve for deriving an SNR operating range for the modelled industrial application scenario. Fig. 4a shows the geometric setup of the manufacturing hall in the downlink configuration. A cluster of gray spheres is used to model the mobile unit with an upward facing receiver. The transmitters are arranged in a grid on the ceiling. The trajectory of the receiver is marked by a solid green line. The radiation patterns of the transmitters on the ceiling were modelled by a Lambertian function of order 0.5, the sensitivity pattern of the receiver on the mobile unit by one of order 2. These patterns are depicted as yellow cones. In the uplink configuration, transmitter and receiver characteristics are swapped, but the geometry remains the same. Note that the resulting channel is non-reciprocal due to the different optical characteristics of transmitters and receivers. Only LOS channel contributions are considered, as contributions from NLOS detection are typically insignificant and mostly additive to LOS signals at low bandwidth [10].
Using this model, a distribution of SNR values along the mobile unit's trajectory was generated, which is shown in Fig. 4b. The received signal power was calculated from the link distances and angles between transmitters and receivers and their radiation patterns. The noise power was calculated by assuming a fixed noise power per receiver, which was calibrated based on observations from lab experiments. This calibration assumed an SNR of 20 dB for a single link with frontends pointed at each other over a distance of 3 m. The TABLE I TABLE OF NOTATION observed SNR values represent the ratio of the accumulated power of all signal and noise components on the receiver side.
The results show that while in the downlink the 1st and the 99th percentile (visualized by the tinted area) are around 15 dB and 32 dB, respectively, in the uplink the SNR values are significantly lower, with the 1st percentile at 3 dB and the 99th at 21 dB. Since equal gain combining was used, this is mainly due to the noise of 12 receivers being added up, instead of noise from only one receiver in the downlink. The 1st percentile values are regarded as SNR thresholds for PHY evaluation in Section V-C.
D. Simulation Setups
Two simulation setups were implemented in MATLAB to evaluate the synchronization mechanism ("Synchronization setup") and the coding schemes for header and payload ("Coding setup"). Both setups used the same basic method. A transmit signal was passed through a channel or frontend model, additive white Gaussian noise (AWGN) was added to set a specific SNR, and the resulting signal was evaluated. In the following subsections, these setups are presented in detail. Separate subsections describe the channel representation and a parametrization utilizing both setups. Prior to this, the wideband and narrowband operation modes applied in the setups are described.
In the wideband mode, the optical channel models (Section IV-A) were applied but not the OFE models (Section IV-B), as the influence of the latter is low at high bandwidths. The PM-PHY was simulated at symbol rates R sym of 50, 100, and 200 MHz. The averaging method to detect the synchronization preamble also in strong multi-path conditions was investigated, and FDE was regarded for header and payload demodulation. For this purpose, CPs were inserted in the data stream. The goal was to assess the PHY's general capability to deal with different channels at large bandwidths. In the narrowband mode, a low symbol rate of 25 MHz was used and neither the enhancement to frame detection nor FDE were applied. The focus was on investigating the PHY's ability to deal with high-pass characteristics in analog frontends at a low symbol rate, e.g., in a low cost deployment with minimum complexity. At low symbol rates, the impact of the optical channel can be expected to be minimal under LOS conditions (see Section IV-A). For this reason, the OFE models were used here but not the optical channel models.
1) Synchronization Setup: For the evaluation of frame start detection, depicted by the block diagram in Fig. 5a, pseudo frames were constructed, consisting of the preamble vector A prb (see Section III-A) preceded by zeros and followed by random OOK symbols. These frames were passed through the channel simulation (Channel block) at the respective symbol rate. Then the cross correlation was calculated between the received signal and the ideal preamble sequence (Cross corr.) and the frame start was detected at the first position where the cross correlation output lay over a detection threshold (Thresh. comp.). The derivation of this threshold is described later on. The whole process was repeated 10, 000 times for each configuration. The dashed box Avg. represents an enhancement for the wideband mode simulations that is also described later on. Apart from that, the setup is identical for narrowband and wideband mode.
We selected cross correlation as an approach for frame synchronization, as it is able to produce an even sharper peak than the autocorrelation based detector that the preamble was designed for (see Section III-A). The usual drawback of cross correlation, its high computational complexity, is also alleviated in case of the PM-PHY. Since the preamble sequence is binary and can be represented as either −1 or +1, the computationally expensive multiply-and-accumulate operations, which the cross correlation is based on, transform into inversion and addition of the received samples.
Frame synchronization was then achieved by comparing the cross correlation output values to a fixed threshold. The amplitude of this threshold is critical for balancing false positive and false negative detections. If the threshold is too low, false detections are more probable; if it is too high, true frame starts are less likely detected. This detection threshold was derived as follows prior to the simulation: A vector containing 100, 000 random AWGN samples with an average power 5 dB higher than the preamble's signal power was generated. The cross correlation of the preamble with this noise vector was then calculated and the threshold was selected so that 1/1000 of the resulting values lay above the threshold, i.e., a false positive rate of 0.001 resulted for the detector.
Initial simulations showed that for channels with considerable inter-symbol interference (ISI), such as in the NLOS channel presented in Section IV-A, frame detection did not work reliably. This is due to the signal energy being spread out over multiple paths in the channel, which generates a smaller separate peak for each path in the cross correlation output. For this reason, following enhancement was made to the preamble detector in order to enable operation also in those channels: The output from the cross correlation was averaged over a small number of samples, and a corresponding timing tolerance was defined for the detector output. The desired effect is that the separate correlator output peaks will be summed up while the noise is reduced and a higher relative peak amplitude results. The maximum delay spread acceptable for this technique is given by τ max = N win * T sym , with the number of samples in the averaging window N win and the sampling time T sym = 1/R sym , with R sym the symbol rate. This method is represented in Fig. 5a by the dashed block 'Avg.'. Averaging window lengths N win = 1, 2, 4, 8 were considered and a synchronization tolerance of the same length was set. N win = 1 corresponds to the case without averaging.
2) Coding Setup: The simulation setup for header and payload coding schemes is depicted in Fig. 5b. The blocks exclusive to the wideband mode are dashed, the rest is identical between wideband and narrowband mode.
A data stream of 2 × 10 6 random bits was coded with the coding scheme described in Section III-B. First, 8B10B line coding was applied (8B10B), followed by FEC using the respective RS code (FEC), with a separate line coding of the parity symbols. The resulting bit-stream was modulated to OOK symbols at the selected symbol rate (OOK). Despite the denomination OOK, a bipolar electrical representation of the waveforms was actually used, meaning the OOK symbols were represented as +1 and −1. This is justified by the highbandwidth electrical circuits in physical OFEs usually working in an AC-coupled mode, which converts the unipolar optical waveforms to a bipolar form by removing the DC component. In the wideband mode, CPs were inserted (CP) before the signal was passed through the channel simulation (Channel).
On the receiver side, in the wideband mode the CP was removed again (CP -1 ) and block wise FDE was carried out (FDE) using the frequency domain representations of the ideal CIRs. Then in both modes OOK demodulation (OOK -1 ) and decoding of line coding and FEC was done (8B10B -1 ), (FEC -1 ). Reversing the scheme applied in the transmitter path, the line coding of the FEC parity bits was removed separately before FEC decoding. Finally, the received bit stream was compared to the original one and errors were counted (Error counting).
3) Channel Simulation: Fig. 5c shows the channel simulation block used for the wideband mode in the setups described above. The input signal is convolved with a CIR representing one of the channels described in Section IV-A (CIR) and then white Gaussian noise is added to the signal (AWGN) in order to set a specific SNR.
In the narrowband mode, using the block depicted in Fig. 5d, the incoming signal is first passed through the transmitter frontend model (Tx OFE), as described in Section IV-B. Then AWGN is added and the receiver side frontend model (Rx OFE) is applied. 4) Physical Layer Parametrization: Using the setups described above, a parametrization of the synchronization preamble was carried out in the wideband mode. The considerations in narrowband mode had a smaller scope and because of that only regarded one exemplary preamble length. As a goal for parametrization, the preamble should not be too short, as this affects the synchronization performance, but it should also not be arbitrarily long, in order to avoid overhead. To reach the goal of finding the optimal preamble length, the performances of frame synchronization, header coding, and payload coding were put in relation to each other. Accordingly, following guideline was defined: The SNR at which the payload coding scheme reaches a block success rate of r dec,pl = 0.9 should be greater than the SNR at which the header coding scheme reaches a block success rate of r dec,hdr = 0.99, which in turn should be greater than the SNR at which the frame synchronization mechanism reaches a detection rate of r sync = 0.999.
The staggered requirements for the different frame parts ensure that when transmitting full frames, data is not lost due to synchronization or header errors, respectively, where the payload can effectively be demodulated. In that way, this guideline also serves to assess the construction of header and payload coding. The block decoding success rates for header and payload are given by r dec = (1 − BER) Ncw , with BER the bit error rate of the decoded data stream and N cw the number of data bits in a respective FEC word in the PM-PHY coding scheme.
V. SIMULATION RESULTS
This section presents the results from the simulations with the setups described in Section IV. First, evaluations of the wideband setup are presented. Frame synchronization, coding schemes, and FDE are evaluated and the parametrization of the synchronization preamble is carried out. Then, the results from the narrowband setup are given, also containing an evaluation of frame synchronization and the coding schemes. Finally, the results are briefly discussed in the context of the multi-point model presented in Section IV-C.
A. Wideband Mode
In the following, the results generated in wideband mode are presented. As described before, the wideband mode was evaluated considering optical channel models as well as FDE.
1) Frame Synchronization:
Frame synchronization performance was evaluated for transmissions at symbol rates of R sym = 50, 100, 200 MHz in the LOS channel. For the multi-path channel, frame synchronization enhanced by the averaging method was evaluated for a symbol rate of R sym = 200 MHz.
The preamble sequence length has a large influence on the performance of the frame synchronization algorithm. Fig. 6 gives an overview of the evaluation of different synchronization preambles at R sym = 200 MHz with lengths from N prb = 48 to 384 samples over the LOS channel, where the performance is mainly limited by the AWGN. In Fig. 6a, the detection rate r sync is plotted over SNR, i.e., the average power ratio of the signal after application of the channel model (see Fig. 5c) and the AWGN. For preamble lengths of 96 samples and higher, it can be observed that a doubling of sequence length results in a penalty of about −3 dB. This is to be expected, as twice as many samples are added up and effectively averaged during the cross correlation calculation, canceling out the influence of AWGN, correspondingly. However, the performance decreases more steeply when using a sequence with only 48 samples. Here, the offset from the sequence with 96 samples is around 6 dB. This observation may be explained through side peaks in the autocorrelation of the synchronization preamble, which is made up of six repeated and alternated sequences. Even though the side peaks are intentionally kept low by the repetition pattern, their impact in relation to the main peak increases for shorter sequences, as the influence of noise is larger and the main peak is less distinct. Fig. 6b shows for each preamble length the SNR, at which the detection rate target of r sync = 0.999 is first exceeded. This graph confirms the decrease of roughly 3 dB for each doubling of synchronization sequence length for 96 samples and above, with a steeper descent between 48 and 96 samples.
Further simulations, for which the results are not displayed here, showed that in the multipath channel the basic synchronization did not work reliably for R sym > 50 MHz. As described in Section IV-D1, the reason for this is that the output of the cross correlation produces peaks for every signal path in the multi-path channel. Thus, for the multi-path channel the averaging method also described in that section was evaluated and was found to enable operation also at higher rates.
The results at a symbol rate of 200 MHz are shown in Fig. 7. In Fig. 7a, window lengths of 4 and 8 samples are compared for a preamble with N prb = 96 samples. The performance was best for N win = 8, with the penalty relative to the LOS case being less than 2 dB (compare Fig. 6). For a preamble length of 48 samples, the performance threshold could not be reached with any of the regarded window lengths. Since the results indicated that this might be due to the detection threshold being too high, the average power of the noise vector used for the deduction of the threshold (see Section IV-D1) was lowered from 5 to 0 dB relative to the OOK symbols in order to generate a lower threshold. The goal was to find out if under these circumstances the short preamble could also work, as this would save overhead in a real system. The results generated with the corresponding threshold are shown in Fig. 7b. A windowing of 4 samples reached the best performance, with a penalty of less than 1 dB with regard to the LOS case. Window sizes not shown here did not reach the detection rate threshold.
The results show that depending on the preamble length, the frame synchronization method based on cross correlation is able to function reliably even at SNRs < 0 dB in relatively frequency flat channels, which promises robust operation also in low power systems. At maximum symbol rate, a detection rate of r sync = 0.999 is reached between −6.6 dB for the longest and 6.4 dB for the shortest preamble. In the frequency selective multi-path channel, the enhancement based on averaging enabled reliable operation with a low SNR penalty compared to the LOS case.
2) Coding Schemes: An evaluation of the error correction capability through the combined FEC and line coding was performed. As described in Sections III-B and IV-D2, a nested code structure was created in order to minimize error multiplication effects through the necessary line coding. Fig. 8 shows the block error rate (BLER) of the header and payload Fig. 8. FEC block error rate (BLER) over bit error rate (BER) of the coded data stream. The BER variation was generated by adding AWGN at different power levels.
coding schemes over a range of input bit error rates (BERs), both with and without 8B10B line coding enabled. The BLER describes the ratio of FEC words still containing at least one bit error after decoding. By putting it in relation to the BER of the received data stream before decoding, a measure for the capability of error correction is provided. The range of different BERs on the received data was produced by adding AWGN at different power levels for this purpose. For setting the performance of the coding schemes in relation, an exemplary BLER threshold of 1 × 10 −3 is displayed.
The header coding scheme clearly outperforms the payload coding scheme. The BLER threshold is achieved around a BER of 3 × 10 −4 for the payload, and at over an order of magnitude higher for the header around 5 × 10 −3 . This is due to the much higher redundancy of the FEC used in the header coding scheme (code rate 2/3) compared to the one used for the payload (code rate 31/32). The error multiplication effect of 8B10B line coding becomes visible in the performance of the header coding scheme, but the impact is minor. It is barely visible for the payload coding, due to the portion of parity bits not protected through FEC and the absolute BER being much lower. These results validate the nested code design and confirm that header decoding is reliable whenever the payload can be decoded.
3) Frequency Domain Equalization: The coding schemes were again evaluated with and without FDE. Since the impact of FDE was marginal in the LOS channel, only the results for the multi-path channel are shown in Fig. 9. The output BERs of the uncoded data stream, a data stream encoded with the header coding scheme, and a data stream encoded with the payload coding scheme are set in relation to SNR. A symbol rate of 50 MHz was selected here to enable a comparison to the transmission without FDE, which did not produce useful results at higher rates.
Note that due to the number of 2 × 10 6 bits used for the simulation, error-free transmission was observed at SNRs with an actual BER below approx. 1 × 10 −6 , which means the measured BER appeared as 0. This cannot be displayed in the plots due to the logarithmically scaled y-axes, so they end at the last SNR where errors were still observed. The simulated SNR range was identical for all plots, nonetheless. Due to its better error correction capability (see Fig. 8), the header coding scheme achieved error-free transmission at a lower SNR than the payload coding scheme. Without FDE, it was reached for the header at SNRs above 18 dB, whereas the payload and the uncoded stream did not reach error-free transmission in the regarded range. Applying FDE showed a large impact. Error-free transmission was then achieved above 11 and 13 dB for header and payload, respectively. The uncoded transmission was error free at an SNR of 18 dB. This shows that in combination with FDE the payload coding provides a significant benefit, lowering the SNR threshold by 4 dB despite its relatively low redundancy.
These results confirm that FDE enables transmission for the PM-PHY in frequency-selective channels even with strong inter-symbol interference.
4) Parametrization of the Synchronization Preamble:
In the last step, results from frame synchronization and header and payload coding evaluations were set in relation to each other in order to assess the ability of the PM-PHY to transmit data reliably and to find the optimal synchronization preamble length. The highest defined symbol rate R sym = 200 MHz was selected for this evaluation.
Based on the previous results, for the simulations in the LOS channel neither FDE nor averaging were necessary, i.e., N win = 1. Fig. 10a shows the corresponding results. The points where r sync , r dec,hdr , and r dec,pl first exceeded their respective thresholds, according to the guideline defined in Section IV-D4, are marked at 6.4, 8, and 10.4 dB. The last of these values can serve as a threshold for reliable operation of the whole PHY. A synchronization preamble length of 48 samples has been selected here, as the performance was already sufficient to meet the required criteria.
For the multi-path channel, FDE and the enhanced synchronization were applied. The results are displayed in Fig. 10b
B. Narrowband Mode
The narrowband mode of the PM-PHY layer was evaluated in a similar way as the wideband mode. For these simulations, a low symbol rate R sym = 25 MHz was used. The impact of the high-pass characteristic of the OFEs was assessed at cutoff frequencies 100, 500, and 1000 kHz, corresponding to 0.4, 2, and 4% of the symbol rate. 1) Frame Synchronization: As for the wideband mode, frame synchronization was evaluated by the preamble detection rate r sync over SNR level, in this case for different high-pass cut-off frequencies. The longest preamble with a length of 384 samples was regarded. Fig. 11 shows the performance for different cut-off frequencies for the transmitter side model (Fig. 11a) and the receiver side model (Fig. 11b). A cut-off frequency of 0 Hz denotes the reference simulation without the frontend model.
With the Tx OFE, the detection rate threshold r sync = 0.999 (see Section IV-D4) is reached at −6 dB for f c-hp,tx = 0 and 100 kHz, and at −5 dB for f c-hp,tx = 500 kHz and 1 MHz. While with the Rx OFE the threshold is reached at similar values, the graph shows that the deterioration at lower SNR is larger. Possible explanations for this are the "coloration" of the noise in the case of filtering on the Rx side, which does not occur for the Tx side filter, and the higher order of the high-pass filter in the Rx side model (see Section IV-B), causing a steeper decline of power towards low frequencies.
Notably, in all configurations the synchronization mechanism achieved the detection rate threshold for an SNR of −4 dB or less, which shows that the frame synchronization is robust against the impact of high-pass filtering at up to 4% of the symbol rate.
2) Coding Schemes: Similar evaluations as in the wideband setup were carried out for the header and payload coding schemes, applying the frontend models at different cutoff frequencies at a symbol rate of 25 MHz. In Fig. 12, the impact of the OFE model on the output BER is shown for header and payload coding with different cut-off frequencies. As with the frame synchronization, it is visible that filtering on the receiver side (Fig. 12a) has a larger impact on the BER than on the transmitter side (Fig. 12b). While the plots for OFEs with a cut-off frequency of 100 kHz barely deviate from the reference simulation (0 Hz), the impact is more clearly visible for frequencies > 100 kHz.
Similar to the results shown in Section V-A3, due to the number of 2 × 10 6 bits used for the simulation, error-free transmission (found BER = 0) was observed at SNRs with an actual BER below approx. 1 × 10 −6 . Since this cannot be displayed on the logarithmically scaled y-axis, the corresponding plots end at the last point where errors were still observed. The simulated SNR range was here, too, identical for all shown plots.
C. Application in a Multi-Point Scenario
In Section IV-C, an exemplary model of a manufacturing hall was used to generate SNR ranges for a typical industrial communication scenario. Putting the results from the previous sections in relation to these ranges delivers valuable insights: The SNR range for downlink operation with a single receiver starts at 15 dB, whereas the PM-PHY could operate reliably in wideband mode at SNR ≥ 10.4 dB in the LOS scenario and at SNR ≥ 14.4 dB in the multi-path scenario. This indicates that the PM-PHY will work reliably both in frequency-flat channels and in channels with strong reflective (non-line of sight) components at a high bandwidth. The same is valid in the narrowband setup when high-pass OFE models with cut-off frequencies of 100 kHz and less are used. As shown previously, in this case data transmission was possible at an SNR of 12 dB. Thus, all these configurations can be expected to operate in the regarded downlink setup.
In the uplink setup, however, the SNR range starting at 3 dB exceeds the reliable operative ranges of the PM-PHY in all tested modes and scenarios. As mentioned earlier, the main reason for this low SNR limit is the equal gain combining considered for the uplink of the manufacturing hall model, which causes the received signal to be affected by the noise generated in all 12 receivers. Hence, a reduction of the number of receivers can be expected to relax this limit decisively. An efficient measure to enable PM-PHY operation also with the large number of receivers would be to replace equal gain combining by an adaptive method such as maximum ratio or selection combining [28], [29].
Another approach would be to repeat PM-PHY signals and average the received copies in order to reduce the noise power, effectively increasing the SNR by the repetition factor. For example, in order to reach the SNR limit in the wideband LOS case at 10.4 dB, averaging over 6 repetitions would be necessary, which would correspond to an improvement of the SNR by 7.8 dB. This clearly represents a significant overhead, which might however be acceptable for the uplink in some systems, especially when adaptive re-transmissions are used.
Finally, the comparison with the manufacturing hall model as a reference showed that in a medium sized network using 12 access points, operation in the downlink can be expected to work reliably, whereas the uplink requires either additional overhead or an adaptive combining method.
VI. CONCLUSION
A complete physical layer based on on-off keying modulation, called the Pulsed Modulation PHY, was evaluated, taking into account the influence of various channel effects. Due to the low peak-to-average power ratio of on-off keying, the same signal-to-noise ratio can be reached at a lower peak transmitter power compared to orthogonal frequencydivision multiplexing. A decisive advantage is that with on-off keying, switching transistors can be used in the LED driver, while a linear driver is otherwise needed. An estimation based on a simple model, taking into account the same throughput and the same average modulation amplitudes, quantified power savings of one order of magnitude in the LED driver, which is particularly useful in battery-driven devices.
The physical layer components were assessed in simulations by means of the success rates of frame detection at different preamble lengths and header and payload decoding, applying different channel impairments. The impact of exemplary optical channels was regarded in a wideband mode at symbol rates up to 200 MHz. High-pass filtering common in analog frontends was simulated in a narrowband mode at a symbol rate of 25 MHz. The simulation results have shown that the Pulsed Modulation PHY has the potential of enabling on-off keying based optical wireless communications at all regarded symbol rates, even under difficult channel conditions. Based on a guideline considering frame detection, header, and payload decoding success rates, we found signal-to-noise ratio thresholds of 10.4 dB in a line-of-sight channel and 14.4 dB in a frequency-selective multi-path channel. Considering the high-pass characteristics of the optical frontends, a limit for the cut-off frequency in the order of 2% of the symbol rate was found for reliable operation. The presented results provide detailed insights into the performance of the Pulsed Modulation PHY in IEEE Std. 802.15.13 and give a prospect of future applications for optical wireless communications systems in an industrial context.
APPENDIX
The frequency domain equalization used here is based on the following algorithm. The input vector r is split up into blocks of length N blk + N cp . The block length N blk and CP length N cp in symbols depend on the symbol rate, since the block and CP durations are constant for all rates (see Section III). From each block a window of N blk samples s m , starting a few samples before the end of the CP, is selected and transformed to the frequency domain by fast Fourier transform, resulting in the frequency domain representation S m .
The known channel impulse response h is extended with zeros to a length of N blk and also transferred to the frequency domain, resulting in the frequency response H. The equalizer coefficients W are then calculated using the minimum mean square error (MMSE) criterion W(n) = H(n)/(|H(n)| 2 + σ 2 n ), with n = 1..N blk and the noise variance σ 2 n . Equalization is then carried out by calculating S eq m (n) = S m (n) · W(n). The result is transferred back to time domain by inverse fast Fourier transform and the equalized signal vector s eq m results. The vectors s eq m for all blocks are concatenated to form the equalized received signal vector r eq without CPs. | 12,426.4 | 2021-03-01T00:00:00.000 | [
"Physics"
] |
Precise point positioning with GPS and Galileo broadcast ephemerides
For more than 20 years, precise point positioning (PPP) has been a well-established technique for carrier phase-based navigation. Traditionally, it relies on precise orbit and clock products to achieve accuracies in the order of centimeters. With the modernization of legacy GNSS constellations and the introduction of new systems such as Galileo, a continued reduction in the signal-in-space range error (SISRE) can be observed. Supported by this fact, we analyze the feasibility and performance of PPP with broadcast ephemerides and observations of Galileo and GPS. Two different functional models for compensation of SISREs are assessed: process noise in the ambiguity states and the explicit estimation of a SISRE state for each channel. Tests performed with permanent reference stations show that the position can be estimated in kinematic conditions with an average three-dimensional (3D) root mean square (RMS) error of 29 cm for Galileo and 63 cm for GPS. Dual-constellation solutions can further improve the accuracy to 25 cm. Compared to standard algorithms without SISRE compensation, the proposed PPP approaches offer a 40% performance improvement for Galileo and 70% for GPS when working with broadcast ephemerides. An additional test with observations taken on a boat ride yielded 3D RMS accuracy of 39 cm for Galileo, 41 cm for GPS, and 27 cm for dual-constellation processing compared to a real-time kinematic reference solution. Compared to the use of process noise in the phase ambiguity estimation, the explicit estimation of SISRE states yields a slightly improved robustness and accuracy at the expense of increased algorithmic complexity. Overall, the test results demonstrate that the application of broadcast ephemerides in a PPP model is feasible with modern GNSS constellations and able to reach accuracies in the order of few decimeters when using proper SISRE compensation techniques.
Introduction
Precise point positioning (PPP) has emerged more than two decades ago as an alternative method to differential carrierphase positioning (Malys and Jensen 1990;Zumberge et al. 1997). While the differential real-time kinematic (RTK) approach relies on the elimination of measurement errors or nuisance parameters through differential positioning with respect to a real or virtual reference station, the PPP technique relies on accurate models and correction information to achieve a similar positioning accuracy (Héroux et al. 2004;Kouba and Héroux 2001).
Aiming at decimeter to centimeter level accuracy, both offline and real-time applications of PPP build on the use of the most accurate orbit and clock products. With limited exceptions (Gunning et al. 2019;Hadas et al. 2019), broadcast ephemerides have not been considered as a suitable option for PPP in view of their limited accuracy. However, two new GNSS constellations, Galileo and BeiDou-3, have advanced and are now close to or already in operational status. They provide broadcast ephemerides with significantly smaller orbit and clock errors than the legacy systems. Galileo, in particular, has a global average root mean square (RMS) signal-in-space range error (SISRE) of only 20 cm, which is approximately a factor of three smaller than for GPS (Montenbruck et al. 2018). The main driver behind the low SISRE are frequent uploads of updated broadcast ephemerides to the Galileo satellites, together with the use of very precise passive hydrogen maser clocks through the entire constellation. Modernized and more stable clocks have also been deployed in the GPS constellation with the Block IIF satellites. Similar and even better clocks are also used in the latest generation GPS III satellites, which promise a further decrease in the SISRE over the next years.
The low SISRE of Galileo makes this constellation already today the most promising candidate for PPP with broadcast ephemerides in applications aiming at accuracy from sub-meter to a few decimeters. The application of broadcast ephemerides in a PPP model appears of particular interest for real-time positioning, as it eliminates the dependency on a PPP correction data stream.
To minimize the impact of broadcast ephemeris errors in PPP, we study two different strategies. Following an analysis of SISREs of GPS and Galileo to characterize the current quality of the respective broadcast ephemerides, the relevant algorithms are presented. Subsequently, practical tests with permanent reference stations are presented to assess the achievable accuracy of PPP with broadcast ephemerides. Furthermore, these are complemented by a boat test to demonstrate the practical use of the method on a moving vehicle under realistic conditions.
Assessment of GPS and Galileo broadcast ephemeris errors
The achievable PPP accuracy with broadcast ephemeris (BCE) products mainly depends on the magnitude of orbit and clock errors in those products. For a comprehensive assessment of these errors, the SISRE is typically used as a metric (Montenbruck et al. 2018). A recent analysis yields SISRE RMS values of approximately 21 cm for Galileo and 35 cm for GPS Block IIF satellites (Wu et al. 2020). For a proper understanding of the PPP tests with permanent reference stations described in the next section, a dedicated SISRE analysis was performed for the entire month of December 2019, considering all healthy satellites in the GPS and Galileo constellation. GPS and Galileo broadcast ephemeris products from the International GNSS Service (IGS; Johnston et al. 2017) have been compared with precise products from the Center for Orbit Determination in Europe (CODE; Prange et al. 2020a, b). The results are depicted in Fig. 1, which shows the cumulative distribution of SISRE values for both constellations.
The plot confirms the finding that the Galileo constellation exhibits significantly smaller broadcast orbit and clock offset errors compared to GPS. The RMS SISRE for Galileo reaches a value of only 12 cm during this time period, while GPS has an RMS SISRE of 50 cm. The 95th-percentile values are roughly twice as high and amount to 24 cm for Galileo and 97 cm for GPS. The significantly lower errors for Galileo can be explained by the use of highly stable passive hydrogen maser clocks onboard most satellites, limiting the clock prediction error. Second, Galileo offers a much higher upload rate of the broadcast navigation data compared to GPS, which reduces orbit and clock extrapolation errors. It can be concluded from this analysis that the errors which need to be compensated by the additional SISRE parameter in the state vector have different magnitude for both constellations and also have different temporal behavior. Therefore, different initial standard deviations, as well as process noise settings, must be used.
In addition to the SISRE values, the magnitude of orbit and clock offset discontinuities are relevant. The SISRE modeling in the Kalman filter assumes only small temporal variations within limits defined by the process noise. This assumption is, of course, violated when a handover of two consecutive batches of broadcast ephemeris sets happens. A change in the issue-of-data counter indicates such handovers for ephemeris (IODE) or clock (IODC) in case of GPS or in the IODnav counter in case of Galileo. The modeled range from satellite-to receiver-antenna may be affected by a discontinuity in either the projected orbit onto the user lineof-sight (LOS) vector or a discontinuity in the clock offset, or both. Depending on the magnitude of the discontinuity, it can either go unnoticed or lead to a rejection of measurements, re-initializing the SISRE state after data quality control.
Orbit and clock offset discontinuities on handovers of consecutive broadcast ephemeris records have been analyzed for the same time period for all satellites of GPS and Galileo constellations. The discontinuities have been determined from the difference between the current and the previous broadcast ephemeris data evaluated at the epoch at which a new broadcast ephemeris set has become available. The clock discontinuities are simply the difference between the clock offsets of the current and the previous broadcast ephemeris set. The orbit discontinuities are computed as the orbit difference between the current and the previous set mapped onto the LOS vector of a ground-based user at the worst user location (WUL) (Li et al. 2011). This user location corresponds to the position on the earth, within the visibility area of a satellite, at which the discontinuity has the largest impact on the range between satellite and receiver. It thus yields a conservative assessment of the impact of orbit discontinuities.
The corresponding results are depicted in Figs. 2 and 3, which show the frequency of occurrence of discontinuities of a certain size with a bin size of 5 cm. Figure 2 shows the statistics of orbit discontinuities. It can be observed that about 75% of all discontinuities for Galileo are less than 5.0 cm, whereas GPS is more often affected by larger values. The 95th-percentile is 9.9 cm for Galileo and 48.7 cm for GPS. Similar to the overall SISRE, this difference can be explained by the shorter update interval of the orbit information onboard the Galileo satellites.
The statistics of the clock discontinuities are depicted in Fig. 3. Like the orbit, most of the clock discontinuities for Galileo are smaller than 5.0 cm. Almost 85% of all discontinuities fall into this bin. A higher percentage of larger discontinuities is present for GPS. The 95th-percentile is 10.2 cm for Galileo and 26.7 cm for GPS. It should be noted that orbit and clock discontinuities exceeding 0.5 m are summarized in the rightmost bar. Only very few of these large discontinuities exist, which can reach magnitudes of a few meters.
All static and dynamic tests presented in our research are performed with a data interval of 30 s. At this time scale, the time variations in the clock errors are bigger than those of the orbit, dominating the rate of change. The consequence is that the time variation of the SISRE not only differs between constellations but also between different satellites. Those with more stable clocks, such as the GPS IIF and GPS III rubidium clocks, and the Galileo hydrogen masers, are affected by a smaller variation of the clock error and need smaller process noise. Satellites with less stable clocks, such as the IIR rubidium and IIF cesium atomic frequency standards, require higher process noise to account for these variations. For simplicity, no satellite-specific settings are applied in our tests. However, different process noise is assigned for the GPS and Galileo constellation to account for the fact that the GPS system has a larger percentage of satellites with higher clock noise, whereas Galileo uses predominantly highly stable passive hydrogen masers.
PPP algorithm description
The functional models of pseudorange and carrier-phase observations in PPP rely on using precise correction data or models to eliminate unknown terms in the measurement equations. As it is common practice, dual-frequency observation combinations are used to remove the ionospheric delay up to first order. We start the derivation of the observation equation with these standard models for code and phase measurements (Kouba et al. 2017) and (1) where p and are the ionosphere-free combination of pseudorange and carrier-phase measurements, is the geometrical range between the satellite's and the receiver's antenna reference points, and are the corrections for code and phase center offsets for transmitting and receiving antennas, c is the speed of light, dt r and dt s are the receiver and satellite clock offsets, T is the modeled tropospheric delay, dT is an additional, estimated tropospheric delay correction, is the wavelength of the ionosphere-free combination, A is the float-valued ionosphere-free combination of carrier phase ambiguities, is the carrier-phase wind-up, and where e and are the combined noise and multipath errors for pseudorange and carrier-phase. The user position coordinates x r , y r and z r are included in the geometric range where x s , y s and z s are the coordinates of the satellite.
The observations are processed in a Kalman-filter, which estimates the position, receiver clock offset, and ambiguities A 0 ...A n for n tracked satellites as part of the filter's state vector When computing dual-constellation solutions, a receiver clock offset for each constellation is estimated. The functional model for pseudorange and carrier-phase observations in (1) and (2) and the state vector in (4) represent a typical formulation for PPP with float ambiguities and ionospherefree dual-frequency observations. It would normally be used with precise orbit and clock products, and the ambiguities are treated as constant parameters. The typical PPP methodology is here referred to as PRE. The same formulation is also used in a second method, where precise orbit and clock products are replaced with broadcast ephemerides. This method does not include any strategy for SISRE compensation. It is here referred to as BCE.
For simplicity, no motion model is used, and all states are predicted as constants in the time update. For state vector components treated as random walk parameters, white process noise with variance q = 2 P ⋅ Δt∕ P at a sampling interval Δt is used in most cases. For the clock offset, process noise with a very large variance q = 2 P is applied irrespective of the filter step size. In this way, clock offsets are essentially estimated as free parameters at each epoch. The filter settings for the initial standard deviation 0 , process noise standard deviation P and time constant P are shown in Table 1.
As a first approach for SISRE compensation, named BCE1, we use the same model with broadcast orbits and clocks. In contrast to the standard PPP modeling, process Table 1 Kalman-filter noise settings for state vector elements The initial standard deviation 0 and the standard deviation P and time constant P are listed for both processing strategies. In method BCE1, the process noise is applied to the ambiguities, and in method BCE2, a dedicated SISRE state is used (2008) for real-time orbit determination of earth orbiting satellites and found to be effective for the compensation of broadcast ephemeris errors in that application. The second approach, named BCE2, is to include the projected orbit and clock errors as additional parameter s into the pseudorange and carrier-phase observation equations as suggested by Gunning et al. (2019) and The following parameters of (5) and (6) are now part of the filter's state vector where s 0 ...s n are the projected SISRE values for n tracked satellites. In this approach, SISREs are explicitly estimated using a separate parameter for each tracked satellite and process noise is applied to account for their temporal variations. The initial standard deviation for the SISRE parameters in Table 1 corresponds to the RMS value from the broadcast ephemerides assessment of Fig. 1.
The process noise values for methods BCE1 and BCE2 were defined from the results of a sensitivity analysis. The basic assumption of such analysis is that there exists a value of the process noise that yields the best accuracy. This minimum can, therefore, be selected as the best process noise. The same test case was thus repeated using different values for the process noise standard deviation, over a range from 0 = + + c dt r − dt s + (T + dT) + s + (A + ) + .
(7)
x = x r y r z r dt r dT s 0 ...s n A 0 ...A n to 15 mm. The process noise was applied at a 30 s measurement update interval. Observations from a pool of 11 stations, chosen from the IGS network and depicted in Fig. 4, were used to compute 24 h single-constellation solutions over a month (December 2019). The solution was computed in static conditions for both methods BCE1 and BCE2. The three-dimensional (3D) RMS position errors were averaged over all tests to obtain a single accuracy metric for each process noise value. The sensitivity analysis was performed separately for BCE1 and BCE2 and for both constellations to obtain a process noise value for each case. The results are depicted in Fig. 5.
For Galileo, the process noise values for which the minima are reached correspond to 1 mm (BCE1) and 3 mm (BCE2), while for GPS, the minima are found at 3 mm (BCE1) and 10 mm (BCE2). This is in line with the fact that Galileo is characterized by a smaller overall SISRE than GPS. The results indicate that the BCE2 method is not very sensitive to process noise for both constellations. However, the pronounced dips around the minima indicate high sensitivity to the process noise for BCE1. The figure also denotes how the absence of process noise causes an important deterioration of the position accuracy.
The estimated receiver position is corrected for by the displacement due to solid Earth tides and pole tides following the IERS 2010 conventions (Petit and Luzum 2010). Furthermore, an eccentricity correction accounting for the offset between antenna reference point and marker position is applied when necessary. In case, the clock reference signals of the broadcast or precise clock product differ from that of the pseudorange observations processed in the PPP, and the corresponding differential code bias (DCB) corrections are applied. Precise clock products from the International GNSS Service (IGS; Johnston et al. 2017) typically refer to the L1 and P(Y)-code for GPS and the E1 and E5a signals for Galileo. In the case of broadcast ephemerides, the clock offsets refer to the same signals for GPS but can refer to either E1 and E5a for FNAV messages or E1 and E5b for INAV. The Galileo FNAV messages have been used exclusively for all analyses in this paper. The satellite clock offsets are furthermore corrected for periodic effects of special and general relativity, and the modeled range is corrected for the Shapiro effect (Ashby 2003).
The satellite position is corrected for the rotation of the earth during signal flight time, also known as the Sagnac effect (Enge and Mira 2006). The algorithm uses the global mapping function (GMF, Boehm et al. 2006) with atmospheric parameters based on the global pressure and temperature (GPT) model (Boehm et al. 2007). An additional tropospheric correction is estimated based on the non-hydrostatic mapping function of the GMF model. Antenna phase-center offsets and phase variations are modeled using the igs14. atx offsets and patterns (Rebischung and Schmid 2016). In accordance with the current MGEX practice, GPS L1/L2 calibrations are used in place of missing Galileo E1/E5a. In the case of PPP with precise products, the corrections are applied for transmitter and receiver antennas. For broadcast ephemerides, in contrast, no satellite antenna offset needs to be applied, since broadcast orbits are already referred to the satellites' antenna reference point rather than the center of mass. Finally, the carrier-phase wind-up effect is modeled using satellite-type dependent attitude models for the different generations of satellites. GPS Block IIA/IIR satellites were modeled according to Kouba (2009), while Block IIF was modeled according to Dilssner (2010). Galileo was instead modeled following GSC (2019, Sect. 3.1.1.) for IOV satellites and GSC (2019, Sect. 3.1.2) for FOV satellites.
The ionospheric-free combinations of pseudorange and carrier phase observations are processed in the measurement update. An elevation-dependent weighting function is used for both observation types. An elevation cut-off angle of 10° is used. The assumed measurement standard deviations are summarized in Table 2. The individual values have been chosen based on the average measurement residuals over the set of stations used in the test case.
For the test performed with IGS monitoring stations, daily observation data at 30 s sampling and corresponding broadcast ephemerides received by the permanent reference station data were obtained from the IGS in the receiver independent exchange format (RINEX). For the PRE test, CODE Final MGEX precise ephemeris products (Prange et al. 2020a, b) with 5 min orbit step size and 30 s clock sampling were used. The respective clock solutions are referenced to semi-codeless L1/L2 P-code signal tracking, identified by RINEX observation codes C1W and C2W, for GPS and E1/E5a pilot tracking, identified as C1C and C1W, for Galileo. For DCBs, the daily products provided by the Chinese Academy of Sciences (CAS) were used (Wang et al. 2016). For all other tests, LNAV broadcast ephemerides of GPS and FNAV of Galileo were used. The IGS weekly global solutions for the station positions were used as a reference against which the 24 h solutions were compared.
PPP results
The models described in this study were characterized by a series of tests. The algorithm was applied to a series of different cases in terms of location, time, and conditions. Data from a pool of IGS stations over multiple days were used for the tests. These analyses are described in the following subsection. A kinematic boat test was performed with a receiver on a motorboat on Lake Ammer, in southern Bavaria. The data were processed a-posteriori, and the results of this test are described in the second sub-section.
Tests with IGS monitoring stations
The first part of the study focuses on a series of tests that were carried out using stations and data from the IGS network, given their high availability in terms of time and geographic locations. The set of 11 stations chosen for the analyses is depicted in Fig. 4. All tests are based on a month of observations and ephemeris data covering December 2019. The same data have been used to compute 24-h solutions with the four different methods listed in Table 1. All tests were performed in both static and kinematic mode, even though the reference station antenna positions were static. In order to characterize the capabilities of Galileo and GPS, single-constellation solutions were computed for each system. Along with those, dual-constellation GPS + Galileo solutions were also estimated. This gives the possibility to study the application of our approach to multi-GNSS positioning, along with the effects that a larger amount of tracked satellites can have on the accuracy. This is particularly relevant for Galileo, for which, in certain situations, only a reduced number of healthy satellites could be tracked simultaneously in the test period. When computing the statistics of each solution, the first 120 epochs, i.e., one hour, were removed to exclude the convergence phase of the Kalman filter. For each set of tests with similar conditions, the RMS position errors were averaged to obtain a single value that characterizes the test. Solutions that showed noticeable deviations from the main observed distribution were removed from the statistics. Tables 3 and 4 list the results of the four different methods for static and kinematic conditions, respectively. Along with the three-dimensional RMS, the horizontal two-dimensional (RM) and the vertical RMS are shown, giving the possibility to assess the two separately. The 3D RMS position error statistics of the stations are depicted in Fig. 6 for the static case and Fig. 7 for the kinematic case with a box-and-whiskers diagram. The results of the test with precise products have been omitted in the figures because their small size made a visual comparison difficult.
The tests with precise products, yielding a horizontal accuracy in the order of 1 cm for static and 3 cm for kinematic conditions, demonstrate the validity of the employed PPP models and algorithm. The values are very similar for all three system cases, with only subtle differences that are mostly due to the number of satellites available for each case.
When it comes to the three solutions with broadcast ephemerides, the results indicate that GPS has the worst performance in all cases. This confirms the expectations, given the larger SISRE for GPS. The Galileo-only and the dual-constellation solutions, on the other hand, show better accuracy and similar behavior between them. These facts already indicate that the dual-constellation case seems to be able to bring together the robustness of GPS in terms of the number of satellites and the smaller SISRE which characterizes the modern Galileo. The two methods applying a SISRE compensation technique, BCE1 and BCE2, yield similar results. In both singleconstellation solutions, the 3D RMS values are within 5% of each other and within 15% in the dual-constellation case. In every case, the improvement compared to BCE is substantial, from a minimum of 35% for Galileo up to 50% for GPS. The achievable 3D accuracy is in the order of 10 cm for Galileo and dual-constellation and 25 cm for GPS. The horizontal 2D accuracy is as small as 7 cm for Galileo and 18 cm for GPS. As a way of example, the 24-h coordinates time series obtained with the four different methods in static conditions are depicted together in Fig. 8 for a selected station.
It is possible to observe how the SISREs induce a deviation from the reference position in BCE, and how this deviation is mitigated in BCE1 and BCE2. The effect is particularly noticeable in the East and Up component.
A method comparable to BCE was used in Hadas et al. (2019), obtaining an accuracy that compared to our results is up to 40% better for Galileo and up to 50% for GPS. The reason for this difference is attributed to the different filter settings and mostly to the smaller cut-off angle, which is 5° in the mentioned work and 10° in ours. However, the same results show an accuracy similar to that found with BCE1 and BCE2, suggesting that SISRE compensation techniques can counteract the adverse effects of a higher cut-off angle. The results yield a similar picture in kinematic conditions, where all three constellation cases show similar behavior with all methods. Both BCE1 and BCE2 bring a marked improvement with respect to BCE and are characterized by similar accuracies. For the Galileo-only and the dual-constellation cases, the proposed strategies reach a 3D RMS position error within 30 cm and a horizontal one below 20 cm. GPS shows values that are roughly twice as big. In kinematic conditions, compensating the SISRE with either model improves the solution with respect to the BCE model by 50% for Galileo and dual-constellation and by 65% for GPS. Like it is the case for the static tests, the best accuracies show one order of magnitude difference compared to the standard PPP approach. For kinematic conditions as well, the 24-h coordinates time series of the different solutions are depicted in Fig. 9 for a selected station. In this case, as well, it can be observed how the amplitude of the deviation from the reference position is much bigger for BCE and for the other two approaches with broadcast ephemerides. In particular, the last few hours of the solution depict a sudden deterioration of the BCE solution, which cannot be observed for BCE1 and BCE2.
In kinematic conditions, the difference between the results found with BCE and those from Hadas et al. (2019) increases. In particular, the BCE GPS-only solution of Table 4 if worse by a factor of four. The difference is attributed to the different filter settings, mainly to the cut-off angle. As in the static case, the improvement brought by the compensation techniques in BCE1 and BCE2 bring the values close together. In the Galileo-only solution of Table 4, the 2D RMS position error is actually almost 40% smaller than Hadas, while for GPS the same value is roughly 20% bigger.
Test with kinematic boat measurements
One of the reasons why PPP is such an attractive approach is the ability to perform absolute positioning with an accuracy that is virtually independent of the location. Since the proposed strategies aim at understanding the capabilities of PPP when precise ephemerides are not available, the studying of a real-life kinematic scenario is of particular interest to us. On September 11, 2019, a motorboat was driven on Lake Ammer, in southern Bavaria, to record GNSS observations on a moving vehicle for a period of 1 h and 20 min. Dualfrequency measurements of GPS (L1 + L2 P(Y)) and Galileo (E1/E5a) were collected with an AsteRx SB receiver connected to a Trimble Zephyr 3 Geodetic antenna. The setup is depicted in Fig. 10. In addition to the observations, the receiver recorded an RTK solution, which was later used as a reference for assessing the accuracy of the different approaches. The RTK reference antenna, located at the DLR site in Oberpfaffenhofen, was at a distance of approximately 15 km.
The methods used for the boat tests are the same four listed in Table 1 and used in the previous analysis. Similar to the tests with IGS monitoring stations, one dual-constellation (Galileo + GPS) solution and one single-constellation solution for each system were computed. The tests were performed in kinematic mode. In these boat tests, the filter was updated at 1 s steps. Given the short period of the test of less than 1.5 h, this was necessary to increase the number of epochs processed. The first 5 min of data were removed from the statistics.
The results, divided in vertical, 2D horizontal and 3D RMS position errors, are listed in Table 5. Compared to the kinematic solutions of Table 4, the standard PPP solutions show a deterioration by a factor of five for all cases. A degradation is expected given the less rigorous conditions of the boat test compared to those of the monitoring stations, like strong multipath due to surrounding water.
Concerning the BCE case, the accuracy obtained in the boat tests is in line with the accuracies obtained with individual IGS monitoring stations. GPS is once more characterized by the worst 3D accuracy, in the order of 1 m in this case. Galileo yields the best accuracy of 0.5 m, while the large SISRE of GPS causes the dual-constellation solution to fall between the two single-constellation cases. The coordinates time series of the dual-constellation solutions are depicted in Fig. 11, where it can be observed how the East component causes an important deterioration of the horizontal 2D accuracy. The solution with precise orbit and The two proposed approaches show similar accuracies in all three cases, with 3D RMS values within 10% of each other for Galileo and dual-constellation, and within 20% for GPS. Compared to BCE, the improvement brought by the SISRE compensation techniques is up to 30% for Galileo, and 60% for GPS and dual-constellation. Overall, BCE1 and BCE2 show a 2D horizontal accuracy in the order of 30 cm, 20 cm and 10 cm for GPS, Galileo, and dual-constellation, respectively.
Summary and Conclusions
Among the current GNSSs, the European Galileo system is characterized by the best broadcast ephemerides, with a typical SISRE of one to two decimeters. This accuracy suggests the possibility of performing precise point positioning (PPP) without the need for precise orbit and clock Fig. 8 Accuracy (in terms of 3D RMS position error) for the different tests in kinematic mode, divided by station. The three horizontal lines of the colored rectangles represent the first, second and third quartile of the distribution. The vertical lines extend to the maximum and minimum value. Outliers are not plotted products or additional real-time corrections. Our study assesses two functional models as strategies for performing PPP with broadcast ephemerides in Kalman-filterbased algorithms. The first approach lumps unmodeled orbit and clock errors within the float ambiguities states, adding proper process noise to allow for the time variation of these errors. On the other hand, the second model introduces a dedicated SISRE parameter in the observation equations and filter states. Here, process noise is applied to the SISRE state instead of the float ambiguities.
Compensation of SISRE in the two models allows for a reduction in positioning errors by 40-70% when working with broadcast ephemerides compared to established PPP algorithms. In general, the improvement is most pronounced for constellations characterized by larger SISRE values and for positioning performed in kinematic conditions. Among the two methods for SISRE compensation considered here, the use of process noise in ambiguity states allows for a particularly simple implementation. On the other hand, the explicit incorporation of SISRE states in the estimation vector shows slightly better performance and robustness.
Overall, positioning errors at the few-decimeter level could be achieved in kinematic PPP solutions with broadcast ephemerides in our study. As expected from the very low SISRE values of Galileo broadcast ephemerides, the use of Galileo offers the best performance with horizontal errors of down to 0.2 m and 3D position errors of 0.3-0.4 m. Roughly two times larger errors were obtained in GPS-only processing of a globally distributed IGS monitoring station set. Dual-constellation solutions achieve similar accuracy as Galileo-only solutions but offer increased robustness due to the larger number of tracked satellites.
The tests demonstrate that applying broadcast ephemerides to a PPP model is a viable approach for positioning with dual-frequency code and phase observations. While not competitive with established PPP concepts based on precise ephemerides or real-time correction data, it can still offer a ten-times accuracy improvement over code-based single-point positioning (SPP). This makes PPP with broadcast ephemerides an interesting alternative for applications aiming at sub-meter accuracies such as personal navigation or traffic management. Similar to SPP, PPP with broadcast ephemerides can be performed in real-time with exclusive use of data transmitted by the GNSS satellites themselves and does not require access to external correction services. | 7,637 | 2021-03-27T00:00:00.000 | [
"Physics"
] |
Junction Temperature Optical Sensing Techniques for Power Switching Semiconductors: A Review
Recent advancements in power electronic switches provide effective control and operational stability of power grid systems. Junction temperature is a crucial parameter of power-switching semiconductor devices, which needs monitoring to facilitate reliable operation and thermal control of power electronics circuits and ensure reliable performance. Over the years, various junction temperature measurement techniques have been developed, engaging both non-optical and optical-based methods, highlighting their advancements and challenges. This review focuses on several optical sensing-based junction temperature measuring techniques used for power-switching devices such as metal-oxide-semiconductor field-effect transistors (MOSFETs) and insulated-gate bipolar transistors (IGBTs). A comprehensive summary of recent developments in infrared camera (IRC), thermal sensitive optical parameter (TSOP), and fiber Bragg grating (FBG) temperature sensing techniques is provided, shedding light on their merits and challenges while providing a few possible future solutions. In addition, calibration methods and remedies for obtaining accurate measurements are discussed, thus providing better insight and directions for future research.
Introduction
Power switching semiconductors are indispensable elements in inverters and converters used in power grids/systems, automobiles, data centers, and renewable energy for reliability towards a more intelligent control system. While conventional switching devices applications are limited due to low switching speed and massive size, power switching semiconductors exhibit fast switching that can meet the load requirements and operating frequency of today's technology [1,2]. Nowadays, over 1000 gigawatts of renewable energy incorporated into the power grids is controlled by power-switching semiconductors [3]. Additionally, power electronic converters and switches, which contain semiconductor devices, are utilized to regulate almost 60% of the supplied electrical energy consumed in industrialized countries [4]. Nevertheless, during power and thermal cycling, one of the most common failures is the wear-out caused by thermal stress on these power-switching semiconductor devices due to variations in their junction temperature [5][6][7][8]. Hence, realtime temperature sensing of circuits, including these devices, is of paramount importance. Recently, composite power switching devices such as IGBT-and Silicon Carbide (SiC)based MOSFETs have gathered attention due to their improved performance characteristics The electrical-based technique uses electrical devices or electrical parameters for temperature measurement. Typical electrical devices include thermal-sensitive electrical devices (TSED) that employ additional electronic components such as resistors, diodes, and externally designed electrical circuits for measurement. Although this method provides excellent spatial resolution, it requires high costs and adds to the system's complexity [10]. The temperature-sensitive electrical parameters (TSEP), such as the gate threshold voltage, saturation current, and short circuit current, are also suitable for online junction temperature sensing, but incur power loss to the system and thus are not suitable for measurement when the device is in operation. Another disadvantage of TSEP is that the device's temperature distribution cannot be obtained since this measurement provides a point temperature value of the chip [12,13].
On the other hand, physical techniques include a thermistor and thermocouple (TC) that measure temperature differences, which are external to the system. Their techniques are simple to implement with excellent spatial resolution; however, the slow response in the measurement, especially in high-frequency circuits, remains the constraint for its deployment. In addition, this approach is practically difficult since the temperature The electrical-based technique uses electrical devices or electrical parameters for temperature measurement. Typical electrical devices include thermal-sensitive electrical devices (TSED) that employ additional electronic components such as resistors, diodes, and externally designed electrical circuits for measurement. Although this method provides excellent spatial resolution, it requires high costs and adds to the system's complexity [10]. The temperature-sensitive electrical parameters (TSEP), such as the gate threshold voltage, saturation current, and short circuit current, are also suitable for online junction temperature sensing, but incur power loss to the system and thus are not suitable for measurement when the device is in operation. Another disadvantage of TSEP is that the device's temperature distribution cannot be obtained since this measurement provides a point temperature value of the chip [12,13].
On the other hand, physical techniques include a thermistor and thermocouple (TC) that measure temperature differences, which are external to the system. Their techniques are simple to implement with excellent spatial resolution; however, the slow response in the measurement, especially in high-frequency circuits, remains the constraint for its deployment. In addition, this approach is practically difficult since the temperature measurement requires direct probe contact with the semiconductor device; thus, disassembling power circuits is unavoidable [14,15].
Recently, optical-based sensing (OBS) techniques have taken center stage as a viable non-invasive electromagnetic interference (EMI) immune junction temperature sensing technology, as highlighted in Table 1, and have been implemented for thermal monitoring in power grid systems and industrial plants operation [16]. OBS techniques include IRC, TSOP, and FBG approaches for junction temperature measurements. Infrared imaging using IRC is the early optical-based technique for capturing surface temperature distribution. In addition, IRC is still serving as a secondary measuring tool in most applications where other techniques are used for validation, thanks to its ability to quickly map the temperature distribution of a target surface from a distance [17]. The discovery of luminescence characteristics of semiconductors in forward bias in the 1990s was the primary drive behind the exploitation of TSOP for semiconductor switching devices. This method involves setting this device in an operation region where photons are emitted based on the magnitude of junction temperature and current [18]. Recently, the advent of optical fiber sensing revolutionized thermal monitoring techniques in aerospace and power transmission systems applications. They exhibit less weight and space, with a thickness of a few tenths of a micrometer, and as such can be easily embedded in power electronic circuits. State-of-the-art FBG is an optical fiber with an inscribed grating at a particular Bragg wavelength, which reflects light at this designated wavelength. A change in temperature over the grating region, typically associated with power switching semiconductor devices, alters the reflected Bragg wavelength utilized as a monitoring parameter to characterize the thermal behavior of the circuits [19,20]. Unlike the previous reviews that discussed in-depth electrical-based techniques [10,21], this review exclusively concentrates on OBS techniques and comprehensively discusses the three approaches viz. IRC, TSOP, and FBG. In particular, their underlined principles, recent advances, and comparison are presented. Moreover, this work also provides calibration and measurement guidelines for each of the OBS techniques, and finally, some possible ways to navigate through the open research opportunities that are identified to substantiate their practical implementation in the industry. This work is organized as follows: Section 2 discusses the structure and factors influencing junction temperature in power-switching semiconductor devices. Section 3 highlights different approaches to the OBS techniques based on IRC, TSOP, and FBG. Section 4 explicitly discusses the calibration and measurement setup for each of these approaches. Finally, Section 5 discusses some possible future developments and the implementation of FBG for commercial power electronic applications.
Power Semiconductor Devices
This section discusses the basic operational features, internal structure, and thermal behavior of power switching semiconductor devices. Typical semiconductor devices in power electronics include; thyristors, Silicon (Si)-controlled rectifiers (SCR), IGBT, and SiC MOS-FET. While Si MOSFET operates at a high frequency and a low power range, Si IGBT is used in high power and low power to moderate frequency applications [22][23][24]. Recently, composite SiC MOSFET has been shown to exhibit low switching loss compared to Si MOSFET, combining the features and benefits of both Si IGBT and Si MOSFET, thus strengthening the potential of SiC MOSFET for high-frequency and high-power applications [25][26][27][28]. A comparison of all three popular semiconductor devices in terms of operating power and frequency of operation is shown in Figure 2. Moreover, the fact that Si IGBTs and SiC MOSFETs are the most frequently used power-switching semiconductor devices in several applications such as data centers, automotive systems, and power grids/systems, this review exclusively concentrates on both of these devices while illustrating their schematic diagram in Figure 3.
Power Semiconductor Devices
This section discusses the basic operational features, internal structure, and thermal behavior of power switching semiconductor devices. Typical semiconductor devices in power electronics include; thyristors, Silicon (Si)-controlled rectifiers (SCR), IGBT, and SiC MOSFET. While Si MOSFET operates at a high frequency and a low power range, Si IGBT is used in high power and low power to moderate frequency applications [22][23][24]. Recently, composite SiC MOSFET has been shown to exhibit low switching loss compared to Si MOSFET, combining the features and benefits of both Si IGBT and Si MOSFET, thus strengthening the potential of SiC MOSFET for high-frequency and high-power applications [25][26][27][28]. A comparison of all three popular semiconductor devices in terms of operating power and frequency of operation is shown in Figure 2. Moreover, the fact that Si IGBTs and SiC MOSFETs are the most frequently used power-switching semiconductor devices in several applications such as data centers, automotive systems, and power grids/systems, this review exclusively concentrates on both of these devices while illustrating their schematic diagram in Figure 3. One of the fundamental lifecycle evaluation factors of power switching semiconductor devices is their junction temperature and its fluctuation, since this affects their lifetime,
Power Semiconductor Devices
This section discusses the basic operational features, internal structure, and thermal behavior of power switching semiconductor devices. Typical semiconductor devices in power electronics include; thyristors, Silicon (Si)-controlled rectifiers (SCR), IGBT, and SiC MOSFET. While Si MOSFET operates at a high frequency and a low power range, Si IGBT is used in high power and low power to moderate frequency applications [22][23][24]. Recently, composite SiC MOSFET has been shown to exhibit low switching loss compared to Si MOSFET, combining the features and benefits of both Si IGBT and Si MOSFET, thus strengthening the potential of SiC MOSFET for high-frequency and high-power applications [25][26][27][28]. A comparison of all three popular semiconductor devices in terms of operating power and frequency of operation is shown in Figure 2. Moreover, the fact that Si IGBTs and SiC MOSFETs are the most frequently used power-switching semiconductor devices in several applications such as data centers, automotive systems, and power grids/systems, this review exclusively concentrates on both of these devices while illustrating their schematic diagram in Figure 3. One of the fundamental lifecycle evaluation factors of power switching semiconductor devices is their junction temperature and its fluctuation, since this affects their lifetime, One of the fundamental lifecycle evaluation factors of power switching semiconductor devices is their junction temperature and its fluctuation, since this affects their lifetime, which may cause device failure [29]. Junction temperature refers to the mean surface temperature on the SiC MOSFET chip or the absolute maximum temperature of the emitter metallization on the Si IGBT chip. It is influenced by several factors. For instance, in a multilayer IGBT module that handles a wide range of input supplies, any random input voltage fluctuation causes the module to repeatedly hold up the thermal cycle's shock for an extended period. Thus, junction temperature also fluctuates during this thermal cycle, giving rise to alternating thermal stress. Similarly, for SiC MOSFET, thermal stress influences the junction temperature variation due to high switching frequency [30]. In general, when there is a degradation in electron mobility, a further increase in power generation will also increase the junction temperature of these devices due to power dissipation. On the other hand, the aging of the solder layer can also contribute to an increase in thermal resistance, which in turn raises the junction temperature of the power switching semiconductor chips [31,32]. The internal structure of these semiconductor devices and the comparison are discussed in the subsequent paragraph.
Considering Figure 3a, the trench gate structure of Si IGBT that runs through the n + -emitter and p-base regions facilitates an increase in the channel density and eliminates the usual channel voltage drop inherent to junction MOSFETs. Moreover, the IGBT chip thickness is reduced by introducing an "n-fieldstop" layer that lowers the static and dynamic losses. Conversely, the conventional planar structure of SiC MOSFET, illustrated in Figure 3b, has the n + substrate region in contact with the drain electrode at the bottom of the device instead of the collector. In contrast to the structure of the IGBT, the emitter is replaced with the source electrode, while the gate electrode remains separated at the top by the interlayer insulator without a trench. Like Si IGBT, the channel of SiC MOSFET is located in the p region, between the n + source and the n-layer. Although both devices' structure is similar to the MOS-gated structure, there is no parasitic body diode in Si IGBT, and thus it requires an antiparallel Si p-i-n freewheeling diode for practical applications [33,34].
From the electrothermal behavior viewpoint, both devices are conducting to the top or bottom surface of the die when there is a current flow. This causes variation in the temperature distribution within the device, and thermal modeling of these devices under the same current and voltage rating has shown that the junction temperature and the temperature swing of IGBT are higher than that of SiC MOSFET, since the on-state resistance of the IGBT is independent of junction temperature [35]. Meanwhile, in case of short circuit failure, the junction temperature rises faster in SiC MOSFET than in IGBT, which results in a lower short circuit holding time. This is because the heat generation rate in SiC is three times higher than the conduction rate, when compared to Si IGBTs. Hence, during a short circuit, junction temperature will be dominated by the heat generation rate [36], suggesting that the magnitude of the junction temperature for both devices depends on their operation state. The structure of both devices is similar to traditional MOSFET, and since both are Si-based semiconductors, they are suitable for the TSOP junction temperature sensing approach, as discussed in the subsequent Section 3. Moreover, since FBG could be bonded on SiC and IGBT devices while IRC can detect temperature distribution on their respective surfaces, both allow measurement of device junction temperature, thus making the OBS technique an attractive technology.
Junction Temperature Optical Sensing Techniques
The two physical-based techniques, thermistor and thermocouple, shown in Figure 1, have wide temperature measurement ranges and are readily available on the market. However, they suffer from the severe constraint of the mechanical process involved, which includes setting up and disassembling or making dents through the device, to enable probe contact with the chip. Conversely, the electrical-based methods exhibit fast response time and directly indicate junction temperature. Despite the fact that extensive calibration is required for each power circuit during its setup phase, the junction temperature estimation provided by TSEP and TSED is an average measurement [37]. Unlike optical-based techniques that employ light signals for temperature estimation, it is worth noting that both physical-and electrical-based techniques operate on the electrical signal, which is prone to loss due to the self-heating of the measuring devices. As such, both are invasive to the measurement or require additional external circuitry for compensation, increasing the power circuit's complexity. The prominent features, advantages, and limitations of the three temperature sensing techniques are summarized in Table 1.
Despite several OBS techniques, such as Raman spectroscopy [38,39], liquid crystal thermography [40,41], and thermo-reflectance [42,43], being presented in the literature, only a few have been implemented for junction temperature sensing of power switching semiconductor devices. Although Raman spectroscopy and liquid crystal provide good resolution and are contactless to the targeted surface, they are impractical for large surface temperature measurements, since faster scanning is required [44]. Moreover, they are not commercially attractive due to the set-up complexity and cost of implementation. The three most popular OBS techniques commonly engaged for junction temperature measurement in power switching devices are the TSOP [45], IRC [46], and FBG [47], as highlighted in Figure 1. Unlike electrical and physical sensing techniques that are invasive to the system, OBS techniques are spatially separated from the sensing circuit and the device, since they operate on light signals. As such, they are immune to induced electrical noise and EMI from the surroundings. In the following, the underlying working principle, advancements in performance, and comparison of these three methods are discussed.
IRC Sensing Technique
Objects spontaneously emit radiation whose intensity and spectrum are temperaturedependent. Moreover, they also absorb, reflect, or stimulate emission to interact with the incident radiation [15]. The IRC sensing method exploits this naturally emitted infrared radiation from objects and hence does not require physical contact with the semiconductor chip surface [48]. It employs an infrared portion of the electromagnetic (EM) spectrum to sense the surface temperature through the emission of radiation. This technique can provide real-time temperature measurement, enabling the quick scanning and acquisition of stationary and fast-moving objects [49]. Generally, heat is emitted from the surface of the SiC/IGBT chip as infrared radiation and transformed into electrical signals via an infrared sensor. These signals are then mapped and displayed as a function of temperature in two-dimensional (2D) space for visualization purposes [37]. The main components of a typical IRC-based system are depicted in Figure 4. The IR detector located at the front end of the camera records the spectral emittance coming from the object (IGBT), which is then amplified and transformed from analog to digital data to generate a legible 2D thermal map. In some circumstances, postprocessing of the camera signal with signal processors is recommended for emissivity adjustment. temperature estimation provided by TSEP and TSED is an average measurement [37]. Unlike optical-based techniques that employ light signals for temperature estimation, it is worth noting that both physical-and electrical-based techniques operate on the electrical signal, which is prone to loss due to the self-heating of the measuring devices. As such, both are invasive to the measurement or require additional external circuitry for compensation, increasing the power circuit's complexity. The prominent features, advantages, and limitations of the three temperature sensing techniques are summarized in Table 1.
Despite several OBS techniques, such as Raman spectroscopy [38,39], liquid crystal thermography [40,41], and thermo-reflectance [42,43], being presented in the literature, only a few have been implemented for junction temperature sensing of power switching semiconductor devices. Although Raman spectroscopy and liquid crystal provide good resolution and are contactless to the targeted surface, they are impractical for large surface temperature measurements, since faster scanning is required [44]. Moreover, they are not commercially attractive due to the set-up complexity and cost of implementation. The three most popular OBS techniques commonly engaged for junction temperature measurement in power switching devices are the TSOP [45], IRC [46], and FBG [47], as highlighted in Figure 1. Unlike electrical and physical sensing techniques that are invasive to the system, OBS techniques are spatially separated from the sensing circuit and the device, since they operate on light signals. As such, they are immune to induced electrical noise and EMI from the surroundings. In the following, the underlying working principle, advancements in performance, and comparison of these three methods are discussed.
IRC Sensing Technique
Objects spontaneously emit radiation whose intensity and spectrum are temperaturedependent. Moreover, they also absorb, reflect, or stimulate emission to interact with the incident radiation [15]. The IRC sensing method exploits this naturally emitted infrared radiation from objects and hence does not require physical contact with the semiconductor chip surface [48]. It employs an infrared portion of the electromagnetic (EM) spectrum to sense the surface temperature through the emission of radiation. This technique can provide real-time temperature measurement, enabling the quick scanning and acquisition of stationary and fast-moving objects [49]. Generally, heat is emitted from the surface of the SiC/IGBT chip as infrared radiation and transformed into electrical signals via an infrared sensor. These signals are then mapped and displayed as a function of temperature in two-dimensional (2D) space for visualization purposes [37]. The main components of a typical IRC-based system are depicted in Figure 4. The IR detector located at the front end of the camera records the spectral emittance coming from the object (IGBT), which is then amplified and transformed from analog to digital data to generate a legible 2D thermal map. In some circumstances, postprocessing of the camera signal with signal processors is recommended for emissivity adjustment. The 2D thermography mapping of the targeted surface temperature provided by IRC allows quick detection of hot spots during the measurement. However, the approach requires a clear line of sight between the camera and the target surface. Since power switching semiconductor devices are usually embodied with ceramics or plastic, a direct junction temperature measurement with IRC is challenging. Moreover, embedding IRC within the chip is not possible, owing to its substantial size and weight. Hence, in reported experimental works, the outer cover of the chip [51], or the die encapsulation [52,53], is usually removed to enable IRC to map the thermal distribution of semiconductor devices situated on a circuit board. In addition, a clear and fixed path is also established between the chip and IRC by fixing their position to obtain accurate measurements.
The infrared region of the EM spectrum spans up to 100 µm, but the range for temperature sensing in IRC is limited to 0.7-20 µm due to the reduced sensitivity of the IR camera's photosensitive material above 20 µm [54]. As expressed in Equation (1), the emitted radiation striking the IRC is the function of the target material temperature, the atmosphere, and the radiant energy. Apart from the wavelength constraints mentioned earlier, the IRC approach also suffers from an exponential increase in noise with the rise in ambient temperature [55,56] and may also be affected by the variation in the measuring distance and angle [57][58][59].
The total radiation W total captured by the IRC, considering the emission from the atmosphere and surroundings, in addition to the emission from the object, as shown in Figure 5, can be expressed as: where E obj , E atm , and E re f l are the emissions from the target surface, and reflections from the atmosphere and the surroundings, respectively. Moreover, these emissions can further be expressed as: where ε obj is the emissivity of the target surface (i.e., the object), T obj , T atm , and T re f l , are temperatures of the object, atmosphere, and reflection, respectively, and τ atm is the transmittance of the atmosphere. σ atm is Stefan-Boltzmann's constant, which is given as 5.670 × 10 −8 W/m 2 /k 4 . Emissivity, which refers to the ability of an object to emit thermal energy [60][61][62], dramatically affects the measurement accuracy of the IRC sensing technique [43]. For the IRC-based system to obtain accurate thermal measurement, emissivity of the targeted object must be uniform. For instance, commercial IGBT chips are usually coated with silver solder layers by manufacturers, which significantly decrease its surface emissivity. As such, the surface temperature of the IGBT chip could not be correctly measured by the IRC as reported in [63]. Nevertheless, we can characterize the surface emissivity considering the blackbody and infrared radiation discussed in [64] as: where R T1 and R T2 are the infrared emission levels at known temperatures, T1 and T2, and R b1 and R b2 are the equivalent black body emission levels. The advancements in this IRC optical sensing are summarized in Table 2.
To increase the surface emissivity, Baker et al. [64] employed filtered paint and microspraying equipment to spray the target surface to emphasize further the requirements for improved surface emissivity for reliable junction temperature measurement of an IGBT chip. They ensured that the particle size and paint thickness (<100 µm) were uniform throughout the surface, which they showed was necessary to achieve homogenous surface emissivity. Another approach for fixing the emissivity is to use signal processors to directly postprocess the output signal of an infrared camera, as shown in Figure 4. Although the approach enhances the complexity of IRC, it has the benefit that it can map the temperature of any semiconductor chip, independent of its shape or composition [50]. To increase the surface emissivity, Baker et al. [64] employed filtered paint and microspraying equipment to spray the target surface to emphasize further the requirements for improved surface emissivity for reliable junction temperature measurement of an IGBT chip. They ensured that the particle size and paint thickness (<100 µm) were uniform throughout the surface, which they showed was necessary to achieve homogenous surface emissivity. Another approach for fixing the emissivity is to use signal processors to directly postprocess the output signal of an infrared camera, as shown in Figure 4. Although the approach enhances the complexity of IRC, it has the benefit that it can map the temperature of any semiconductor chip, independent of its shape or composition [50].
As indicated in Table 2, it was observed that the experimental results obtained in [65] and [66] have a temperature error of ±0.12 K at a uniform emissivity compared to cases where emissivity falls below 1, such as ref. [67]. In this case, a temperature difference of ±1.25 K at an emissivity factor of 0.735 has been reported within a temperature range of 100 to 160 °C. This suggests that the effect of emissivity cannot be overlooked. Furthermore, frame rate and spatial resolution are also essential factors in evaluating the performance of IRC. The frame rate is the speed at which the camera updates the temperature readings to be displayed on the screen. An IRC with a high frame rate is desirable to capture rapid temperature changes. Due to the limited refresh rate of 9 Hz exhibited by the IRC used by Cheng et al. [68], the junction temperature variation could not be captured when the applied pulse width modulation (PWM) was raised above 10 Hz at the gate of IGBT. An IRC of sampling rate above 100 Hz, employed in [69,71], reported capturing transient temperature across the junction with an accuracy of ±3 °C in the temperature range As indicated in Table 2, it was observed that the experimental results obtained in [65,66] have a temperature error of ±0.12 K at a uniform emissivity compared to cases where emissivity falls below 1, such as ref. [67]. In this case, a temperature difference of ±1.25 K at an emissivity factor of 0.735 has been reported within a temperature range of 100 to 160 • C. This suggests that the effect of emissivity cannot be overlooked. Furthermore, frame rate and spatial resolution are also essential factors in evaluating the performance of IRC. The frame rate is the speed at which the camera updates the temperature readings to be displayed on the screen. An IRC with a high frame rate is desirable to capture rapid temperature changes. Due to the limited refresh rate of 9 Hz exhibited by the IRC used by Cheng et al. [68], the junction temperature variation could not be captured when the applied pulse width modulation (PWM) was raised above 10 Hz at the gate of IGBT. An IRC of sampling rate above 100 Hz, employed in [69,71], reported capturing transient temperature across the junction with an accuracy of ±3 • C in the temperature range of 70 to 140 • C. The existing costly IRCs available in the market exhibit frame rates up to 200 Hz with better scanning speeds. Nevertheless, the accuracy and transient temperature measurement capability could be improved. In this case, the IRC images are sampled once a steady thermal state is reached. Some of the established issues with IRC include surface reflectance and uncertainty in local temperature. The probable reason for the significant change in the emissivity was presented as the transparency of the molded lens and reflection from other components of the targeted object [37]. This issue could be mitigated by removing objects around the setup that may likely cause reflection; otherwise, the surface could be painted black.
The uncertainties of local temperature due to variation in surface emissivity are due to changes in radiative recombination properties of semiconductor layers, bonding elements, and coating having different transparency or reflectance properties to infrared radiation, making it difficult to determine the surface emissivity accurately. These issues have been mitigated in the literature with high emissivity coatings [50,76]. Another alternative to obtain an accurate measurement of emitted radiation from a target is to place micro carbon particles near the targeted surface, eliminating the need for coating [77]. It should be noted that IRC may provide nonuniform temperature estimation if the thickness of the coating varies across the surface.
Other factors to consider for the accuracy of the IRC sensing technique, including the spatial resolution and image pixels, which defines the ability to provide details of temperature distribution on the captured surface, which relies on the pixels of the camera's detector and its field of view (FOV) specification with respect to the area the camera sees at any given moment [53]. The spatial resolution of the IR image could be defined with an adapted lens to permit pixels of various sizes and FOV, which was utilized in [75] to improve the accuracy of the IR camera to about ±2 • C. Also, the influence of system noise and the error during detection could be compensated with auto-calibration and image-processing algorithms embedded in commercially available IR cameras.
The summary of benefits and challenges provided by this IRC OBS technique is shown in Table 3. As highlighted before, IRC is attractive to evaluate power electronics circuits' thermal validation, reliability, and temperature performance at a designated distance only when there is a clear path to the targeted object. Table 3. Summary of key benefits and underlying challenges of IRC sensing technique.
Key Benefits Challenges
• Temperature changes can be easily sensed at some distance.
• Junction temperature sensing requires the removal of the power semiconductor package for radiation detection.
• Suitable for offline thermal extraction of power semiconductor devices.
• To obtain an accurate measurement, even emissivity of the surface is required across the region.
• Modern IRC sensors have high spatial resolution and, thus, more accurate results.
• Even though thermal imaging can be detected at a distance, it requires clear sight of the object for accurate results.
• IRC renders attractive temperature mapping with temperature range bar.
• Thermal IRC sensors are external to the system and difficult to embed in the power switches.
TSOP Sensing Technique
The TSOP sensing technique uses the electroluminescence (EL) phenomenon to measure the junction temperature of Si power switching devices. As shown in Figure 6a, the radiation emitted (blue-like visible light around SiC chip) as a result of external stimulation, such as an electric field or photon excitation, is known as luminescence [78,79], and EL, in particular, is the emission that occurs as a result of excitation through the recombination of electrons and holes across a p-n junction, or in other words, via bias voltage [80]. The peak energy of luminescence depends on the temperature and the spatial resolution of the EL [15], while the spatial resolution is dependent on the area of the p-n junction that is producing the light signal [79,[81][82][83].
The EL process includes simultaneous current and temperature effects. A controlled source is used to generate current in order to control the conductivity of the body diode of the SiC MOSFET. The light emitted from the SiC is then exported to an optical grating spectrometer for spectral analysis via a quartz optical cable fixed to the MOSFET chip. The heat controller positioned beneath the SiC accounts for the junction temperature difference at various forward current settings to decouple the effect of the temperature and forward current on the emitted photons. The spectral features exhibited in this process can be adapted to estimate the junction temperature of other semiconductor devices such as LED and Si MOSFET [84,85]. Figure 6b shows a generic schematic of the TSOP technique for junction temperature measurement. The optical path is typically a low-loss optical fiber sensor for transmitting EL to the spectrometer for spectral analysis. The fiber sensor tip is often fixed onto the Si chip die for EL extraction via optical power coupling to the fiber. and EL, in particular, is the emission that occurs as a result of excitation through the recombination of electrons and holes across a p-n junction, or in other words, via bias voltage [80]. The peak energy of luminescence depends on the temperature and the spatial resolution of the EL [15], while the spatial resolution is dependent on the area of the p-n junction that is producing the light signal [79,[81][82][83].
(a) (b) The EL process includes simultaneous current and temperature effects. A controlled source is used to generate current in order to control the conductivity of the body diode of the SiC MOSFET. The light emitted from the SiC is then exported to an optical grating spectrometer for spectral analysis via a quartz optical cable fixed to the MOSFET chip. The heat controller positioned beneath the SiC accounts for the junction temperature difference at various forward current settings to decouple the effect of the temperature and forward current on the emitted photons. The spectral features exhibited in this process can be adapted to estimate the junction temperature of other semiconductor devices such as LED and Si MOSFET [84,85]. Figure 6b shows a generic schematic of the TSOP technique for junction temperature measurement. The optical path is typically a low-loss optical fiber sensor for transmitting EL to the spectrometer for spectral analysis. The fiber sensor tip is often fixed onto the Si chip die for EL extraction via optical power coupling to the fiber.
Generally, every forward-biased p-n junction semiconductor device can emit light, and this luminescence will be strong for direct bandgap semiconductors. Unfortunately, power switching devices, such as SiC MOSFETs and Si IGBTs, are indirect bandgap p-n junctions; thus, the radiative recombination process is very weak [80,86]. However, they can act as parasitic light-emitting diodes while they are forward-biased, thus emitting weak EL, which makes it possible to detect and hence it can be utilized as a viable sensing parameter for Si/SiC power switching devices. Moreover, for EL to be strong, SiC MOSFET and Si IGBTs are operated in the third quadrant window where the forward-biased condition can be achieved [87]. The measurement of photoemission bandgap carried out in [88] has shown that 4H SiC, the most-used SiC polytype in power electronics due to its thermal and mechanical properties, has a junction emission in the ultraviolet (UV) spectrum with a −3.62 eV energy band.
For SiC MOSFET, the body diode only acts as a parasitic light-emitting diode during passive third quadrant (forward bias) operation [89], in other words, when the MOSFET is in the OFF state (i.e., the gate-source voltage Vgs is nearly zero, and the current flow is in the opposite direction to the device). In the case of Si IGBT, the p-n junction near the collector is forward-biased, and a collector current Ic flows into the device when it is in ON state. As a result, its p-n junction acts as a parasitic light-emitting diode in the first quadrant operation. To establish third quadrant operation, antiparallel freewheeling diodes are usually included with IGBTs chips. Since the light emission from power Generally, every forward-biased p-n junction semiconductor device can emit light, and this luminescence will be strong for direct bandgap semiconductors. Unfortunately, power switching devices, such as SiC MOSFETs and Si IGBTs, are indirect bandgap p-n junctions; thus, the radiative recombination process is very weak [80,86]. However, they can act as parasitic light-emitting diodes while they are forward-biased, thus emitting weak EL, which makes it possible to detect and hence it can be utilized as a viable sensing parameter for Si/SiC power switching devices. Moreover, for EL to be strong, SiC MOSFET and Si IGBTs are operated in the third quadrant window where the forward-biased condition can be achieved [87]. The measurement of photoemission bandgap carried out in [88] has shown that 4H SiC, the most-used SiC polytype in power electronics due to its thermal and mechanical properties, has a junction emission in the ultraviolet (UV) spectrum with a −3.62 eV energy band.
For SiC MOSFET, the body diode only acts as a parasitic light-emitting diode during passive third quadrant (forward bias) operation [89], in other words, when the MOSFET is in the OFF state (i.e., the gate-source voltage V gs is nearly zero, and the current flow is in the opposite direction to the device). In the case of Si IGBT, the p-n junction near the collector is forward-biased, and a collector current I c flows into the device when it is in ON state. As a result, its p-n junction acts as a parasitic light-emitting diode in the first quadrant operation. To establish third quadrant operation, antiparallel freewheeling diodes are usually included with IGBTs chips. Since the light emission from power semiconductors is obtained during the passive third quadrant operation, the measurement period is limited to the dead time in power electronic applications [90,91].
The spectral sensitivities of SiC MOSFET's EL are influenced by the gate bias voltage and bias temperature instability (BTI). Both alter the effective electrical field across the oxide and cause changes in the current that flows through the body diode, which, in turn, impacts EL extraction [92][93][94]. To mitigate this, Lukas et al. [95] proposed a post-processing method of minimizing SiC's sensitivity to gate bias voltage using the estimated intensity ratio of both spectral peaks (UV and blue-green peaks). Experimental work in [96][97][98] has established that two notable spectrum peaks are significant in a typical SiC EL spectrum. As shown in Figure 7a, the UV peak centered at~390 nm due to band-to-band recombination, and the blue-green peak centered at~500 nm owing to the recombination of deep Boron states with the conduction band and acceptor states caused by doping elements and lattice impurities [99][100][101]. Since the remaining energy in the recombination process is released in the form of a photon, the relationship between the emitted light peak wavelength λ (nm) and photon energy E (eV) is given as: Since the energy bandgap of semiconductors is temperature-dependent, the spectral power distribution properties of Si and SiC, and other features such as the peak wavelength and spectral bandwidth, can be used to characterize junction temperature variations [102,103]. The experimental work in [104] investigated the EL of SiC MOSFET, and developed an electrothermal-optic model that showed the relationship between the EL intensity, junction temperature, and forward current, which is given in the form of total light intensity I EL as: where a o and b o are the coefficients due to the effect of junction temperature, i is the forward current, k 1 and k 2 are the constant coefficients, and ∆T is the change in the junction temperature of the device. Thus, at a given forward current condition, light intensity varies with respect to junction temperature. Moreover, the device output voltage V 0 is not only a function of V T0 (initial junction voltage) but also the temperature change, and is expressed as: where T 0 and T j are the given temperature at V T0 and the junction temperature, respectively. Thus, the expressions for I EL and V 0 in Equations (7) and (8), show that junction temperature varies linearly with an increase in light intensity extracted from the device, corresponding to the spectral waveform as shown in Figure 7a [105]. A typical implementation of TSOP employs a low loss nonlinear fiber optic sensor for transmission and, in most cases, has the sensor tip fixed on the Si chip die for EL extraction via optical power coupling to the fiber, as illustrated in Figure 7b. The visible light emission around the decaped SiC chip during the conduction of the body diode indicates the presence of inherent electroluminescence in the SiC body diode. Also, the variation in the output voltage of the photosensitive sampling circuit corresponds linearly to the rise in the chip's junction temperature, as expressed in Equation (8). However, the observed EL emission is weak, since the radiative recombination in SiC is low due to the dominant non-radiative combination. The approach in [106] utilized the TSOP technique for measuring the junction temperature of two paralleled SiC MOSFETs. Here, the extraction of light emission from individual chips was carried out independently. The two junction temperatures, Tj1 and Tj2, were obtained from the integrated intensities of the sub-peak areas. This ensures accurate The approach in [106] utilized the TSOP technique for measuring the junction temperature of two paralleled SiC MOSFETs. Here, the extraction of light emission from individual chips was carried out independently. The two junction temperatures, T j1 and T j2, were obtained from the integrated intensities of the sub-peak areas. This ensures accurate estimation via optical fibers connected to the optical spectrometer, since the module's current is not evenly distributed. Thus, the light transferred by each fiber to the sensing circuitry depends on the individual temperature and current. Table 4 summarizes the recent advancements in the TSOP technique from the literature. Another approach is using EL spectral features to extract the junction temperature and current at once, as shown in Figure 8. This was achieved with multiple optical fibers and TSOP sensors exhibiting different wavelength sensitivities, and was further processed by an artificial intelligence system [83,96]. The advantage of this method is that the junction temperature and device current could be estimated individually from the paralleled devices. However, the time instant of extraction could not be selected simultaneously to light emission, since all SiC MOSFETs of the power switching circuit do not emit light simultaneously. EL extraction from SiC MOSFET has also been implemented in high voltage applications, such as traction inverters, using a Si photomultiplier [87]. A repetitive 50 ms pulse current was applied to obtain emission, which was detected by a fiber-coupled p-i-n photodiode. The spectrum exhibits a significant characteristic peak around 500 nm, while the intensity-current characteristics related to the temperature coefficient, obtained at −0.003 V/K, were utilized to estimate the variation in the junction temperature. Similarly, in [105], this method was used but at a temperature coefficient of −0.0046 V/K and a sensitivity of 3.2 mV/K between the temperature range of 30 to 150 • C was reported. Another approach is using EL spectral features to extract the junction temperature and current at once, as shown in Figure 8. This was achieved with multiple optical fibers and TSOP sensors exhibiting different wavelength sensitivities, and was further processed by an artificial intelligence system [83,96]. The advantage of this method is that the junction temperature and device current could be estimated individually from the paralleled devices. However, the time instant of extraction could not be selected simultaneously to light emission, since all SiC MOSFETs of the power switching circuit do not emit light simultaneously. EL extraction from SiC MOSFET has also been implemented in high voltage applications, such as traction inverters, using a Si photomultiplier [87]. A repetitive 50 ms pulse current was applied to obtain emission, which was detected by a fiber-coupled p-i-n photodiode. The spectrum exhibits a significant characteristic peak around 500 nm, while the intensity-current characteristics related to the temperature coefficient, obtained at −0.003 V/K, were utilized to estimate the variation in the junction temperature. Similarly, in [105], this method was used but at a temperature coefficient of −0.0046 V/K and a sensitivity of 3.2 mV/K between the temperature range of 30 to 150 °C was reported. Due to the weak EL signal in the TSOP sensing technique, an integrated operational amplifier was engaged to improve the signal-to-noise ratio (SNR); nevertheless, a slight deviation in measurement accuracy was observed as the junction temperature rose due to the self-heating effect. Moreover, the galvanic isolated sensing method for SiC MOSFET Due to the weak EL signal in the TSOP sensing technique, an integrated operational amplifier was engaged to improve the signal-to-noise ratio (SNR); nevertheless, a slight deviation in measurement accuracy was observed as the junction temperature rose due to the self-heating effect. Moreover, the galvanic isolated sensing method for SiC MOSFET was introduced in [108] based on the variation in light intensity. The extracted EL spectrum exhibited two characteristic peaks around~380 nm and~480 nm, while a similar approach implemented in [104] has~383 nm and~485 nm for the two peaks, as indicated in Table 4. The two peaks exhibited different temperature coefficients. The major peak had a negative temperature coefficient, while the minor one showed a positive peak. The emission was coupled to the spectrometer via a quartz fiber fixed on the side of the chip for measurement purposes. The former used two independent bandpass filters to extract the peak emissions, and the signal ratio was used to compensate for the EMI due to fast switching transients and optical transmission degradation within a temperature range of 30 to 150 • C. On the other hand, the latter work displayed a limited temperature range between 90 and 135 • C, and the sensitivity obtained was~1.53 mV/ • C at a mean error of ±5 • C; the optical measurement, in this case, still required a high bandwidth since the dead time was short.
The approach for extracting the forward current and junction temperature from the two peaks simultaneously was implemented by integrating inherent electrical isolation with the EL technique [107]. After establishing a correlation between the current and temperature through an analytical model, a negative gate voltage of −5 V was applied to the gate of SiC MOSFET to operate in the third quadrant. The spectral shifts of the two peaks at 510 nm and 390 nm were used to determine junction temperature at a mean error of ±3 • C within a temperature range of 50 to 130 • C. A similar approach was also implemented in [96] with similar major and minor spectral peaks, but an improved sensing performance was reported with an error of ±1.2 • C in a temperature range of 10 • to 90 • C and using multiple sensors for photodetection. However, this approach employed a highresolution spectrometer for measurement, since the peak wavelength shifts noted were only a few nanometers. In addition, optical measurement with a high gain was required since the emission from the body diode of SiC is not very efficient due to its indirect bandgap.
Light-sensing circuits such as a photodetector typically detect the spectral characteristics of light emission through EL. In the case of multiple parallel devices, multiple detectors of different wavelength bands may be implemented for spectral sensitivity; otherwise, the guided light has to be filtered for proper detection [83]. Also, a major limitation of EL application in SiC MOSFET is that the chip is packaged in plastic and covered by metal on the top, which makes it challenging to obtain a direct measurement. When a direct current is applied to the body of diode, the light emitted around the chip cannot be fully extracted. Thus, the ratio of weighted spectral information obtained by the photodiode does not reflect the actual junction temperature. In addition, the cross-sensitivity of junction temperature and current is another challenge in SiC MOSFET TSOP sensing.
Although TSOP is not as sensitive as a thermal camera, they are still usable for online junction temperature measurement and have been implemented for SiC MOSFET in the literature. Other challenges related to the setup are discussed in Section 4 of this article, while the summary of this technique's key advantages and challenges is highlighted in Table 5. • Low-cost photodetection circuits can be employed for detection.
• High-resolution spectrometer is required for accurate thermal estimation.
• Noninvasive to the device operation.
• It may become complex, especially when multipoint sensing is required.
FBG Sensing Technique
The thermography method gives average temperature distribution over an area and thus is incapable of detecting maximum temperature for a targeted point [111]. FBG is a very recent optical technique explored in the literature for measuring the junction temperature of power switching semiconductor devices. The working principle of FBG is based on the wavelength shift that occurs because of the variation in thermal profiles over the grating portion of the fiber sensor, and the wavelength properties can be characterized to represent the variation in junction temperature in SiC MOSFET and Si IGBTs. In a typical FBG, the manufacturer engraves the Bragg gratings into a single-mode fiber. When the light signal from a broadband source is fed to the fiber, the light signal known as the Bragg wavelength is reflected, depending on the refractive index and the grating period [112] and is given as: where Λ is the grating pitch and n e f f is the effective refractive index of the single mode fiber. Whenever the external temperature around the grating portion of the fiber is varied, the thermo-optic effect alters the core refractive index [111], thus affecting the n e f f , which shifts the central wavelength of the reflected signal accordingly. The relationship between the temperature variation and the wavelength shift of the FBG sensor is given by: (10) where ∆λ is the wavelength shift, λ is the initial center wavelength of the FBG, a f is the thermal expansion coefficient of the fiber, ξ is the thermo-optic coefficient, and P e is the strain-optic coefficient. Hence, Equation (10) considers wavelength shift due to the temperature and strain; however, in the case of junction temperature measurement, only the effect of temperature is required. The external strain on the FBG can be eliminated by using a tube or rigid housing, which additionally protects the sensor from mechanical damage [47]. In this case, Equation (10), can then be rewritten as: A typical measurement flow chart of the FBG sensing technique is shown in Figure 9, The setup involves bonding the FBG sensor to the semiconductor chips, usually inside modules, with thermal oil or glue to improve the thermal contact between the chip's surface and the fiber sensor [29,113]. The fiber is then illuminated with a broadband light source (BBS) via an optical circulator, and the sampling rate is pre-selected depending on the interrogator type. The reflecting light signal wavelength shift can then be routed to the interrogator and examined for various pulse widths applied to the Si IGBT gate. Based on the predetermined sensitivity, the junction temperature due to conduction losses for long-duration pulses can be computed. However, the transient losses during switching and the peak power generated during ON-OFF switching losses may or may not be detected depending on the interrogator acquisition rate. Table 6 depicts the recent advancements in FBG sensors in the junction temperature sensing of power switching semiconductor devices. FBG sensors can be enclosed in a tube to improve the shear strength of the sensor; it should be calibrated to minimize measurement errors that may be introduced due to environmental factors. A temperature-wavelength relationship can then be obtained for the FBG sensor with the least square method (better fit with a linear equation of about 99.9% precision), and an accuracy of up to ± 1 • C was reported at a temperature below 40 • C in [111].
The effect of airgap and the interrogator sampling rate was studied in experimental work by Liu et al. [112]. A pulse of different widths and duty cycles was applied at the gate of the MOSFET while the oscilloscope measured the input direct current and voltage. The interrogator monitored the wavelength shift to capture the thermal pulses and the thermal sampling frequency was limited to 1 kHz; thus, the optical interrogator captured changes in wavelength every 1 ms. It was observed that the junction temperature increased and accumulated as the duty cycle increased from 2 up to 50% for the same pulse width of 300 s. However, when the pulses ended, a slight temperature rise was initially observed, after which the temperature dropped gradually due to the error introduced by the air gap between the grating portion of the sensor and the die, and the limited sampling rate of the employed interrogator.
face and the fiber sensor [29,113]. The fiber is then illuminated with a broadband light source (BBS) via an optical circulator, and the sampling rate is pre-selected depending on the interrogator type. The reflecting light signal wavelength shift can then be routed to the interrogator and examined for various pulse widths applied to the Si IGBT gate. Based on the predetermined sensitivity, the junction temperature due to conduction losses for longduration pulses can be computed. However, the transient losses during switching and the peak power generated during ON-OFF switching losses may or may not be detected depending on the interrogator acquisition rate. Table 6 depicts the recent advancements in FBG sensors in the junction temperature sensing of power switching semiconductor devices. FBG sensors can be enclosed in a tube to improve the shear strength of the sensor; it should be calibrated to minimize measurement errors that may be introduced due to environmental factors. A temperature-wavelength relationship can then be obtained for the FBG sensor with the least square method (better fit with a linear equation of about 99.9% precision), and an accuracy of up to ± 1 °C was reported at a temperature below 40 °C in [111]. The effect of airgap and the interrogator sampling rate was studied in experimental work by Liu et al. [112]. A pulse of different widths and duty cycles was applied at the gate of the MOSFET while the oscilloscope measured the input direct current and voltage. The interrogator monitored the wavelength shift to capture the thermal pulses and the thermal sampling frequency was limited to 1 kHz; thus, the optical interrogator captured changes in wavelength every 1 ms. It was observed that the junction temperature Chen et al. [68] proposed direct on-chip thermal measurement for Si IGBT to demonstrate the effect of the sensor bond interface with the chip surface and evaluated the performance of real-time junction temperature measurements. For the direct detection of the die temperature, the ceramic package of the IGBT module was removed and the FBG sensor was placed directly on the chip, as shown in Figure 10a. Measurements from two FBG sensors with different interfaces (air, and solid bond interfaces with IGBT chips) were then examined. The solid-interface setup has the FBG sensor bonded to the chip interface with a thermal paste, which had a thermal conductivity of 5.2 W/(mK), exhibiting an accuracy of 2 pm (~0.2 • C) compared to that of the air interface (without thermal paste), which exhibited an accuracy of 3 pm (~0.3 • C). The air gaps between the fiber sensor and the chip influenced the sensor response. Moreover, the air-interface response exhibited a much slower rise rate than the solid-interface case when both were subjected to fast temperature changes. However, in [115], a groove was cut on the baseplate along its axial centerline, beneath the IGBT chips, instead of on-chip placement, as shown in Figure 10b. This method allows the embedding of FBG sensors in the device module without interfering with the IGBT operation, although thermal network characterization of the module is required to obtain equivalent junction temperature. The experiment by Ren et al. [47] investigated the effect of packaging schemes on temperature sensitivity and transient performance. Three FBG sensors, a bare FBG, a metallic plate housing, and a tube housing were used for temperature sensing under various conditions. For FBG with metallic plate housing, the sensor was not segregated from the plate and the paste. As such, it has the drawback of allowing the strain on the plate to spread further to the FBG sensor, which causes errors in the measurement. Thus, the device was calibrated after inserting the sensor in the housing to take care of the effect of strain. From the experimental results, bare FBG with no packaging had a sensitivity of 10.2 pm/°C, while the tube and plate housings were recorded 10.4 pm/°C and 14.7 pm/°C, respectively. It was observed that there is a significant increase in the sensitivity of plate housing to about 1.45 times bare FBG. At 10 Hz, the plate housing type effectively detects the temperature ripples at different frequencies, having peak-peak ripples at 2.58 °C. Nevertheless, the tube housing could not capture the temperature ripples due to its slow response. Thus, it was established that the plate housing type is the preferred packaging method for chip transient temperature measurement as it captures intercycle temperature ripples at several modulation frequencies.
The influence and performance of FBG for different grating head lengths were tested in reference [114]. It was suggested that uneven heat distribution of the temperature captured by the FBG sensor might result from unsuitable head dimensions. To investigate the effect of head dimensions, three FBG sensors with different head lengths of 1 mm, 3 mm, and 5 mm were assessed in the experiments. To ensure consistency of comparison and assessment of length effects, the midpoints of all three FBG head lengths considered were kept at identical locations during the test. Also, the measurement for each of the fibers was taken independently. Compared with the simulation, the obtained temperature measurement for 1 mm FBG was approximately the same within a range of 45 °C. In contrast, the longer 3 mm FBG sensor variations were within 1.6 to 1.9 °C of the actual measurements, while the 5 mm FBG displayed around 5.2 to 6.0 °C lower than the results obtained for a The experiment by Ren et al. [47] investigated the effect of packaging schemes on temperature sensitivity and transient performance. Three FBG sensors, a bare FBG, a metallic plate housing, and a tube housing were used for temperature sensing under various conditions. For FBG with metallic plate housing, the sensor was not segregated from the plate and the paste. As such, it has the drawback of allowing the strain on the plate to spread further to the FBG sensor, which causes errors in the measurement. Thus, the device was calibrated after inserting the sensor in the housing to take care of the effect of strain. From the experimental results, bare FBG with no packaging had a sensitivity of 10.2 pm/ • C, while the tube and plate housings were recorded 10.4 pm/ • C and 14.7 pm/ • C, respectively. It was observed that there is a significant increase in the sensitivity of plate housing to about 1.45 times bare FBG. At 10 Hz, the plate housing type effectively detects the temperature ripples at different frequencies, having peak-peak ripples at 2.58 • C. Nevertheless, the tube housing could not capture the temperature ripples due to its slow response. Thus, it was established that the plate housing type is the preferred packaging method for chip transient temperature measurement as it captures intercycle temperature ripples at several modulation frequencies.
The influence and performance of FBG for different grating head lengths were tested in reference [114]. It was suggested that uneven heat distribution of the temperature captured by the FBG sensor might result from unsuitable head dimensions. To investigate the effect of head dimensions, three FBG sensors with different head lengths of 1 mm, 3 mm, and 5 mm were assessed in the experiments. To ensure consistency of comparison and assessment of length effects, the midpoints of all three FBG head lengths considered were kept at identical locations during the test. Also, the measurement for each of the fibers was taken independently. Compared with the simulation, the obtained temperature measurement for 1 mm FBG was approximately the same within a range of 45 • C. In contrast, the longer 3 mm FBG sensor variations were within 1.6 to 1.9 • C of the actual measurements, while the 5 mm FBG displayed around 5.2 to 6.0 • C lower than the results obtained for a 1 mm FBG head sensor. This is equivalent to a 16% relative deviation from the desired temperature values. As such, it is unacceptable in a situation where hotspot precision is required. As is evident from the results of the above experiments, FBG with short head lengths are preferred as they provide more accurate detection of localized hotspots. However, FBG with a short head length has the drawback of ensuring precise location and sensor placement. In contrast to short FBG sensors, longer FBGs can deliver accurate temperature readings in areas with less extreme thermal gradients and favor placement with simple installation.
The key benefits offered by FBG sensors, along with underlying challenges, are summarized in Table 7. So far, sampling rate, grating head length, and thermal conductivity are major factors influencing FBG accuracy, which depend on the manufacturer's specifications. Error due to sensor housing can be controlled with proper calibration. Smaller housing is necessary for compatibility, since FBG sensors will be incorporated into the power module. In addition, compact housing will allow quick response, which is also necessary for transient temperature measurement. The advantage offered by FBG over other optical-based techniques, if embedded adequately in MOSFET and IGBT modules, provides fast and accurate measurement at a low cost compared to the thermography technique counterpart. Table 7. Summary of key benefits and underlying challenges of FBG sensing technique.
Key Benefits Challenges
• FBG has high resolution and accuracy up to 0.1 • C.
• FBG must be placed closer to the wafer to obtain accurate results.
• FBG sensors have good stability and a large temperature measurement range.
• Accuracy is dependent on interrogator's resolution.
• FBG can be stuck to the device even when the chip is under its operation condition without disturbance.
• Thermal adhesive is vital to enhance FBG-IGBT interface for accurate measurement.
Summary
The three popular OBS techniques, IRC, TSOP, and FBG, discussed so far, have been extensively tested in the literature for SiC MOSFET and Si IGBT, and are reviewed in Tables 2, 4 and 6, respectively. They are noninvasive and offer accurate measurements if professionally installed and calibrated. Furthermore, it is worth noting that the FBG-based techniques require low attenuation fibers with negligible bending radius and insertion losses (i.e., connector losses) and hence should be considered during installation. A comparison of various characteristics of the three OBS techniques is shown in Table 8, suggesting that no single approach excels over others. Hence, the selection of the technique invariably depends on the application and the surrounding constraints.
Method of Calibration
This section discusses the materials and apparatus needed for calibration and the experimental setup of the three key OBS techniques for junction temperature measurement in power electronics applications.
IRC Sensing Technique
The calibration setup for thermal imaging consists of a lens with a predetermined working distance to focus the thermal radiation on the camera's detector [37]. In the literature, the reported distance is between 15 and 25 cm. An adjustable emissivity setting is required to calibrate the detection based on the orientation of the targeted object and ambient conditions for each setup.
As illustrated earlier in Figure 5, the infrared thermal imager should be positioned at a distance and to the front of the IGBT, as a direct line of sight is required; the focal length is firstly set, after which the camera is fixed with a gripper once the infrared thermal imager is correctly displayed on the screen [111]. For commercial IGBT, the dielectric gel on the chip surface may be removed and painted for uniform emissivity across the chip surface for accurate measurements. Also, the paint may be filtered to attain a uniform particle size whose thickness ranges from 5 to 16 µm, as suggested in [70], to improve the accuracy.
TSOP Sensing Technique
Proper calibration is essential for implementing TSOP methods, because the estimation of junction temperature by EL is based on intrinsic properties, which are sensitive to differences in fabrication and variations in electrical parameters that affect the operation of the MOSFETs in the third quadrant window [108]. For SiC modules mounted on a ceramic substrate and covered by a transparent silicone gel, the detector could be immersed in the silicone gel to enhance the optical coupling [105]. Otherwise, a photodetector may be mounted above the side wall of the device to measure the light emitted from the surface of the substrate. Wang et al. [19] placed the detector at about 2 mm over a SiC chip covered with transparent silicone gel. Considering that the light wavelength travels through the gel, a dark box shielded the ambient light from the setup desk to avoid disturbance from external noise.
Typically, SiC MOSFET is biased with a negative voltage within the range of −5 to −15 V to set the device in reverse conduction mode to achieve the third quadrant operation for emission. As illustrated earlier in Figure 7b, the SiC module is placed on a heat controller, which could be a digital hot plate designed to raise the diode's temperature to decouple the influence of forward current and the junction temperature on the emitted light. The required pulse current for the EL process could be generated from a DC source in a typical range of 2 to 50 A. Depending on the desired accuracy and budget, a Si p-i-n photodiode can be used to detect photons with or without external bias. The photodetector generates a current proportional to the emission, depending on the junction temperature of the SiC MOSFET. In addition, the spectrum emitted under different conditions can also be analyzed on a spectrometer for characterization purposes [106].
FBG Sensing Technique
For the FBG method, the temperature wavelength fitting curve is necessary to determine the sensor's sensitivity. Light can be fed into the fiber from a BBS to calibrate the sensor, while the reflected light from the sensor can be routed to the spectrum analyzer or an interrogator via an optical coupler, as shown in Figure 11. Next, a portion of the grating area is heated in an enclosed environment whose temperature can be precisely controlled to a predetermined value. Alternatively, a heating plate can be used together with a thermocouple to validate the temperature at the desired time. Next, to ensure the mechanical stress-free sensor, the FBG head could be inserted into a ceramic capillary attached to a stainless-steel plate using Kapton tape [114]. The initial temperature and central wavelength shift are first recorded, then the temperature is raised with a fixed step size, and at each value, the corresponding wavelength shift is recorded until the maximum value is reached. This procedure can be repeated several times to take an average value of the wavelength shift for the number of cycles and calculated at each temperature level. The data obtained can then be used to compute the temperature wavelength fitting curve using a linear regression (y = mx + c), where c is the initial Bragg wavelength at ambient temperature, x and y are the wavelength and temperature at each point, respectively, and m is the slope of the curve, which represents the sensitivity of the FBG sensor. The typical measurement from the literature has shown the slope of the linear fitting in the Kelvin scale to be 10.99 ±0.073 pm/K, with a mean error of ±0.5 K [111].
In [75,115], the central Bragg wavelengths of the FBG sensors used are 1537 and 1539.9 nm, respectively, exhibiting a sensitivity of ±0.2 nm/°C. After the calibration, the value of the junction temperature estimation can be obtained by measuring the wavelength shifts. For practical measurements, the fiber can be bonded to the chip with a thermal paste of high viscosity and low shrinkage [111]. This is necessary to ensure improved heat exchange between the sensing portion of the fiber and the IGBT chip surface. The thermal conductivity of the thermal paste and the temperature must be factored in during selection to reduce the effect of aging and thermal breakdown [50].
Distributed Temperature Sensing
The OBS techniques explored so far have been designed for a point or single-unit junction temperature measurement. A typical commercial power circuit contains two to tens of IGBTs or SiC MOSFETs to capture the semiconductor devices, where distributed temperature sensing (DTS) could be a viable solution for such applications. Among the three optical techniques discussed in Section 3, the FBG technique is the most suitable approach for DTS of power switching circuits due to its size, maintainability, and overall cost compared to IRC and TSOP. As depicted in Figure 12, an array of IGBTs or SiC MOSFETs can be monitored concurrently with several FBG sensors on a single optical fiber cable. The fiber is carefully laid so that the sensing portions of the fiber grating are situated on the device's chip. Each FBG sensor is uniquely identified based on its center The initial temperature and central wavelength shift are first recorded, then the temperature is raised with a fixed step size, and at each value, the corresponding wavelength shift is recorded until the maximum value is reached. This procedure can be repeated several times to take an average value of the wavelength shift for the number of cycles and calculated at each temperature level. The data obtained can then be used to compute the temperature wavelength fitting curve using a linear regression (y = mx + c), where c is the initial Bragg wavelength at ambient temperature, x and y are the wavelength and temperature at each point, respectively, and m is the slope of the curve, which represents the sensitivity of the FBG sensor. The typical measurement from the literature has shown the slope of the linear fitting in the Kelvin scale to be 10.99 ± 0.073 pm/K, with a mean error of ±0.5 K [111].
In [75,115], the central Bragg wavelengths of the FBG sensors used are 1537 and 1539.9 nm, respectively, exhibiting a sensitivity of ±0.2 nm/ • C. After the calibration, the value of the junction temperature estimation can be obtained by measuring the wavelength shifts. For practical measurements, the fiber can be bonded to the chip with a thermal paste of high viscosity and low shrinkage [111]. This is necessary to ensure improved heat exchange between the sensing portion of the fiber and the IGBT chip surface. The thermal conductivity of the thermal paste and the temperature must be factored in during selection to reduce the effect of aging and thermal breakdown [50].
Distributed Temperature Sensing
The OBS techniques explored so far have been designed for a point or single-unit junction temperature measurement. A typical commercial power circuit contains two to tens of IGBTs or SiC MOSFETs to capture the semiconductor devices, where distributed temperature sensing (DTS) could be a viable solution for such applications. Among the three optical techniques discussed in Section 3, the FBG technique is the most suitable approach for DTS of power switching circuits due to its size, maintainability, and overall cost compared to IRC and TSOP. As depicted in Figure 12, an array of IGBTs or SiC MOSFETs can be monitored concurrently with several FBG sensors on a single optical fiber cable. The fiber is carefully laid so that the sensing portions of the fiber grating are situated on the device's chip. Each FBG sensor is uniquely identified based on its center wavelength so that once the light is allowed to pass through the fiber, each FBG sensor reflects at a designated Bragg wavelength based on the junction temperature of the device. Moreover, several sections of such fiber could be combined at the fiber flange. The fiber flange will provide an interface for connecting multiple fibers to form a distributed system for commercial applications; this interface could be made passive such that the light is transmitted, and the signal processing is shifted to the central monitor through an optical link such as free space communication (FSO). Otherwise, a photodetection circuit could be embedded to handle the data processing within its locality. wavelength so that once the light is allowed to pass through the fiber, each FBG sensor reflects at a designated Bragg wavelength based on the junction temperature of the device. Moreover, several sections of such fiber could be combined at the fiber flange. The fiber flange will provide an interface for connecting multiple fibers to form a distributed system for commercial applications; this interface could be made passive such that the light is transmitted, and the signal processing is shifted to the central monitor through an optical link such as free space communication (FSO). Otherwise, a photodetection circuit could be embedded to handle the data processing within its locality.
Reduction in Response Delay
A slight delay in response is typical in IRC and FBG sensing techniques and is more prominent when the power module temperature falls. This is due to the equipment capacity limitation and the external system's effect on the measurements. For IRC, reflections from the surroundings and target distance could be the causes. To mitigate this issue, an IRC with an adjustable sampling rate and pixel resolution could be a viable solution. For the FBG, this issue may occur due to the minute air gap between the sensor portion and the targeted surface (sensor-chip gap) and the limited acquisition rate of the interrogator. An interrogator or spectrum analyzer with an auto-adjustable acquisition rate will facilitate a quick response. In addition, the sensor-chip interface could also be matched as close as possible to the specially treated thermal paste, which can be uniformly applied across the surface to avoid future heat loss and chirping failure.
Automated Calibration and Intelligent Operational Prognosis (ACIOP)
Among the temperature sensor technologies, the OBS technique and FBG approach have the notable benefit of exhibiting linear sensitivity. However, they require calibration, because the surrounding ambient temperature and sensitivity of the FBG sensors slightly differ from each other. Although auto-calibration for the IRC technique is now available on the market, this feature has not been exploited or implemented for FBG and TSOP approaches. Machine learning could be incorporated to fill this gap, using the current ambient temperature and pre-trained data to compute FBG sensitivity to automate the calibration process. This could be a promising way to eliminate manual calibration. ACIOP, in this context, could use a deep learning algorithm to calibrate and predict each unit's junction temperature based on designated features such as the magnitude of the load across the power module, frequency of operation, usage, and components' aging. The relationship established by the deep-learning model from the highlighted features could further be used to evaluate component reliability, life cycle, and usage under various operation states. To integrate these features into FBG techniques in the future, adequate data acquisition and data-driven models would be necessary for the model to provide an acceptable prediction under various conditions.
Reduction in Response Delay
A slight delay in response is typical in IRC and FBG sensing techniques and is more prominent when the power module temperature falls. This is due to the equipment capacity limitation and the external system's effect on the measurements. For IRC, reflections from the surroundings and target distance could be the causes. To mitigate this issue, an IRC with an adjustable sampling rate and pixel resolution could be a viable solution. For the FBG, this issue may occur due to the minute air gap between the sensor portion and the targeted surface (sensor-chip gap) and the limited acquisition rate of the interrogator. An interrogator or spectrum analyzer with an auto-adjustable acquisition rate will facilitate a quick response. In addition, the sensor-chip interface could also be matched as close as possible to the specially treated thermal paste, which can be uniformly applied across the surface to avoid future heat loss and chirping failure.
Automated Calibration and Intelligent Operational Prognosis (ACIOP)
Among the temperature sensor technologies, the OBS technique and FBG approach have the notable benefit of exhibiting linear sensitivity. However, they require calibration, because the surrounding ambient temperature and sensitivity of the FBG sensors slightly differ from each other. Although auto-calibration for the IRC technique is now available on the market, this feature has not been exploited or implemented for FBG and TSOP approaches. Machine learning could be incorporated to fill this gap, using the current ambient temperature and pre-trained data to compute FBG sensitivity to automate the calibration process. This could be a promising way to eliminate manual calibration. ACIOP, in this context, could use a deep learning algorithm to calibrate and predict each unit's junction temperature based on designated features such as the magnitude of the load across the power module, frequency of operation, usage, and components' aging. The relationship established by the deep-learning model from the highlighted features could further be used to evaluate component reliability, life cycle, and usage under various operation states. To integrate these features into FBG techniques in the future, adequate data acquisition and data-driven models would be necessary for the model to provide an acceptable prediction under various conditions.
Conclusions
A thorough comparative review of the state-of-the-art OBS techniques for junction temperature sensing in power switching semiconductors has been performed. It was established that IRC rendered a better 2D temperature mapping but could not be embedded in power electronic circuits. The TSOP technique, on the other hand, is simple to implement but only practically applicable for SiC MOSFET. The FBG technique exhibits high spatial resolution and compact size, which makes it attractive to be embedded in power electronic circuits. However, logical positioning and a suitable packaging method with a uniform shear strength are required to obtain accurate temperature measurements. With the rapid growth and deployment of optical fiber sensors for various applications, multiparameter and distributed temperature sensing should be considered to gain widespread use for commercial applications. So far, we have showcased the implementation of distributed temperature sensing and intercommunication for data acquisition to enable sensor integration to a central monitoring platform through a communication link. These areas needed more attention to facilitate the development of other necessary features, such as automatic calibration, to make FBG adaptable for other power electronic applications. Lastly, we highly suggest that researchers explore this domain in order to achieve industrial breakthroughs. Data Availability Statement: Not applicable, as no datasets were generated or analyzed for this research work.
Conflicts of Interest:
The authors declare no conflict of interest. | 18,078.2 | 2023-08-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
GPU-based deep convolutional neural network for tomographic phase microscopy with l 1 fitting and regularization
. Tomographic phase microscopy (TPM) is a unique imaging modality to measure the three-dimensional refractive index distribution of transparent and semitransparent samples. However, the requirement of the dense sampling in a large range of incident angles restricts its temporal resolution and prevents its application in dynamic scenes. Here, we propose a graphics processing unit-based implementation of a deep convolutional neural network to improve the performance of phase tomography, especially with much fewer incident angles. As a loss function for the regularized TPM, the l 1 -norm sparsity constraint is introduced for both data-fidelity term and gradient-domain regularizer in the multislice beam propagation model. We compare our method with several state-of-the-art algorithms and obtain at least 14 dB improvement in signal-to-noise ratio. Experimental results on HeLa cells are also shown with different levels of data reduction. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JBO.23.6.066003]
Introduction
Most biological samples such as live cells have low contrast in intensity but exhibit strong phase contrast.Phase contrast microscopy is then widely applied in various biomedical imaging applications. 15][6] The label-free and noninvasive character makes it attractive in biomedical imaging, especially for cultured cells. 7,8owever, most of the current methods require around 50 quantitative phase images acquired at different angles [9][10][11] or different depths 6 for optical tomography.This speed limitation greatly restricts its field of applications.For example, the difference of the refractive index may be blurred during the angular (or axial) scanning when observing fast-evolving cell dynamics or implementing high-throughput imaging cytometry. 11Another challenge for TPM is the missing cone problem, which limits its reconstruction performance, especially for limited axial resolution compared with the subnanometer optical-path-length sensitivity. 12o relieve the missing cone problem, many methods have been developed for better signal-to-noise ratio (SNR) with fewer images.Different regularizations such as the positivity of the refractive index differences 4,13 and the sparsity in some transform domain 14,15 are added to an iterative reconstruction framework based on the theory of diffraction tomography, 16,17 for reducing the artifacts induced by the missing cone problem and the limited sampling rates in Fourier domain.Both intensity-coded and phase-coded structured illumination methods further promote the performance by their better multiplexing ability compared with conventional plane-wave illumination. 18,19However, these methods suffer from the great degradation when the scattering effects become significant in the sample.The beam propagation method (BPM) 20 is then applied in phase tomography to provide a more accurate model by considering the nonlinear light propagation with scattering. 21,22And the multislice propagation modeling is definitely similar to the neural network in the field of machine learning. 23,24By combining the nonlinear modeling and the sparse constraint in the gradient domain, the Psaltis group has validated the competitive capability of this learning approach over conventional methods. 21,23Despite its success in modeling with l 2 -norm constraint, the current method is still a preliminary network, especially compared with the state-of-the-art deep learning frameworks, 25 and the iterative reconstruction is challenging to deploy in practice due to the high computational cost and the difficulty of the hyperparameter selection.More potential can be exploited in both optimization algorithms and better network architectures.
In this paper, we propose a graphics processing unit (GPU)based implementation of a deep convolutional neural network (CNN) to simulate the multislice beam propagation for TPM.A loss function consisting of an l 1 -norm data-fidelity term and an l 1 -norm gradient-domain regularizer is devised to achieve higher reconstruction quality even with fewer training data.To deal with the vast quantities of parameters and regularizers, we apply the adaptive moment estimation (Adam) algorithm 26 for optimization, which can also be regarded as the training process of the CNN.Compared with previous works using stochastic gradient descent, 23,24 our method ensures a faster convergence and a better robustness to the initial value.Both simulation and experimental results on polystyrene beads, and HeLa cells are shown to validate its reconstruction performance.We anticipate that our work can not only boost the performance of optical tomography, but also guide more applications of deep learning in the optics field.
Experimental Setup
Figure 1 shows the schematic diagram of the experimental setup.In our system, 23 the sample placed between two cover glasses is illuminated sequentially at multiple angles and the scattered light is holographically recorded.A laser beam (λ ¼ 561 nm) is split into sample and reference arms by the first beam splitter.In the sample arm, a galvo mirror varies the angle of illumination on the sample using the 4F system created by L1 and OB1.The light transmitted through the sample is imaged onto the CMOS camera via the 4F system created by OB2 and L2.The beam splitter (BS2) recombines the sample and reference laser beams, forming a hologram at the image plane.The numerical apertures (NAs) of OB1 and OB2 are 1.45 and 1.4, respectively.For data acquisition, we capture multiple tomographic phase images by near-plane-wave illumination (Gaussian beam) with equally spaced incident angles.We use a differential measurement between the phase on a portion of the field of view on the detector that does not include the cell and the cell itself to maintain phase stability.Accordingly, complex amplitudes extracted from the measurements constitute the training set of our proposed CNN.
Beam Propagation Method
We build the CNN, based on the forward model of light propagation, 21,23 to model the diffraction and propagation effects of light-waves.It is known that the scalar inhomogeneous Helmholtz equation completely characterizes the light field at all spatial positions in a time-independent form E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 1 ; 6 3 ; 3 3 4 ½∇ 2 þ k 2 ðrÞuðrÞ ¼ 0; (1) where r ¼ ðx; y; zÞ denotes a spatial position, u is the total lightfield at r, is the Laplacian, and kðrÞ is the wave number of the light field at r.The wave number depends on the local refractive index distribution nðrÞ as E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 2 ; 3 2 6 ; 7 0 4 where k 0 ¼ 2π∕λ is the wave number in vacuum, n 0 is the refractive index of the medium, and the local variation δnðrÞ is caused by the sample inhomogeneities.By introducing the complex envelope aðrÞ of the paraxial wave uðrÞ ¼ aðrÞ expðjk 0 n 0 zÞ for BPM, we can obtain an evolution equation 21 in which z plays the role of evolution parameter ; t e m p : i n t r a l i n k -; e 0 0 3 ; 3 2 6 ; 6 0 7 where δz is a sufficiently small but a finite z step, ω x and ω y represent angular frequency coordinates in the Fourier domain, að•; •; zÞ expresses the two-dimensional (2-D) complex envelope at z depth, Ã refers to a convolution operator, and F −1 f•g means the 2-D inverse Fourier transform.
GPU-Based Implementation of CNN
A schematic architecture of our CNN is shown in Fig. 2. For constructing our neural network, we divide the computational sample space into thin slices with the sampling interval δz along the propagation direction z.One slice corresponds to one layer in CNN.Within each layer, neurons specify the discretized light-field with transverse sampling intervals δx and δy, respectively.The input layer is the incident field upon the sample.In terms of the Eq. ( 3), inputs are then passed from nodes of each layer to the next, with adjacent layers connected by alternating operations of convolution and multiplication.At the very last layer of our CNN, the output complex field amplitude is then bandlimited by the NA of the imaging system composed of lenses OB2 and L2 in Fig. 1.We implement the proposed network on the basis of TensorFlow framework.The connection weight δnðrÞ can be trained using the Adam algorithm for optimization on the following minimization problem: where M denotes the number of measured views, k •k 1 indicates the l 1 -norm, and ∇ ¼ ð ∂ ∂x ; ∂ ∂y ; ∂ ∂z Þ is the differential operator.For a given view m, Y m , and G m are the output of the last layer and the actual measurement acquired by the optical system, respectively.The design of our loss function will be specifically discussed in Sec.4.1.Compared with the l 2 -norm, the l 1 datafidelity term relaxes the intrinsic assumptions on the distribution of noise (symmetry and no heavy tails) and suits better for the measurements containing outliers.Hence, it can be effectively applied to the biomedical imaging especially when the noise model is heavy-tailed and undetermined. 27As a regularization term, RðδnÞ imposes the l 1 -norm sparsity constraint on a gradient domain according to its better characteristic for the reconstruction from higher incomplete frequency information than l 2 -norm, 28,29 whereas τ is the positive parameter controlling the influence of regularization.The positivity constraint takes advantage of the assumption that the index perturbation is real and positive when imaging weakly absorbing samples such as biological cells.The subgradient method 30 plays an important role in machine learning for solving the optimization framework under l 1 -norm and the Ref. 26 has verified the theoretical convergence properties of the Adam algorithm, which will be specifically discussed in Sec.4.3.We perform the neural network computations on 4 NVIDIA TITAN Xp graphics cards and the processing time to run the learning algorithm (100 iterations) on 256 × 256 × 160 nodes is nearly 9 min.Obviously, it is possible to make the optimization of hyperparameters, which have an important effect on results, a more reproducible and automated process and thus is beneficial for training the large-scale and often deep multilayer neural networks successfully and efficiently. 31The full implementation and the trained networks are available at https://github.com/HuiQiaoLightning/CNNforTPM.
Results
For demonstration, we evaluate the designed network by both simulation and experimental results of the TPM as described before.To make a reasonable comparison, selected hyperparameters have been declared for all the other reconstruction methods.The selection of hyperparameters will be specifically discussed in Sec.4.2.
Tomographic Reconstruction of Simulated Data
In simulation, we consider a situation of three 5 μm beads of refractive index n ¼ 1.548 immersed into oil of refractive index n 0 ¼ 1.518 shown in Fig. 3.The centers of the beads are placed at ð0; 0; −3Þ, (0, 0, 3), and (0, 5, 0), respectively, with the unit of micron.The training set of the framework is simulated as 81 complex amplitudes extracted from the digital-holography measurements with different angles of incidence evenly distributed in ½−π∕4; π∕4 by BPM, whereas the illumination is tilted perpendicular to the x-axis and the angle is specified with respect to the optical axis z.The size of the reconstructed volume is Þ and we take the δnðrÞ of a polystyrene bead as example.
Cross-sectional views
Fig. 3 Simulation geometry comprising three spherical beads with a refractive index difference of 0.03 compared with the background.
with the sampling steps of δx ¼ δy ¼ δz ¼ 144 nm.For the network hyperparameters, we choose 600 training iterations in our GPU-based implementation with the batch size of 20, the initial learning rate of 0.001, and the regularization coefficient of τ ¼ 1.5.The reconstructed results by our method and other reconstruction methods are shown in Fig. 4. The SNR defined in Ref. 21 of our result is 25.56 dB, 14 dB higher than the previous works.We can also observe much sharper edges of the reconstructed beads at the interface with less noise in the background from Fig. 5.The comparison between the proposed loss function and other regularized loss functions proves the higher reconstruction quality of the l 1 -norm constraint than the l 2 -case directly.
In addition, we analyze the performance of our method under different noise levels and reduced sampling angles.For the noise test, we add Gaussian noise of different power levels to the 81 simulated measured complex amplitudes, which are represented as different SNRs of the training data.From the curve of the reconstructed SNR versus the noise level, as shown in Fig. 6(a), we can find our method maintains more robustness to the noise than other methods.This is especially useful in the case of shorter exposure time for higher scanning speed, where the data are always readout-noise limited.For the test of reduced sampling angles, we keep the range of incident angles fixed from −π∕4 to π∕4.The total number of the incident angles for the network training decreases from 81.The curve of the reconstructed SNR versus the number of the incident angles is shown in Fig. 6(b).Even with as few as 11 incident angles, we can still achieve comparable performance as the previous methods with 81 angles.This nearly eight-time improvement facilitates the development of high-speed 3-D refractive index imaging.
Tomographic Reconstruction of a Biological Sample
To further validate the capability of the network, we display the experimental results on HeLa cells performed by our tomographic phase microscope as shown in Regularized loss function comprises one data-fidelity term and one regularization term.
To the best of our knowledge, the presented study is the first to employ l 1 fitting for the regularized TPM.Generally, the choice of the data-fidelity term depends on the specified noise distribution.However, it is particularly common for solving normal image restoration problems, as under various constraints, images are always degraded with mixed noise and it is impossible to identify what type of noise is involved.The l 2 -norm fitting relies on strong assumptions on the distribution of noise: there are no heavy tails and the distribution is symmetric.If either of these assumptions fails, then the use of l 2 -norm is not an optimal choice.On the other hand, for the so-called robust formulation based on l 1 -norm fitting, it has been shown that the corresponding statistics can tolerate up to 50% false observations and other inconsistencies. 27Hence, l 1 -norm data-fidelity term relaxes the underlying requirements for the l 2 -case and is well suited to biomedical imaging especially when the noise model is undetermined [as shown in Figs.4(a)-4(d)] and mixed [as shown in Fig. 6(a)].
As for the regularization term, we finally choose the anisotropic total variation (TV) regularizer in our method, which is an l 1 penalty directly on the image gradient.It is a very strong regularizer, which offers improvements on reconstruction quality to a great extent compared with the isotropic counterpart (l 2 penalty). 29Therefore, the edges are better preserved, which can be seen apparently from the comparison between Figs. 4(a) and 4(b).
Selection of Hyperparameters
Selection of hyperparameters has an important effect on tomographic reconstruction results.In practice, many learning algorithms involve hyperparameters (10 or more), such as initial learning rate, minibatch size, and regularization coefficient.Reference 31 introduces a large number of recommendations for training feed-forward neural networks and choosing the multiple hyperparameters, which can make a substantial difference (in terms of speed, ease of implementation, and accuracy) when it comes to putting algorithms to work on real problems.Unfortunately, optimal selection of hyperparameters is challenging due to the high computational cost when using traditional regularized iterative algorithms. 21,23n this study, our GPU-based implementation of CNN runs computation-intensive simulations at low cost and is possible to make the optimization of hyperparameters a more reproducible and automated process with modern computing facilities.Thus, we can gain better and more robust reconstruction performance with the GPU-based learning method.During the simulation and experiment, selection of hyperparameters varies with the biological sample and the range of incident angles.To make a convincing comparison, optimal hyperparameters have been selected for all the other reconstruction methods.The refractive index difference δnðrÞ is initialized with a constant value of 0 for all the methods, and different optimal regularization coefficients are chosen for different regularized loss functions due to the different combinations of data-fidelity term and regularization term.The number of iterations is set to guarantee the convergence of each method, as shown in Fig. 9.
Subgradient Method and Adam Algorithm
In convex analysis, 30 the subgradient generalizes the derivative to functions that are not differentiable.A vector g ∈ R n is a subgradient of a convex function at x if E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 5 ; 6 3 ; 1 1 0 fðyÞ ≥ fðxÞ þ g T ðy − xÞ ∀y: (5) If f is convex and differentiable, then its gradient at x is a subgradient.But a subgradient can exist even when f is not differentiable at x.There can be more than one subgradient of a function f at a point x.The set of all subgradients at x is called the subdifferential, and is denoted by ∂fðxÞ.Considering the absolute value function jxj, the subdifferential is E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 6 ; 3 2 6 ; 5 2 8 ∂jxj ¼ 8 < : Subgradient methods are subgradient-based iterative methods for solving nondifferentiable convex minimization problems.
Adam is an algorithm for first-order (sub)gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments.The method is aimed toward machine learning problems with large datasets and/or highdimensional parameter spaces.The method is also appropriate for nonstationary objectives and problems with very noisy and/ or sparse gradients.Adam works well in practice and compares favorably with other stochastic optimization methods regarding the computational performance and convergence rate. 26It is straightforward to implement, is computationally efficient, and has little memory requirements, which is robust and well suited to TPM.Compared with the stochastic proximal gradient descent (SPGD) algorithm reported in Ref.21, our GPU-based CNN trained with the Adam algorithm for optimization converges to the same SNR level and achieves twice the rate of convergence as shown in Fig. 10.To show the higher convergence rate of Adam fairly, we use the proposed l 1 -norm loss function for both the Adam and SPGD training processes here, thus producing the same reconstructed SNR after convergence.
Conclusion
We have demonstrated a GPU-based implementation of deep CNN to model the propagation of light in inhomogeneous sample for TPM and have applied it to both synthetic and biological samples.The experimental results verify its superior reconstruction performance over other tomographic reconstruction methods, especially when we take fewer measurements.Furthermore, our CNN is much more general under different optical systems and arbitrary illumination patterns as its design is illuminationindependent.Importantly, this approach can not only enlarge the applications of optical tomography in biomedical imaging, but also open rich perspectives for the potential of deep neural networks in the optical society.
Fig. 2
Fig. 2 Detailed schematic of our CNN architecture, indicating the number of layers (Nz), nodes (Nx × Ny) in each layer and operations between adjacent layers.Here, kerðω x ; ω y Þ signifies
Fig. 1 .Fig. 4 Fig. 5 4 Discussion 4 . 1
Fig. 4 Reconstruction results of three 5 μm beads.Comparison of the cross-sectional slices of the 3-D refractive index distribution of the sample along the x − y , x − z, and y − z planes reconstructed by (a) proposed CNN, (b) CNN with l 1 fitting, l 2 regularization (L1 L2) and the regularization coefficient of 5, (c) CNN with l 2 fitting, l 1 regularization (L2 L1) and the regularization coefficient of 0.1, (d) learning approach23 implemented on the same CNN settings (LA) with the regularization coefficient of 0.6, (e) optical diffraction tomography based on the Rytov approximation (ODT)13 with the positivity constraint and 100 iterations, and (f) iterative reconstruction based on the filtered backprojection method4 with the positivity constraint and 400 iterations.Scale bar, 5 μm.
Fig. 6 Fig. 7 Fig. 8
Fig. 6 Performance analysis for proposed approach with the same hyperparameter selection.(a) The curve of the reconstructed SNR versus the noise level and (b) the curve of the reconstructed SNR versus the number of the incident angles.
Fig. 9 Fig. 10
Fig. 9 Reconstructed SNR plotted as a function of the number of iterations for different reconstruction methods on simulated data.Hyperparameters are declared in Sec.3.1. | 4,757.2 | 2018-06-01T00:00:00.000 | [
"Computer Science"
] |
Time Domain Characterization of Light Trapping States in Thin Film Solar Cells
Spectral interferometry of the backscattered radiation reveals coherence lifetimes of about 150 fs for nanolocalized electromagnetic modes in textured layered nanostructures as they are commonly used in thin film photovoltaics to achieve high cell efficiencies.
Introduction
Nanolocalization of light is not only relevant for achieving high spatial resolution in spectroscopy and microscopy but also substantially affects the efficiency of light matter interaction.For example, in thin film solar cells light trapping, i.e. the localization of light in the active layer, determines critically the efficiency [1][2][3].It has been shown that randomly textured interfaces increase the light trapping efficiency.Thus, for example, silicon thin film solar cells usually have corrugated internal interfaces between the transparent conductive oxide (TCO, in our case ZnO) layers and the absorber medium, i.e. hydrogenated amorphous Si (a-Si:H).Light localization in random stratified media has attracted substantial attention [4][5][6], however, it is still an unresolved question what type of interface corrugation leads to the best light trapping.
Multiple scattering at corrugated interfaces supports the formation of weakly localized photonic modes similar to Anderson localization of light in three dimensional turbid media [7].In contrast to diffusive transport these modes appear as resonances and exhibit lifetimes of about 150 fs.Here we report the time domain characterization of scattered coherent radiation from layered nanostructures.It is shown that spectral interferometry is applicable to investigate light trapping, allows quantifying trapping state lifetimes, and could serve as a new characterization tool for thin film processing.
Methods and Results
A modified Mach-Zehnder interferometer is used to measure spectral interferometry for two independent polarization components of the diffuse backscattered light from the sample mounted in one interferometer arm (Fig. 1a).Linearly polarized laser pulses from a Ti:Sapphire oscillator (795 nm, 20 fs) are focused onto the nanostructure (about 5 µm spot diameter).The sample surface is tilted by 10° with respect to the incident beam.The parabolic mirror recollimates part of the scattered light that is back-reflected under an angle of about 10° with respect to surface normal.This scattered light is brought to interference with a bandwidth-limited attenuated reference pulse propagating in the other interferometer arm that is linearly polarized at 45° with respect to the interferometer plane.A fibre coupled dual channel spectrograph is used to measure simultaneously the spectral interference for both polarization components.The spectral phase of the reference pulse is characterized using a SPIDER setup.The layered nanostructures studied here are shown in Fig. 1b,c.The amorphous silicon layer is either grown on a corrugated ZnO layer (Fig. 1b and left cross section in Fig. 1c) or on a smooth layer (right cross section in Fig. 1c).The corrugation of the initial ZnO layer is conserved during the growth of the thin silicon layer and thus both interfaces exhibit an almost identical roughness of 90 nm rms.In both cases a top layer of ZnO is deposited with 13 nm and 90 nm rms roughness for the structure with smooth and corrugated interfaces, respectively.Fig. 1d summarizes the time domain characterization of the scattered radiation performed for these two nanostructures.For smooth internal interfaces the spectral amplitude of the scattered light (gray shaded area in inset Fig. 1d) is slightly narrower compared to the incident radiation (gray solid line).In the time domain the reconstructed scattered radiation field amplitude exhibits a single peak with 22 fs duration (FWHM).The slight increase in comparison to the incident pulse duration is attributed to the narrower bandwidth of the scattered light.The sample with corrugated internal interfaces exhibits a completely different behavior.The scattered light shows a similar fast initial rise.However, in contrast to the sample with smooth internal interfaces it exhibits delayed scattered light components spread over more than 100 fs.The time dependent field amplitudes vary significantly for different positions on the sample.In addition, the spectral amplitude shows a highly structured response that also varies significantly for different EPJ Web of Conferences positions on the sample and thus the strong modulation of the time-dependent field amplitude is attributed to beating between different modes.An average trapping lifetime of about 150 fs is determined by fitting an exponential to the sum of the time-dependent field amplitudes recorded for 10 different focal spots on the sample (not shown).The resulting signal corresponds to the incoherent sum of the back-scattered light from the different locations.In a coherent sum of the scattered light -as it would be measured using a large focal spot for illumination averaging over many different light trapping modes -the lifetime effect cancels out because of destructive interference of the different spectral components.Hence, focused illumination and characterization of the coherent back-scattering provides additional information compared to usual light scattering measurements.
In contrast to the sample with smooth internal interfaces the nano-textured layered structure shows back-scattered light components that are polarized perpendicular to the incident pulse and exhibit similar effective lifetime as the non-rotated components.Assuming an effective refractive index of 3 for light propagation in the layered structure the lifetime of about 150-200 fs corresponds to an optical path of 15 µm.The observed spectral resonances indicate the formation of modes, i.e. closed multiple scattering pathways, and thus an upper limit for the lateral localization of the modes of about 5 µm is estimated.
Conclusion
Dual channel spectral interferometry of back-scattered radiation allows investigating light trapping mechanisms in layered nanostructures.Coherence lifetimes of about 150 fs, mode beating, and resonances in the spectral amplitude indicate the formation of nanolocalized photonic modes that are attributed to the formation of closed pathways of multiply scattered radiation.The measurements show that microscopic spectral interferometry is a promising technique for the characterization and monitoring of thin film processing and provides a new approach to investigate and optimize light trapping mechanisms.
Fig. 1 .
Fig. 1.Dual channel spectral interferometry of backscattered light from layered nanostructures.a) Scheme of the experimental setup.b) Scanning electron microscopy image of the ZnO layer in a nanostructured thin film silicon solar cell.c) Cross section through the layered nanostructures.The left structure corresponds to a thin film solar cell with omitted metallic back reflector.The structure with smooth internal interfaces (<5 nm rms roughness) serves as reference.d) Back-scattered field amplitudes from a layered structure with nano-textured internal interfaces (colored lines correspond to three different positions) and with smooth internal interfaces (shaded area).The corresponding spectral amplitudes are shown as inset together with the spectral amplitude of the laser (gray solid line).In all cases the polarization component of the scattered light parallel to the incident field is shown.
EPJ Web of Conferences DOI: 10.1051/ C Owned by the authors, published by EDP Sciences, 2013 | 1,513.4 | 2013-03-01T00:00:00.000 | [
"Physics"
] |
Delivering a toxic metal to the active site of urease
Urease is a nickel (Ni) enzyme that is essential for the colonization of Helicobacter pylori in the human stomach. To solve the problem of delivering the toxic Ni ion to the active site without diffusing into the cytoplasm, cells have evolved metal carrier proteins, or metallochaperones, to deliver the toxic ions to specific protein complexes. Ni delivery requires urease to form an activation complex with the urease accessory proteins UreFD and UreG. Here, we determined the cryo–electron microscopy structures of H. pylori UreFD/urease and Klebsiella pneumoniae UreD/urease complexes at 2.3- and 2.7-angstrom resolutions, respectively. Combining structural, mutagenesis, and biochemical studies, we show that the formation of the activation complex opens a 100-angstrom-long tunnel, where the Ni ion is delivered through UreFD to the active site of urease.
INTRODUCTION
Urease is a virulence factor of Helicobacter pylori that infects half of the human population, leading to an increased risk of peptic ulcers and gastric cancer (1). The enzyme, hydrolyzing urea into carbon dioxide and ammonia, is essential to the survival of the pathogen in the acidic environment of the human stomach (2). Most ureases are nickel (Ni) enzymes containing two Ni(II) ions bridged by a carbamylated lysine residue (Lys 219 of H. pylori urease) in their active sites (3)(4)(5)(6). Metal ions at the top of the Irving-Williams series (7) such as Ni(II) ions are toxic because they can displace weaker ions [e.g., Mg 2+ in guanosine triphosphatase (GTPase)] from the active site of cellular enzymes and can inactivate these enzymes (8).
The urease maturation pathway represents a paradigm for how cells solve the problem of delivering a toxic metal ion to the active site of an essential enzyme. Cells have evolved metallochaperones, or metal carrier proteins, to deliver the Ni(II) ions from one protein to another via the formation of specific protein-protein complexes so that the toxic metal ions do not diffuse into the cytoplasm (9). In the urease maturation pathway, Ni delivery is assisted by four urease accessory proteins UreD, UreE, UreF, and UreG (fig. S1) (9)(10)(11). UreE exists as a homodimer in solution containing the conserved Gly-Asn-Arg-His motif that binds a Ni(II) ion at the dimeric interface (12)(13)(14). UreG is a GTPase that undergoes guanosine triphosphate (GTP)-dependent dimerization that brings the conserved Cys-Pro-His (CPH) motif to the dimer interface to bind a Ni(II) ion in a square planar coordination (15). UreD interacts with UreF to form a dimer of heterodimeric UreFD complex, which induces conformational changes in UreF to recruit a UreG dimer to form a UreGFD complex (16,17). UreE provides the upstream source of Ni(II) ions, which are delivered to UreG via the formation of a UreE 2 G 2 heterodimeric complex (15,18,19). After receiving its Ni, the Ni-bound UreG dimer forms an activation complex with UreFD and urease, and activates urease upon GTP hydrolysis with the help of UreFD (Fig. 1A) (15,17,20).
How Ni is transferred from UreG to urease remains elusive. The Ni binding site of urease is deeply buried (3,4), so it is not known how Ni can access the urease active site. Moreover, because UreG and the urease are topologically separated by the UreFD complex, the Ni binding site of UreG will likely be far away from the urease (Fig. 1A). It is also not known what could be the role of the UreFD complex in this process. In this study, we determined the cryo-electron microscopy (cryo-EM) structures of H. pylori HpUreFD/urease complex and Klebsiella pneumoniae KpUreD/urease complex. We show that formation of the UreFD/urease complex opens a tunnel from the buried urease active site that can connect to the Ni binding site in UreG. Further supported by mutagenesis and biochemical studies, we suggest that this tunnel facilitates the delivery of Ni(II) ion from UreG to the urease.
The EM maps of HpUreFD/urease and KpUreD/urease were solved to 2.3-and 2.7-Å resolution, respectively ( Fig. 1 and fig. S2). Cartoon representation and secondary structure assignment of the modeled structures are shown in the figs. S3 to S5. In K. pneumoniae urease, each catalytic unit is constituted by the α (KpUreC), β (KpUreB), and γ (KpUreA) domains (Fig. 1D). Similar to the quaternary structure of Klebsiella aerogenes urease (3), K. pneumoniae urease contains three catalytic units arranged in a C3 trimer that resembles a triangular disc-like structure (Fig. 1B). The vertexes of the urease trimer are bound with a KpUreD molecule (Fig. 1, B and D) that has the characteristic β helix fold ( fig. S3). In H. pylori urease, each catalytic unit is constituted by HpUreA (a fusion of the β and γ domains) and HpUreB (α domain) ( Fig. 1E and figs. S4 and S5). Four copies of the urease trimers form a dodecameric quaternary structure with 12 protruding HpUreFD molecules (Fig. 1C). As expected, the R179A/Y183D substitutions disrupted the dimerization of HpUreFD, and each catalytic unit of H. pylori urease is interacting with one copy of HpUreD and HpUreF (Fig. 1E). As for the KpUreD/urease complex, HpUreD interacts with the urease (Fig. 1E). The binding sites of KpUreD and HpUreFD on urease are very similar and involve an area of the α domain flanked by the β and γ domains (Fig. 1, F and G). The surface area buried by UreD on K. pneumoniae and H. pylori ureases were 2225 and 2384 Å 2 , respectively. Residues involved in polar contacts (hydrogen bonds or salt bridges) between UreD and the urease are indicated in Fig. 1 (F and G) and listed in table S1.
Complex formation induces conformational changes in urease and UreD
The structure of the HpUreFD/urease complex was compared to the crystal structures of HpUreFD (16) and H. pylori native urease (4). In addition to minor structural changes at the C terminus of helix 1 in the dimerization interface of HpUreF, residues with large values of Cα RMSD (root mean square deviation of α carbon) are found in the binding interface between HpUreD and HpUreB ( Fig. 2A). The conformational changes upon the formation of HpUreD/urease complex are summarized in Fig. 2 and in movie S1. Binding of UreD induces major conformational changes in three switch regions of HpUreB: (i) the glycine-rich loop ( 277 GAGGGHAP 284 ) between strand 6 and helix 6, (ii) helix H3 and the loop connecting to helix 7 (residues 330 to 340), and (iii) residues between strand 12 and strand 13 (residues 538 to 545) (Fig. 2B). Upon binding of HpUreFD, residues 335 ADSR 338 at the C terminus of helix H3 uncoil, causing switches II and III residues to stretch in opposite directions and residues Arg 338 and Arg 340 to flip out toward UreD (Fig. 2B). As Arg 338 is hydrogen bonded to the backbone amide of Ala 278 and Gly 279 , the conformational change is propagated to the glycine-rich switch I loop (Fig. 2B). When the structure of the KpUreD/urease complex was compared to the crystal structure of K. aerogenes urease apoprotein (3) and native urease (21), similar conformational changes were observed ( Fig. 2C and fig. S6).
The formation of the HpUreFD/urease complex also induces major conformational changes in HpUreD ( Fig. 2F and movie (Fig. 2F). On the other hand, Gln 80 undergoes a swivel motion (Fig. 2F) so that it can form hydrogen bonds to the backbone amides of Val 332 and Ala 335 of HpUreB (Fig. 2G). This swivel motion induces structural rearrangement in the regions of strand 6/7 and strand 9/10 in such a way that the side chain of Phe 112 swings out from a buried to an exposed position (Fig. 2F). The backbone conformations of the loops are stabilized by additional hydrogen bonds involving Ile 77 , Ser 79 , Ser 81 , Ala 110 , and Phe 112 as indicated in Fig. 2G.
UreD/urease interaction is important in urease maturation
The most notable conformational changes observed in urease are the flipping of Arg 338 and Arg 340 toward UreD. The conformation of Arg 338 is stabilized by forming a hydrogen bond to and stacking against Tyr 543 of HpUreB (Fig. 3A). On the other hand, Arg 340 of HpUreB forms an intermolecular hydrogen bond with Asp 61 of HpUreD (Fig. 3A). The interaction between HpUreD and urease is further strengthened by Glu 177 of HpUreA forming hydrogen bonds with Phe 82 and Lys 84 of HpUreD. These hydrogen bonds are also conserved in KpUreD/urease ( fig. S7). To test the role of these conserved interactions in urease maturation, we have created a D61A variant of HpUreD, E177A variant of HpUreA, and Y543A variant of HpUreB and tested the interaction between HpUreFD and HpUreAB via a pull-down assay (Fig. 3, B and C).
Our results show that wild-type (WT) polyhistidine glutathione Stransferase (HisGST) tagged HpUreFD coelutes with HpUreAB, suggesting that HpUreFD interacts with HpUreAB in the pulldown assay ( Fig. 3B and fig. S8). In contrast, the interaction between HpUreFD and HpUreAB was greatly reduced by D61A substitution of HpUreD (Fig. 3B), E117A substitution of HpUreA, and Y543A substitution of HpUreB (Fig. 3C). We further show that these substitutions abolish urease activity in an in vitro assay (Fig. 3, D and E). These results suggest that these conserved polar interactions are important in the formation of the HpUreFD/urease complex and in the maturation of urease.
A tunnel inside the HpUreFD/urease complex facilitates urease maturation
We noticed that the formation of the HpUreFD/urease complex opens a tunnel that reaches the active site of urease (Fig. 4A). Tunnel searching was performed using the program CAVER 3.0 (22). This 100-Å-long tunnel starts at the active site residue Lys 219 of urease, exits HpUreB near Asp 336 of the switch II region, passes through HpUreD between the two layers of β sheets, enters HpUreF near Ala 233 , and reaches the dimerization interface of HpUreF (Fig. 4A). A tunnel was also identified in the KpUreD/urease complex that passes through similar regions in KpUreC and KpUreD ( fig. S9). In the crystal structure of the HpUreGFD complex (17), a tunnel that passes through a similar region of HpUreF was identified ( Fig. 4B) (23)(24)(25). Together, these observations suggest that the tunnel inside the HpUreFD/urease complex connects the active site of urease ( Fig. 4A) to the CPH Ni binding motif of UreG (Fig. 4B).
Conformational changes in the HpUreB/UreD interface are instrumental in the opening of the tunnel that reaches the urease active site ( Fig. 4C and movie S2). The active site residue Lys 219 , which is carbamylated and binds Ni(II) ions in mature urease, is completely buried inside urease. The access to the active site is blocked by the glycine-rich switch I loop and switch II residues (e.g., Phe 334 and Arg 338 ) of HpUreB (Fig. 4C). These residues relocate upon the formation of the HpUreFD/urease complex, making room to create a tunnel that reaches the active site of urease. On the other hand, the tunnel in the HpUreD is blocked by Phe 112 before the formation of the HpUreFD/urease complex (Fig. 4C). As described above (Fig. 2F), the UreD/urease interaction causes Phe 112 to flip to an exposed position and thereby open a passage that connects the tunnel between HpUreB and HpUreD (Fig. 4C).
To test whether the tunnel is essential for Ni delivery to the urease, we introduced tunnel-disrupting substitutions to residues along the tunnel. We first introduced charge-to-alanine substitutions in acidic residues buried inside the tunnel (i.e., D336A of HpUreB, E140A of HpUreD, and E85A of HpUreF). We hypothesized that these conserved acidic residues (refer to the alignment in figs. S3 and S5) are important in stabilizing the positively charged Ni(II) ions inside the tunnel. The second strategy was to introduce lysine substitutions to small amino acid residues along the tunnel (i.e., S81K of HpUreD and A41K, S47K, and A233K of HpUreF) because our modeling suggested that the flexible lysine side chain can block the tunnel without affecting the proper folding of HpUreFD.
We first coexpressed HpUreD str with HpUreF and HpUreG in E. coli and purified the resulting HpUreGFD str complex by Strep-Tactin affinity chromatography (fig. S10A). We show that the tunnel-disrupting substitutions in HpUreD (S81K and E140A) and HpUreF (A41K, S47K, E85A, and A233K) did not affect the formation of the HpUreGFD str complex (fig. S10, C and E), and the purified HpUreGFD str interacted with WT HpUreAB in a pulldown assay (Fig. 4D). The interaction was not affected by tunneldisrupting substitutions in HpUreB (D336A; Fig. 4D), HpUreD (S81K and E140A; Fig. 4E), and HpUreF (A41K, S47K, E85A, and A233K; Fig. 4F). These observations suggest that the tunnel-disrupting substitutions do not disrupt formation of the activation complex between the urease and its accessory proteins. On the other hand, all these tunnel-disrupting substitutions abolished urease activity (Fig. 4, G to I). As shown in Fig. 4 (A and B), these tunnel-disrupting substitutions are distributed along the 100-Å-long tunnel-D336A and S81K are located at the HpUreB/ UreD interface, E140A is located inside HpUreD, A233K is located at the HpUreD/UreF interface, E85A is located inside HpUreF, and A41K and S47K are located at the dimeric interface of HpUreF. Together, our results suggest that Ni(II) ions are delivered along this tunnel, passing through HpUreF and HpUreD, to reach the active site of urease. HpUreAB was activated with (WT/variant) HpUreFD and HpUreG in the reaction buffer at 37°C for 30 min. Ni 2+ was omitted in the reaction buffer in the negative control. Mean relative activity and SEM of at least three measurements were reported. Urease activities of D61A, E177A, and Y543A variants of HpUreD, HpUreA, and HpUreB, respectively, were significantly lower than that of WT [one-way analysis of variance (ANOVA), P < 0.0001]. There were no significant differences among variants and the negative control. To test whether urease maturation requires dimerization of UreFD, we performed the urease activation assay in two-chamber dialyzers using the strategy described previously ( fig. S11) (15). In this assay, Ni-bound HpUreG provides the sole source of Ni for urease activation. Our results showed that urease was activated when Ni-bound HpUreG was mixed with WT HpUreFD and HpUreAB in the same chamber ( fig. S11A). On the other hand, when Ni-bound HpUreG was separated from HpUreFD and HpUreAB by a dialysis membrane, urease activation was abolished ( fig. S11D). Substitutions of R179A/Y183D in HpUreF, which breaks dimerization of HpUreFD, abolished urease activation in vitro ( fig. S11B). We have previously showed that the R179A/ Y183D substitutions abolished the formation of HpUreGFD complex and inhibited urease maturation in vivo (17). Together, our results suggest that Ni delivery requires HpUreG to interact with the dimeric HpUreFD in complex with urease.
DISCUSSION
To avoid cytotoxicity, ions such as copper and Ni are tightly regulated to subnanomolar concentrations (corresponding to less than one ion per bacterial cell) (26). To activate the urease, the enzyme cannot just pick up free Ni(II) ions from the cytoplasm. Instead, the Ni(II) ions are acquired by metallochaperones such as UreE and UreG and are delivered to the active site of urease within specific protein complexes so that the toxic metal ions do not diffuse into the cytoplasm. After receiving its Ni(II) ions from the UreE 2 G 2 complex (15,18), Ni-bound UreG activates urease by forming a UreGFD/urease activation complex (17,20). When Ni-bound UreG was separated from the UreFD/urease complex by a dialysis membrane in a two-chamber dialyzer, the urease activation was abolished (15). This observation suggests that direct proteinprotein interaction between UreG and UreFD/urease is essential for Ni delivery from UreG to the urease (15). The dimerization-deficient variant of UreF(R179A/Y183D) failed to form the UreGFD complex and to activate urease in vitro (fig. S11) and in vivo (17), HpUreAB eluted in all variants tested, but not in the vector control. (G to I) In vitro urease activation assay. HpUreAB (WT/variant) was activated with (WT/variant) HpUreGFD str in the reaction buffer at 37°C for 30 min. Ni 2+ was omitted in the reaction buffer in the negative control. Mean relative activity and SEM of at least three measurements were reported. Urease activities of all tunnel-disrupting variants were significantly lowered than that of WT (one-way ANOVA, P < 0.0001). There were no significant differences among variants and the negative control. (J) A model of how Ni is delivered through a tunnel from UreG to the urease. Phe 112 of HpUreD serves as a gate residue that relocates from a buried to an exposed position, opening a passage to the urease active site. Upon GTP hydrolysis, the Ni(II) ion is released from UreG, enters the protein tunnel that passes through UreF and UreD, and reaches the urease active site. suggesting that UreG delivers its Ni through interaction with the UreFD dimer and the urease. Ni-bound UreG is likely interacting with UreF in the activation complex, as substitutions that break UreG/UreF interaction also abolish urease maturation (16,27).
We have modeled the structure of the HpUreGFD/urease activation complex based on the cryo-EM structure of HpUreFD/urease (this study) and the crystal structure of the guanosine diphosphate (GDP)-bound HpUreGFD complex ( fig. S12) (17) to propose a model of how Ni is delivered from UreG to urease (Fig. 4J). A tunnel can be identified in the activation complex that connects the active site of urease to the Ni binding site of UreG ( fig. S12). In the GTP-bound state of UreG, the Ni(II) ion is bound at the dimer interface by Cys 66 and His 68 of the UreG-conserved CPH motif (15). UreG is modeled in the GDP-bound state and may represent the structure of the activation complex immediately after GTP hydrolysis, which disrupts the Ni-binding square-planar coordination ligands ( fig. S12). Our mutagenesis and biochemical studies suggest that Ni released from UreG can pass through UreF to UreD to reach the urease buried active site via this tunnel (Fig. 4J).
A protein tunnel can also be defined with the HpUreGFD complex and predicted in K. aerogenes UreD (17,(23)(24)(25)28). The tunnel in HpUreGFD follows a similar path to that identified in the HpUreFD/urease complex until it reaches the region near Glu 140 of HpUreD. Molecular dynamics simulations suggest that hydrated Ni(II) ions can pass through this tunnel from the Nibinding CPH motif of UreG to Glu 140 of UreD inside the HpUr-eGFD complex (25). Comparing the structures of the HpUreFD/ urease and HpUreGFD complexes reveal that Phe 112 of HpUreD can serve as a gate residue that blocks the tunnel in the HpUreGFD complex (Fig. 4C).
The HpUreD gate residue only opens to allow access to the urease active site following the conformational changes in HpUreD and HpUreB induced by formation of the activation complex ( Fig. 4C and movie S2). In particular, the gate residue Phe 112 relocates from a buried position to an exposed position, thus "opening" the tunnel to allow passage of Ni(II) ions to the urease active site (Fig. 4J). On the other hand, Phe 112 of HpUreD on the other side of the activation complex, away from the urease, is expected to adopt the buried position that "closes" the tunnel (Fig. 4J). The closure of the tunnel ensures the Ni(II) ion does not diffuse into the cytoplasm at the other end of the complex. Our model also explains how GTP hydrolysis thermodynamically drives the Ni delivery to the ureasebecause GTP hydrolysis disrupts the Ni binding site at UreG, the released Ni(II) ion is trapped inside the activation complex until it reaches the active site of urease, where it can form more stable interactions.
The activation complex should dissociate after urease is activated. In native H. pylori urease, the Ni(II) ions are coordinated by carbamylated Lys 219 , His 136 , His 138 , His 274 , and Asp 320 (fig. S13A). Structural comparison reveals that Ni binding to the active site relocates His 274 in a position to form a hydrogen bond to Gly 280 of switch II (fig. S13A). This interaction promotes conformational changes in switches I and II such that Arg 338 flips back toward the active site to form hydrogen bonds to Ala 278 and Gly 279 , inducing dissociation of UreD from the urease. Similar conformational changes are also conserved in Klebsiella urease (fig. S13B). These observations suggest that Ni binding to the active site provides additional interactions that promote dissociation of the activation complex.
In summary, this study shows how cells solve the problem of delivering a toxic metal ion to an essential enzyme by opening a 100-Å-long tunnel within the activation complex so that the toxic Ni(II) ion is delivered through the tunnel to the urease active site. The urease maturation pathway thus provides a paradigm on trafficking of toxic metal ions in cells. The delivery of Ni(II) ions along the urease maturation pathway always occurs within protein complexes so that the toxic metal ions cannot escape into the cytoplasm. In addition to academic curiosity, because H. pylori requires active urease to survive in the acidic environment of human stomach (2), a better understanding of the mechanism of the urease maturation pathway could provide insights into the development of treatment for H. pylori infection (29). Figure S14 summarizes the constructs used in this study. The construction of pHpA2H and introduction of R179A/Y178D substitutions to HpUreF were described previously (14,15). To create pHpA2H str -UreF(R179A/Y183D)UreD(E140A), E140A substitution and a C-terminal Strep-tag II (WSHPQFEK) were added to HpUreD (encoded by ureH). To create the plasmid pKpUreD str ABC, the K. pneumoniae gene cluster ureDABC was cloned between the Nde I and Xho I sites of the pRSF-Duet1 vector (Novagen) with a Strep-tag II fused to the N terminus of KpUreD. The construction of plasmids pHisSUMO-HpUreG, pHisGST-HpUreF, pHpUreH, and pHpUreAB (encoding HisSUMO-HpUreG, HisGST-HpUreF, HpUreD, and HpUreAB, respectively) used in this work was described previously (16,17). To create the plasmid pHpUreGFD str , the H. pylori gene cluster ureFGH was cloned between the Nde I and Eco RI sites of an inhouse pRSETA (Invitrogen) vector with a Strep-tag II fused to the C terminus of HpUreD. To create the plasmid pHisGST, the coding sequence of the GST was cloned between the Eco R1 and Pas I sites of the pET-Duet1 vector (Novagen) with an N-terminal polyhistidine tag. Variants of HpUreA, HpUreD, and HpUreF were generated using the Q5 mutagenesis kit (New England Biolabs) or the QuikChange II site-directed mutagenesis kit (Agilent) following the protocols from the manufacturers. Variants of HpUreB were generated by overlap extension polymerase chain reaction (30). Primers used for site-directed mutagenesis are listed in the table S2.
Structure determination of the HpUreFD/urease and
KpUreD/urease complexes Protein sample preparation E. coli BL21 was transformed with the expression plasmids of pHpA2H str -UreF(R179A/Y183D)UreD(E140A) and pKpUreD str ABC, cultured in LB at 37°C, and induced overnight with 0.4 mM isopropyl β-D-thiogalactopyranoside (IPTG) when optical density at 600 nm (OD 600 ) reached 0.5. The harvested cells were resuspended in the Strep-binding buffer [50 mM Hepes, 200 mM NaCl, and 0.5 mM TCEP (pH 7.5)] and lysed using EMULSIFLEX-C5. The soluble lysate was loaded onto a Strep-Tactin XT column (IBA Lifesciences), washed extensively with the Strep-binding buffer, and eluted with 50 mM D-biotin in the Strep-binding buffer. The HpUreFD/urease and KpUreD/urease complex were further purified by size exclusion chromatography using a Superose-6 10/300 column (GE Health-Care) pre-equilibrated with the Strep-binding buffer. Quantifoil R1.2/1.3 copper grids (200 mesh) with holey carbon foil were glow discharged for 20 s at 15 mA. Four microliters of KpUreD/ urease (0.2 mg/ml) or HpUreFD/urease complex (0.5 mg/ml) was applied to the grids. Grids were plunge-frozen in liquid ethane using the Vitrobot Mark IV (Thermo Fisher Scientific) maintained at 100% humidity at 4°C, with blot time of 3 s and blot force of 0.
Collection and processing of cryo-EM data
For the structure determination of the HpUreFD/urease complex, 1953 movies were collected using a K2 detector on a Titan G3 microscope at a calibrated pixel size of 0.822 Å. Each movie was imaged with a total dose of 48e − /Å 2 . Preprocessing was streamed using SIMPLE3.0 (31) including patched motion correction (5 × 5 patches), patched contrast transfer function (CTF) correction (5 × 5 patches), and auto picking using a template derived from twodimensional (2D) averages generated from 200 handpicked particles from a preliminary run. A total of 166,893 particles were picked. Two rounds of 2D classification led to a set of 132,431 particles, and an initial model was generated from the 2D class averages using SIMPLE3.1 with tetrahedral symmetry imposed. The particles were exported to RELION3.0 (32), and 3D classification into three classes led to identification of a subset of 68,516 particles with higher occupancy for the UreFD components. Further rounds of 3D refinement, CTF refinement, and particle polishing led to a volume of 2.3-Å resolution [as assessed by gold standard Fourier shell correlation (FSC) = 0.143 criterion].
For the structure determination of the KpUreD/urease complex, 3149 movies were collected using a K2 detector on a Titan G3 microscope at a calibrated pixel size of 0.822 Å. Each movie was imaged with a total dose of 48e − /Å 2 . Preprocessing was streamed using SIMPLE3.0 including patched motion correction (5 × 5 patches), patched CTF correction (5 × 5 patches), and auto picking using a template derived from 2D averages generated from~200 handpicked particles. A total of 567,251 particles were picked. Rounds of 2D classification led to a set of 476,622 particles, and an initial model was generated in SIMPLE3.0 from the 2D class averages. 3D classification within RELION3.0 into six classes led to identification of a subset of 111,242 particles with higher occupancy of the UreD component on all three vertices. Additional rounds of 2D classification pruned this subset to 89,857 particles before further rounds of 3D refinement with C3 symmetry imposed, CTF refinement, and particle polishing, leading to a volume of 2.7-Å resolution (as assessed by gold standard FSC = 0.143 criterion).
Model building and refinement
Initial models for the KpUreD/urease complex were derived from the crystal structure of K. aerogenes urease apoprotein [Protein Data Bank (PDB): 1KRA] (3), and a KpUreD model was generated by homology modeling using the program MODELLER implemented in UCSF CHIMERA (33). Initial models for the HpUreFD/urease complex were derived from the crystal structure of H. pylori native urease (PDB: 1E9Z) (4) and the structure of HpUreFD from the HpUreGFD complex (PDB: 4HI0) (17). These initial models were fitted into the cryo-EM maps of the KpUreD/ urease and the HpUreFD/urease complexes using the program UCSF CHIMERA (34). Models were built interactively using the program COOT (35) and refined using the program PHENIX.REAL_SPACE_REFINE (36).
Structure analysis
Tunnel searching was performed by the program CAVER 3.0 (22) implemented in the program PyMOL (https://pymol.org) using the default setting of a 0.9-Å probe radius. The structures of the HpUreFD/urease and KpUreD/urease complexes were superimposed to the native structure of H. pylori and K. aerogenes ureases (PDB: 1E9Z and 1FWJ) to identify the location of the Ni binding sites, which serve as the starting point of the tunnel search. The movies S1 and S2 were created using the program PyMOL to morph the structures of HpUreAB (PDB: 1E9Z) and HpUreFD (PDB: 3SF5) into the structure of HpUreFD/UreAB (this study). To prepare cell lysates expressing WT and variants of HpUreAB, E. coli Rosetta (DE3) was transformed with pHpUreAB (WT/variants), cultured in King Broth or Terrific Broth with appropriate antibiotics [kanamycin (50 μg/ml) and chloramphenicol (25 μg/ml), induced overnight with 0.4 mM IPTG when OD 600 reached 0.6 to 0.8 at 25°C. Each gram of cell pellet was resuspended in 10 ml of the assay buffer and lysed by sonication. After centrifuging at 20,000g for 30 min, the soluble lysates were collected and filtered using 0.22μm filters. The concentrations of WT and variants of HpUreAB in the lysates were normalized according to Coomassie Blue staining ( fig. S8D).
Pull-down assay
Five milliliters of protein samples of (WT/variant) HisGST-HpUreFD or HisGST at 20 μM was loaded onto 5-ml GSTrap FF columns (GE HealthCare) and incubated at 37°C for 30 min. After the columns were washed with 30 ml of assay buffer, 5 ml of lysates of E. coli expressing (WT/variant) HpUreAB was added to the columns and incubated at 37°C for another 30 min. The columns were washed with 30 ml of assay buffer, eluted with 10 mM GSH in the assay buffer, analyzed by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and stained with Coomassie Blue (Fig. 3, B and C).
Pull-down assay for testing interactions between HpUreGFD str and H. pylori urease Protein sample preparation
To purify protein samples of WT or variants of HpUreGFD str , E. coli BL21 (DE3) pLysS was transformed with pHpUreGFD str (WT/variants), cultured in Terrific Broth with appropriate antibiotics [ampicillin (100 μg/ml) and chloramphenicol (25 μg/ml), induced overnight with 0.4 mM IPTG when OD 600 reached 0.6 to 0.8 at 25°C. One gram of harvested cells was resuspended in 10 ml of Strep-binding buffer [50 mM Hepes, 200 mM NaCl, and 1 mM TCEP-HCl (pH 7.5)] supplemented with 0.1 g of cOmplete ULTRA Tablets protease inhibitor cocktail (Roche). After lysis by sonication, the cell lysate was collected by centrifugation at 20,000g for 40 min and filtered using 0.22-μm filters. A total of 0.5 mM GDP and 1 mM MgSO 4 were added to the cell lysate, which was then loaded onto 0.5-ml Strep- Tactin
Pull-down assay
A total of 0.1 ml of 20 μM HpUreGFD str (WT/variants) was loaded onto 0.1-ml Strep-Tactin XT resins (IBA Lifesciences) pre-equilibrated with the Strep-binding buffer in a spin column. After washing the resins with 8 bed volumes of the assay buffer, 100 μl of lysate of HpUreAB (WT/variants) was loaded onto the resins. After incubation at 37°C for 15 min, the resins were washed with 13 bed volumes of the assay buffer. The last trace of buffer was removed by centrifugation at 100g for 10 s. Bound proteins were eluted by 0.25 ml of the Strep-elution buffer, analyzed by SDS-PAGE, and stained with Coomassie Blue (Fig. 4, D to F).
After sonication, cell lysate was loaded onto a HiTrap Q HP column (GE HealthCare) pre-equilibrated with buffer A. The column was washed with buffer A and eluted with a 100-ml linear gradient of 0 to 500 mM NaCl in buffer A. Fractions corresponded to~200 to 325 mM NaCl were collected and concentrated to OD 280 of~24 to 27. Two hundred fifty microliters of the sample was loaded onto a Superdex 200 Increase 10/300 column (GE HeathCare) pre-equilibrated with the assay buffer [20 mM Hepes, 200 mM NaCl, and 1 mM TCEP-HCl (pH 7.5)]. Fractions that correspond to the HpUreAB dodecamer (~10.5 ml) were collected, concentrated, and loaded onto a Superose 6 Increase 10/300 column (GE HeathCare) pre-equilibrated with the assay buffer. Purified HpUreAB dodecamer was eluted at~13.5 ml.
Urease activation assay
For the urease activation assay in Fig. 3 Fig. 4, 40 μM HpUreGFD str (WT/variants) were mixed with 10 μM HpUreAB (WT/variants) in the reaction buffer instead. NiSO 4 was omitted in the negative control. KHCO 3 (10 mM) was added to stimulate the GTP hydrolysis of HpUreG (17,18) and initiate the urease activation. The reaction mixture was incubated at 37°C for 30 min. Activation of urease was measured as described (15). For the urease activation assay in fig. S11, 40 μM Ni-bound or apo-HpUreG, (WT/variant) HpUreFD, and 10 μM HpUreAB were added to either side of a two-chamber dialyzer (Bioprobes Ltd.) separated by a dialysis membrane with a molecular weight cutoff of 6 to 8 kDa (Spectrum Labs). The buffer in both chambers contained 2 mM MgSO 4 , 1 mM GTP, 20 mM Hepes (pH 7.5), 200 mM NaCl, and 1 mM TCEP. After equilibration at 4°C for 16 hours, 10 mM KHCO 3 was added to both chambers to activate the GTP hydrolysis required for urease activation. The chambers were then incubated at 37°C for 1 hour, and the urease activity was measured as described (15). Urease activities were normalized to those measured for using WT proteins and were analyzed with one-way analysis of variance (ANOVA) followed by the Tukey post hoc test using the program Prism (Graph-Pad). In our hand, specific activity of activated urease ranged from 27 to 181 μmol min −1 mg −1 (table S3). In comparison, the specific activity of native urease purified from H. pylori was 1693 μmol min −1 mg −1 (37).
Supplementary Materials
This PDF file includes: Figs. S1 to S14 Tables S1 to S3 Legends for movies S1 and S2 Other Supplementary Material for this manuscript includes the following: Movies S1 and S2 | 7,579.6 | 2023-04-01T00:00:00.000 | [
"Biology"
] |
Implementation of a food traceability system utilizing the Internet of Things and blockchain technology to enhance safety measures within the food supply chain
. Food safety is a global concern, and a reliable Food Traceability System (FTS) is essential to address this issue. This paper introduces a novel FTS, FTS-IoT-BT, which uses IoT and Blockchain Technology to enhance safety protocols in the Food Supply Chain (FSC). The framework aims to document the condition of food items, verify data reliability, and restrict access to unsafe food. The FTS-IoT-BT provides a comprehensive overview of the highest and lowest durations associated with different operations, with a superior throughput of 96.7% when using ten IoT devices..
Introduction
Recently, there has been a notable increase in the frequency of food safety events, which has presented a significant risk to public health and has led to a gradual erosion of public confidence in the regional food industry [1].Nevertheless, in light of the swift progression of society, individuals are increasingly inclined toward the pursuit of expeditious financial gains.In search of increased profitability, numerous food companies disregard the wellbeing and safety of consumers.Nevertheless, it deceives consumers by presenting itself as an excellent, nutrient-rich, healthy food product, thereby ignoring the concept of integrity.Food quality is a matter of concern not only for consumers but also for other stakeholders.Food manufacturers also have a vested interest in understanding the distribution process of food and obtaining relevant information about it.
By employing the FTS, one can access a comprehensive range of information about food, spanning from its origin in production to its ultimate consumption [2].This system serves to guarantee the safety and dependability of food products.Furthermore, there is a keen interest among food producers to acquire knowledge regarding the dynamics of the food movement and its associated intricacies.When considering the choice of FTS, it is essential to consider their significant economic value and the crucial interests and requirements of the intended users.Individuals regard food as the utmost priority, and consumers have consistently embraced this preference.After identifying food quality and safety concerns, traceability can be employed to promptly determine the source of the issue and the specific responsible party, facilitating the recall of contaminated food.
Although the National Food Security Oversight Department places significant emphasis on the prevention and management of food quality and safety, overseeing and tracing food safety has become challenging due to the involvement of numerous stakeholders within the FSC [3].On the one hand, integrating cutting-edge technologies and management methods seeks to reestablish consumers' confidence in the food industry and fulfill their demands for comprehensive food information monitoring and management.On the other hand, this integration also aims to enhance the regulatory authorities' supervision capabilities and efficiency in overseeing relevant processes.The application of FTS is of utmost importance to enable the comprehensive tracking of food-related information throughout the entirety of the food production and distribution process.
Once food quality and safety issues are identified, traceability can quickly identify the supply chain point and the responsible party.This allows timely recall of quality-affected food products.This method has also been shown to ensure food safety [4].Unfortunately, food traceability in my country started late.Traceability data is highly centralized in the FTS.The primary FSC entities control and manage data exclusively.Thus, this arrangement raises data safety and reliability concerns.In addition, the FSC has an information imbalance among central entities.This unequal access to information makes it difficult to determine if data was intentionally tampered with during distribution.Innovative technologies and supervisory methods are being used to improve regulatory oversight.FTS must be used throughout the process to track food data [5].The client feedback method and regulatory agency food safety process are lacking.
Academics consider BT a major research area.In FTS, distributed ledger technology is efficient, cheap, secure, reliable, and trustworthy [6].Despite its resilience, BT lacks scalability.The BT can be inefficient and complicated.Data uniqueness has long been BT's main drawback.It benefits the FSC, financial systems, and others.Scalability is BT's biggest drawback, but not insurmountable.The blockchain is vulnerable.BT still struggles with scalability.BT items' immutability and permanence are limited and may change.Previous studies show it has technical advantages over degrees [7].The BT securely distributes and stores data.Malicious information tampering requires compromising over 52% of the BT nodes.This task is impractical for BT with many nodes.Thus, data integrity and security must be ensured to ensure reliability.BT protects personal data and helps organizations generate and exchange economic value at lower scales.
Implementing an FTS emerges as a prominent solution in light of the dynamic nature of the global FSC [18].This initiative utilizes the combined capabilities of the IoT and BT to enhance safety measures across the complex food production, distribution, and consumption network.In light of the increasing concerns surrounding food safety and authenticity, this initiative aims to establish a comprehensive and transparent FTS.By leveraging IoT devices to capture real-time data and employing BT for secure and decentralized record-keeping, the system offers unprecedented transparency into the FTS.This introduction establishes the context for examining the significant impact of advanced technologies on transforming food traceability.These technologies provide improved safety measures and cultivate consumer confidence in the reliability of the worldwide food distribution network [8].
Related works
The contemporary FSC encounters increasing difficulties, encompassing foodborne diseases and deceptive behaviors, demanding inventive approaches to guarantee safety and transparency.This literature review examines the utilization of an FTS that capitalizes on the collaborative benefits of the IoT and BT.The increasing consumer consciousness regarding the source and safety of food products is driving the adoption of advanced technologies that can potentiallytransform the food industry significantly.
The authors, Iftekhar and Cui (2021), put forward a proposition for creating an FTS based on BT.This system aims to guarantee food safety and safeguard consumers' well-being, particularly by addressing the potential hazards linked to the COVID-19 pandemic [9].BT is employed to establish an unalterable and easily observable record of the FSC.The system's output offers immediate traceability, presenting result values that showcase improved consumer safety and establishing supply chains free from COVID-19 contamination.The methodology described in reference [10] focuses on implementing a blockchain-based IoT system designed to ensure traceability in the food industry.This system incorporates a consensus mechanism that is seamlessly integrated within its framework.The implementation process entails incorporating IoT devices to gather realtime data.This integration is facilitated by utilizing BT, which guarantees the security and transparency of maintaining records.
The methodology proposed in reference [11] entails the development of an FTS-FSC utilizing BT.The implementation process involves the incorporation of BT to establish a traceability framework that is both secure and transparent.The resultant outcome is a conceptual framework for establishing traceability mechanisms within the intricate network of the FSC.The outcome of this endeavor encompasses enhanced levels of transparency and traceability.The advantages of this approach include improved security and increased trust.However, it is important to consider potential disadvantages, such as the requirement for widespread industry adoption of standardized practices.
The authors in reference [12] concentrate on developing an FTS by utilizing the IoT and big data.Integrating IoT devices and big data analytics facilitates a comprehensive traceability system.The resulting product is a traceability system designed to protect and ensure the well-being of the general population.The outcome values exhibit enhanced traceability and insights derived from data analysis.One of the notable benefits of this approach is the ability to conduct real-time monitoring and leverage data analytics.Feng H. et al. utilized BT to augment traceability within the agri-food sector.BT integrates to establish a traceability system that ensures both transparency and immutability [13].The output encompasses an analysis of various development methodologies, the advantages derived, and the obstacles encountered when implementing BT in agri-food traceability.The outcome values emphasize a heightened level of transparency and trust.
The methodology described in reference [14] centers on creating an FTS for agricultural products in Thailand, employing BT and the IoT.The implementation process involves the integration of BT with IoT devices to establish a comprehensive system for traceability.The resulting product is a traceability system specifically designed for the agricultural sector in Thailand.The outcome values exhibit enhanced levels of transparency and traceability.There are several advantages associated with this approach, including the improvement of product authenticity and the enhancement of consumer confidence.One possible drawback could be the requirement for extensive adoption within the industry.
The methodology described in reference [15] utilizes the integration of BT and IoT to establish a secure and unalterable data infrastructure that guarantees the availability of information about food safety.The output encompasses the provision of data availability that is resistant to tampering.The outcome values demonstrate enhanced data integrity and security.In their study, Balamurugan, S., Ayyasamy, A., and Joseph, K. S. (2021) employed traceability techniques driven by IoT blockchain to enhance safety measures within the FSC.The integration of IoT devices and BT is employed to improve the level of traceability [16].The resulting outcomes encompass enhanced safety protocols implemented The conducted literature survey uncovers a highly compelling and significant landscape that emerges from the convergence of the IoT and BT in implementing FTS.Integrating these technologies exhibits significant potential in augmenting safety protocols within the FSC.The studies that have been reviewed collectively emphasize the benefits of acquiring data in realtime, maintaining secure and transparent records, and implementing traceability frameworks that are resistant to tampering.The predominant outcomes that arise from efforts to enhance transparency, foster consumer trust, and address the risks associated with food safety issues and fraud are improved.
Implementation of an FTS utilizing the IoT and BT to enhance safety measures in FSC 3.1 System model
The integration of BT into IoT data has the potential to greatly simplify the organization of diverse and indeterminate devices and methods, encompassing transactions and interactions among these devices.Cryptographically-based IoT devices can function as intelligent agents within a blockchain network, facilitating automated and secure transactions.Given the limited resources within IoT networks and the inherent limitations of BT, every participant must maintain a precise replica of the blockchain to uphold its integrity and coherence.Driven by these conflicts, Fig. 1 illustrates the architecture of the IoT-based BT, which enhances the blockchain by serving as a hyperlink for storing data about all transactions within the IoT network.The projected architecture of the IoT comprises three distinct layers: local IoT, Blockchain network, and server.A contemporary BT communication system has been developed to transfer blocks within a hierarchical structure, specifically addressing security concerns within IoT networks.The primary objective of the BT is to securely facilitate the movement of nodes and effectively manage their extensive computational resources while ensuring the secure management of existing blocks within individual IoT networks.The overview of a conventional food processing system elucidates the fundamental techniques employed.The initial phase involves cultivating and harvesting primary food resources within a centralized facility, which serves as a hub for a wide array of food suppliers operating both domestically and internationally.Subsequently, the producers dispatch the shipments to central locations and wholesalers.Upon closer examination, it can be determined that the local distributor possesses a variety of wholesale and retail establishments that effectively cater to customers' demands.The food processing scenario is depicted in the Fig. 2.
Fig. 2.The food processing scenario
Faked food products can be brought into the market through multiple channels that food retailers and distributors commonly utilize.Wholesalers have the potential to acquire counterfeit food products from unlicensed vendors at a low cost, which can lead to various complications such as compromised product quality, misleading advertising, or the potential for becoming a commodity reseller or a wholesale distributor (see Fig. 2).The counterfeit food industry thrives at the detriment of consumer well-being.At the same time, the regulatory system inadequately addresses the challenges posed by this illegitimate market.The assessment of counterfeit food business speculation yields substantial profits, serving as a significant incentive for fraudulent activities.Furthermore, the absence of a reliable and comprehensive monitoring system contributes to the sustenance of this illicit industry.One significant issue pertains to the retailer's selling expired food to consumers without adequate information.Although the government has tried to implement preventive measures and monitor this activity, none of the proposed solutions have demonstrated effectiveness thus far.The issue of affordable food trade and distribution in wholesale and retail establishments is of considerable importance as it hinders consumers from accessing unjust or hazardous food products.
The proposed system will incorporate a singular QR code to facilitate the identification of food production and labeling across all retail containers, subsets, and containers, encompassing various categories such as fruits, vegetables, and packaged recipes.The proposed system will encompass the following components.The QR code contains information regarding the specific category of food items, the names and licenses of the producers, and the date of production, which are essential pieces of information to consider in an academic context.
The package contains information regarding its manufacturing details, expiration date, batch number, and other relevant information.The proposed system relies on QR coding as its primary component to prevent the infiltration of counterfeit food products into the FSC.The shipment of food products occurs at various points of dispersion but only after the nutritional content has been tested for validity.The implementation involved the utilization of anFTS-IoT-BT framework, as illustrated in Fig. 3.
Fig. 3.FTS-IoT-BT framework
This system incorporated a five-step approach to mitigate the occurrence and introduction of counterfeit goods into the market.
Step 1: Let us consider a scenario where a food processing unit log indicates the production of 70 containers of food items in a single batch.Given that each container is manufactured within a uniform production group, it is possible to identify them using a shared batch number.Additionally, each cap is designated with a specific numerical value representing the number of barrels.Similarly, each casket would possess distinctively , 050 (2024) BIO Web of Conferences MSNBAS2023 https://doi.org/10.1051/bioconf/2024820501616 82 labeled sub-packets, and so forth.The knowledge mentioned above is stored within the archive via a BT network.
Step 2: In step 2, the identical packaging products transport 70 containers to its affiliated material management entity.During this process phase, the logistic associate will assess the authenticity of the caps and subsequently approve the batch if they are deemed genuine.The present transfer will now proceed to revisit this matter.The blockchain database is a distributed ledger technology that enables the secure and transparent storage and management of digital records.As additional logistics team members are incorporated, an identical procedure of scrutinizing the ledger during every transaction has been implemented.This action would guarantee coherence in the entire flow of goods and services within a FSC.
Step 3: In step 3, once the validity of the shipment has been established, the food distribution will be received by the key distributors from the logistic associates.Certain key merchants are acquiring a growing number of food packages.Each subsequent transfer will also modify the handout.This movement represents the equal allocation of protective food volumes throughout the supply chain network.
Step 4: In Step 4, the primary distributors will distribute goods locally.The distributors allocate a suitable quantity of food items.The proposed measure will encompass the entirety of the market and its associated distribution channels.The details mentioned above are documented within the database constructed upon the blockchain.
Step 5: In Step 5, the customer procures the requisite items from a food retailer situated at the lower end of the supply chain.Additionally, the customer verifies the authenticity and integrity of the food packages by utilizing the QR code.In the absence of confirmed reliability, customers will decline to purchase food.The approach would be updated in the marketing strategy until consumers purchase the product.The food sold is currently completing its life cycle within the FSC.
Results and discussion
In this study, a series of experiments were conducted and subsequently analyzed, encompassing 50 iterations for each cryptographic foundation.The execution time in milliseconds, including the highest, lowest, and mean values, was measured for 50 iterations.The observed results are presented in Fig. 4. Several operations are executed denoted by the symbols ", , , , " representing the duration needed for a "bilinear connecting," a "flexible exponential growth," a "rectangular curve point (scalar) multiplication," a "elliptic curve point addition," and a "symmetric key encryption/decryption" respectively.IoT-BT provides a comprehensive overview of the highest and lowest durations (in milliseconds) associated with different operations.It is worth noting that the operation labeled"" demonstrates the longest duration of 7.21 milliseconds and the shortest duration of 3.76 milliseconds.This observation implies that the computational load associated with this specific operation is relatively higher.
On the other hand, the execution times of operations such as ", " ", " ", " "" are considerably lower, ranging from 0.219 ms to 0.001 ms.This suggests that these tasks are processed efficiently and quickly.The values collectively serve as indicators of the system's performance, wherein the preference is given to lower execution times due to their advantageous nature in real-time applications.The execution times of FTS-IoT-BT offer valuable insights into the system's efficiency, enabling the development of optimization strategies for time-sensitive processes.
The present study assesses the proposed scheme and visually presents the throughput across different nodes for implementing IoT in agriculture.The evaluation specifically focuses on the deployment of various nodes for monitoring purposes.The graphical representation is depicted in Fig. 5. [14], [15], and [16] demonstrate a consistent upward trend in throughput as the quantity of IoT devices escalates, culminating in peak values of 86%, 88%, and 90.2%, correspondingly.It is worth mentioning that the FTS-IoT-BT, as proposed, outperforms the systems mentioned in the literature by achieving a superior throughput in all tested scenarios.Specifically, it reaches a peak of 96.7% when employing ten IoT devices.The statement above implies that the proposed FTS-IoT-BT system exhibits enhanced efficacy and scalability, enabling it to support more IoT devices without compromising throughput.The values presented in the table highlight the system's capacity to effectively manage higher device workloads, which is a critical factor for applications operating in dynamic and data-intensive settings.The FTS-IoT-BT solution is proposed as a reliable and effective approach to address the increasing demands on throughput performance in IoT deployments [17].
Conclusion
This paper presents a new type of FTS, known as FTS-IoT-BT, that utilizes the Internet of Things (IoT) and Blockchain Technology (BT).The main aim of this framework is to enhance safety protocols within the FSC.This paper presents a counterargument to the belief that resolving safety, quality, and traceability concerns in food products can be efficiently achieved by adopting robust electronic food networks that depend on BT and the IoT.The annual impact of counterfeit food products on individuals is substantial in distribution and utilization.The current state of food items is recorded at specific time and location intervals to validate the credibility of data sources by employing IoT devices.The framework recognizes that Blockchain supplier ledger technology enables the transfer and storage of information at different points within the FSC, guaranteeing the accessibility, traceability, and reliability of the data.The prompt detection and subsequent implementation of access limitations to hazardous food will be readily observable at any specific location within the network.The FTS-IoT-BT comprehensively analyzes the
,Fig. 4 .
Fig. 4.Execution time for 50 iterations using FTS-IoT-BT Fig.4depicts the execution time for 50 iterations using FTS-IoT-BT.Fig.4comprehensively analyzes the system's performance across various operations.The FTS-IoT-BT provides a comprehensive overview of the highest and lowest durations (in milliseconds) associated with different operations.It is worth noting that the operation labeled"" demonstrates the longest duration of 7.21 milliseconds and the shortest duration of 3.76 milliseconds.This observation implies that the computational load associated with this specific operation is relatively higher.On the other hand, the execution times of operations such as ", " ", " ", " "" are considerably lower, ranging from 0.219 ms to 0.001 ms.This suggests that these tasks are processed efficiently and quickly.The values collectively serve as indicators of the system's performance, wherein the preference is given to lower execution times due to their advantageous nature in real-time applications.The execution times of FTS-IoT-BT offer valuable insights into the system's efficiency, enabling the development of optimization strategies for time-sensitive processes.The present study assesses the proposed scheme and visually presents the throughput across different nodes for implementing IoT in agriculture.The evaluation specifically focuses on the deployment of various nodes for monitoring purposes.The graphical representation is depicted in Fig.5.
,Fig. 5 .
Fig. 5.Comparison of throughput (%) for varying number of IoT devices Fig. 5 compares throughput (%) for varying numbers of IoT devices.The cited sources[14],[15], and[16] demonstrate a consistent upward trend in throughput as the quantity of IoT devices escalates, culminating in peak values of 86%, 88%, and 90.2%, correspondingly.It is worth mentioning that the FTS-IoT-BT, as proposed, outperforms the systems mentioned in the literature by achieving a superior throughput in all tested scenarios.Specifically, it reaches a peak of 96.7% when employing ten IoT devices.The statement above implies that the proposed FTS-IoT-BT system exhibits enhanced efficacy and scalability, enabling it to support more IoT devices without compromising throughput.The values presented in the table highlight the system's capacity to effectively manage higher device workloads, which is a critical factor for applications operating in dynamic and data-intensive settings.The FTS-IoT-BT solution is proposed as a reliable and effective approach to address the increasing demands on throughput performance in IoT deployments[17]. | 4,846.8 | 2024-01-01T00:00:00.000 | [
"Environmental Science",
"Agricultural and Food Sciences",
"Computer Science"
] |
A Real-Time Emergency Inspection Scheduling Tool Following a Seismic Event
Emergency infrastructure inspections are of the essence after a seismic event as a carefully planned inspection in the first and most critical hours can reduce the effects of such an event. Metaheuristics and more specifically nature inspired algorithms have been used in many hard combinatorial engineering problems with significant success. The success of such algorithms has attracted the interest of many researchers leading to an increased interest regarding metaheuristics. In the present literature many new and sophisticated algorithms have been proposed with interesting performance characteristics. On the other hand, up to date developments in the field of computer hardware have also had a significant influence on algorithm design. The increased computational abilities that are available to researchers through parallel programming have opened new horizons in architecture of algorithms. In this work, a methodology for real-time planning of emergency inspections of urban areas is presented. This methodology is based on two nature inspired algorithms, Harmony Search Algorithm (HS) and Ant Colony Optimization (ACO). HS is used for dividing the area into smaller blocks while ACO is used for defining optimal routes inside each created block. The proposed approach is evaluated in an actual city in Greece, Thessaloniki.
Introduction
Following a catastrophic seismic event, the first hours are critical as far as rescue operations, evacuation procedures and infrastructure repair procedures are considered. As such events can lead to severe short and long-term economic losses, the efficiency of the civil services has to be of a great level. In case of a hazardous seismic events civil services are challenged by damaged infrastructure networks, communities in a panicking state-of-mind and system vulnerabilities. Post-disaster management is a multi-level procedure which includes a thorough planning of system response, as well as tactical and operational management. Designing of an optimized inspection plan is definitely a very complex NP-hard combinatorial problem. Metaheuristics present a very successful behavior when dealing with such problems as the one previously described.
Latest advances in metaheuristic algorithms and specifically nature inspired algorithms present automated trial-and-error techniques mostly based on the survivalof-the-fittest principal as species evolution does. It also important to point-out that the advances in the available hardware and parallel programming have also offered a significant assist in the implementation of metaheuristics. As performance features of single cores of Central Processing Units (CPUs) have not improved significantly in the last years, CPU manufactures turned to producing multi-core CPUs to increase performance through parallel processing. Graphic processing units (GPUs) also consist of processing cores which in the recent past have been used for mathematical programming. Modern hi-end CPUs consist of 18 cores with clock rates up to 3.6 GHz. On the other hand, high-end GPUs have more than 2.500 cores with clock rates up to 875 MHz. These two different types of processor, each with advantages and disadvantages, offer the ability to efficiently handle problems that in the past were too demanding to be solved.
In this work, two algorithms are used for designing the emergency inspection plan of urban areas. The first algorithm, Improved Harmony Search (IHS), was presented by Kallioras et al. [1] in 2014 while the original version of the algorithm was introduced by Geem et al. [2] in 2001. The second algorithm used in this work is Ant Colony Optimization (ACO) which was presented by Dorigo and Stützle [3] in 2004. The problem of finding the optimal inspection plan of urban structures following a seismic event can be divided into two sub-problems. Firstly, the urban area must be divided into smaller areas, one for each available inspection crew (districting sub-problem), while for each defined area, an optimal route is designed inside its premises (routing sub-problem). The districting sub-problem is dealt with the use of IHS as presented later on in this work while the routing sub-problem is solved with the use of ACO. The biggest challenge in designing this formulation apart from the quality of proposed results, is the necessary computational work-time for the algorithms to locate the solution. In order for this implementation to be able to act as a real-time solution, executional time is rather critical. With the use of GPU programming, computational times are reduced to minimum as can be seen in this work.
Post-Earthquake Response Mechanism
Inspection time required for assessment of possible damages in structure and infrastructure systems is critical for minimizing cost of effects of a possible hazardous event. Every scheduled event such as support, recovery and rehabilitation can only be properly scheduled after the inspection of infrastructures is completed. This proves the importance of a carefully designed inspection plan towards dealing with a seismic event in an urban area. Generally speaking, we can divide the disaster response procedure [4] into four basic steps: • Mitigation [5], which includes assessment of seismic hazards, probabilistic damage projection [6,7], and integration of emergency processes through decision support systems [8,9]. • Preparedness, which includes preparation for dealing with seismic events and evacuation procedures [10][11][12][13][14][15]. • Response, which basically deals with creating a plan for response-relief operations [16][17][18][19] and post-disaster infrastructure performance evaluation [20][21][22][23]. • Recovery which deals with relief performance assessment [24], protection of infrastructure element [25], and repair-fund allocation [26,27]. Emergency response has attracted the interest of many researchers and significant work has been done in this field in the past years. Despite that, after reviewing up-to-date literature, it can be seen that very little work is done considering the proper planning of inspection crews following a seismic event though it is a very important procedure. This can be explained due to the fact that necessary data is difficult to be collected, solving the associated mathematical problem is very demanding and also, formulating an algorithm that can handle such data and calculations in real-time procedure is exceptionally difficult. In this work, we present the mathematical formulation and solution of such a problem while we also present the parallel formulation of a software that solves such problems in real-time.
IHS and ACO Metaheuristic Algorithms
In this work, two metaheuristic algorithms are used for real-time inspection scheduling. As discussed previously, IHS is applied on the districting problem while ACO is used for solving the routing problem. IHS algorithm consists of four basic actions similar to HS. These actions are [1]: • Parameter Initialization In this step, the parameters of the algorithm and the problem are defined: n is the number of decision variables, n are the lower and upper bounds of each ith decision variable, HMS is the size of the harmony memory (HM) of solution vectors and HMCR is the rate for considering the harmony memory.
• Harmony Memory Initialization
In this step, the initialization of the algorithm's memory is executed where si j is the ith element of the jth solution vector s • New Harmony Improvisation In this step, a new harmony-solution vector is generated with the use of the random selection procedure or memory consideration procedure. In random selection, each member of the solution vector is randomly generated with probability 1-HMCR (0≤ HMCR≤1). In memory consideration, each variable of the solution vector is randomly chosen from HM with probability HMCR 1 2 [ , ] with probability 1 HMCR { , ,..., } with probability HMCR where s New is the new generated solution vector
• Harmony Memory Update
In this step, the generated solution vector is compared to the worst one stored in HM according to the objective function of each problem. If it is better, it replaces it. If not, it is dropped. This procedures is repeated as many times as the population of the maximum function evaluations set by the user. Once the maximum function evaluations are completed, the best solution stored in HM is the solution of the problem. ACO algorithm [3] was inspired by the food searching pattern of ants in nature. As ants search inside an area for possible food resources and the shortest route between the nest and the food position, artificial ants search inside a weighted graph for the optimal path. The search space consists of nodes which represent the places that need to be visited by the ants. At first, a colony of m ants are positioned randomly inside the search space. At each repetitive step of the algorithm, ant k decides which of the nodes it will visit next with the use of a random proportional rule. In order to avoid visiting nodes which are already visited by the same ant, ant k currently positioned at node i, maintains a memory M k containing all nodes previously visited. This memory defines the feasible neighbourhood N k i which contains the nodes that have not been visited by ant k. Ant k, positioned at node i, chooses to move to node j with a probability defined as follows: where τi,j is the amount of pheromone between nodes i and j, α is a parameter controlling the pheromone's influence, ηi,j is a heuristic information denoting the desirability of the path between nodes i and j where: while β is a parameter controlling the influence of the path's desirability ηi,j. The probability of choosing a particular connection i,j increases with the value of the pheromone trail τi,j and the heuristic information value ηi,j. When all ants have completed their routes, the pheromone concentration for each connection between i and j nodes, is updated for the next iteration t+1 as follows: where ρ is the rate of pheromone evaporation, A is the set of paths (edges or connections) that fully connects the set of nodes and Δτ k i,j(t) is the amount of pheromone ant k has deposited on connections it has visited during its tour T k and it is given by: The main concept behind the ACO algorithm is that connections chosen by many ants and have a shorter length, receive bigger amounts of pheromone due to more deposition and less evaporation. This ensures that these paths are more likely to be chosen by other ants in future iterations of the algorithm resulting to the path with the minimum distance.
Optimal Inspection Problem Formulation
As described previously, the optimal inspection problem is divided into the districting and the routing sub-problem. In the first sub-problem, the urban domain is divided into a number of areas of responsibility equal to the population of the available inspection crews. In the second sub problem, for each area of responsibility, the optimal route is designed. Urban areas consist of building blocks and road structures. Each building block is defined by the coordinates of its edges, its area, use of land, building factor and maximum allowed height of constructions. These characteristics provide us with the demand evaluation of each building block. Neighbouring city blocks with similar characteristics regarding use of land, building factor and allowed height can be joined and considered as one in order to reduce the dimensionality of the problem without degrading the quality of the solution. The problem is formulated as a nonlinear optimization problem and the objective function is: where NIC is the number of the available inspection crews, nSB (i) is the number of structural (or building) blocks assigned to the i th inspection crew, d(SBk,Ci) is the distance between the SBk building block and Ci is the starting block of the crew responsible for the i th group of structural blocks, Uin is the inspection speed of the inspection crews, and Utr is the travelling speed of the inspection crews. D(k) is inspection "demand" for the k th building block defined as the product of the building block total area A(k) times the building factor fB(k) (i.e. the structured percentage of the area) and σIC is the value of the standard deviation of the working hours of all inspection crews.. Thus, the districting problem is formulated as a discrete unconstrained nonlinear optimization problem where the objective is to define which inspection crew is responsible for which building block. The node of each building block is placed in its geometrical centre while all Euclidean distances are calculated with respect to the centre of weight of the building block.
According to the described formulation, every available inspection crew is assigned to a specific district. In the general form of the districting problem, the population of the design variables is equal to the number of building blocks. This leads to a problem with increased complexity. In order to avoid this complexity, the authors suggest a modification to the problem formulation by setting the starting point of the inspection crews as the design variables. This reduces the complexity from nSB to NIC. This is accomplished with the use of a four-step procedure where at first • The starting positions of the inspection crews are defined • Areas of responsibility are defined by assigning each building block to the closest inspection crew • The centre of gravity of each area of responsibility is found and this becomes the starting point of each inspection crew and finally, • New areas of responsibility are created around the new starting positions. The optimal routing (also called scheduling) problem of the inspection crews is formulated as a travelling salesman problem (TSP). TSP is represented by a weighted graph G=(N,A), where N is the set of nodes and A is the set of connections/paths that connect all N nodes. A cost function is assigned to paths between two nodes (i and j), represented by the distance between the two nodes di,j (ij). A solution of the TSP is a permutation p= [p (1), …, p(N)] T of the node indices [1, …, N], as every node must not appear more than once in a solution. The solution that minimizes the total length L(p) given by: is noted as the optimal one. The corresponding scheduling problem is defined as follows: (5) where d (SBk, SBk+1) is the distance between the k th and k th +1 building blocks. The desired result is to locate the shortest route between all structural blocks assigned to each inspection crew.
Numerical Tests
The development and testing of the proposed real-time application is executed on pragmatic data of a city. The chosen city Thessaloniki is the second biggest city in Greece and has in the past suffered from a seismic event. On the 20 th of June of 1978 an earthquake with a magnitude of 6.5 R took place in Thessaloniki and caused the death of 45 people while 9.480 buildings suffered non-repairable damages. The city's area is equal to 154.205.128 m 2 while it consists of 471 joint structural blocks as it can be seen in Figure 1. A hypothetical seismic event is applied in the urban structure of Thessaloniki and an inspection scenario is examined.
According to the scenario, there are 40 inspection crews available for the required task. These crews work in 8-hours shifts per day resulting to having 20 crews working in the city at any moment for 16 hours per day. As we were expecting small differences in the necessary working hours of each crew, we set the time needed for the whole inspection to be completed equal to the maximum inspection time of the various crews. The parameter values for IHS and ACO algorithms are chosen with respect to a previous work of the authors which included a thorough sensitivity analysis [29 -TRC]. In detail, HMS is chosen equal to 12 and HMCR is equal to 0.105693 regarding IHS. Regarding ACO, α = 1.0, β = 2.0, ρ = 0.10, m = 65 and Q = 0.306. The average inspection speed of the crews, Uin is set equal to 50 square meters per minute while the average traveling speed of the crews, Utr is set equal to 10km/hour. For IHS the maximum iterations for are equal to 200,000 while for ACO they are equal to 200. The hours needed for the inspection crews to complete the task can be seen in Table 1. In particular, the maximum time which also corresponds to the end of the inspection is equal to 5683.006 hours. The average working hour of the ten crews is equal to 5142.285 hours while the maximum variation from the average time needed for the crews to finish the inspection crew is equal to 11.11%. The value of the objective function is equal to 18650882.911. The best and worst value of the objective function stored in IHS memory per each of the 200,000 iterations can be seen in Figure 2. In Figure 3, the results of IHS applied on the districting problem can be seen. Each colour represents an inspection crew and an area of responsibility. In Figure 4 the routing results of ACO are presented while in Figure 5, a closer look on 5 of the designed routes are visible.
Fig. 2. Best and worst objective function value per IHS iteration
The minimization of the computational effort and time needed to solve this problem is accomplished with the use of GPU programming applied on the ACO algorithm and parallelization of the IHS algorithm. In order to evaluate the gain of such an implementation, the same problem is solved twice, first with the suggested techniques and then without them. Both implementations are solved ten times and the average execution times are presented. The GPU version is completed after T1 = 225.65 sec while the unmodified version demands T2 = 1242.87 sec. The acceleration factor is equal to T2 / T1 = 5.508 which is rather significant. The acceleration factor increases as the dimensionality of the problem increases as presented by Kallioras et al [29] in a previous work. It is also important to point out that a more up to date GPU processing unit than the one used here would have given even better acceleration results. None the less, solving such a problem in less than 4 minutes does satisfy the criterion as a real-time solver. The experiments of the current study were performed using the Intel i7 3610QM 2.30Ghz processor with 4 cores paired with 12GB memory and the NVIDIA GeForce GTX 660M with 384 stream processors (CUDA cores) with 2GB of memory and supports CUDA compute capability 3.0.
Conclusions
In this work, a real-time emergency inspection scheduling application following a seismic event is presented. This application is based on two nature-inspired metaheuristic algorithms, IHS and ACO. The first is a variation of the well-known and established Harmony Search algorithm while the second one is the also established Ant Colony Optimization algorithm. Both algorithms have proven their ability to handle hard combinatorial problems in literature. Both algorithms have also proven their robustness when applied on such problems. Through this work, a parallel GPU implementation of the algorithm is presented in order to minimize the computational time needed to solve such problems. Computational time is very important when we are dealing with situations where solutions need to be presented in realtime. Dealing with a seismic event is such a situation as a delay in scheduling support and relief efforts can cost dramatically. Our approach is implemented on the urban environment of the second largest city in Greece, Thessaloniki, who has suffered from a seismic event in the recent past. With the assumption that 20 inspection crews will be available and working on 8-hour shifts, we have 10 crews inspecting the urban structures for 16 hours per day. Based on these estimations, along with the inspection and traveling speed of crews, the necessary time for the inspection of the whole area was equal to 5683.006 hours while the variation between crews is also very small.
By implementing the algorithms with the use of GPU programming, a significant acceleration factor equal to 5.508 is achieved which is very important in real-time applications. It is also important to point out that the acceleration factor increases with the increase of the dimensionality of the problem. The proposed formulation has the ability to robustly solve the described problem in less than four minutes. In the future, it might be useful to apply modifications on the algorithm which will offer the end-user the ability to choose before executing the algorithm a series of joint city blocks that need to be inspected first. It can also be modifies to resolve the problem while city blocks with increased level of damage are set by the user according to emergency calls. Another modification that is worth investigating is giving the algorithm the ability to un-assign city blocks from crews with large inspection crews and assigning them to those who finish first. | 4,769.8 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Structural, morphological and magnetic properties of hexaferrite BaCo2Fe16O27 nanoparticles and their efficient lead removal from water
W-hexaferrite BaCo2Fe16O27 was prepared using the citrate nitrate combustion method. The sample was characterized using XRD, SEM, EDX and elemental mapping. XRD confirmed that the sample was synthesized in a single phase hexagonal structure with an average crystallite size 37.39 nm. SEM images of the sample show a spongy morphology with the agglomerated grain owing to dipole interaction between the crystallites. The magnetic properties of BaCo2Fe16O27 were studied using H-M hysteresis loop and the DC magnetic susceptibility. The sample has a ferrimagnetic behavior with saturation magnetization 64.133 emu/g. The magnetic properties of BaCo2Fe16O27 are originated from the Fe3+–O–Fe3+ superexchange. The synthesized sample is used as an adsorbent to remove the heavy metal Pb2+ from water. BaCo2Fe16O27 has Pb2+ removal efficiency 99% and 28% at pH 8 and 7 respectively. The Langmuir and Freundlich isotherms were used to analyze the experimental data. The Freundlich adsorption isotherm fitted the experimental data well.
Introduction
The hexaferrites have six main types (M-, U-, W-, X-, Y-and Z-types) according to the chemical formulas and stacking sequences of these building blocks. W-type hexaferrites have a general formula AMe 2 Fe 16 O 27 , where A is an alkaline earth ion (i.e., Ba and Sr) and Me is a divalent transition metal ion such as Mg, Zn, Co, Ni, Fe. They are characterized by ferrimagnetic behavior with high Curie temperatures. The magnetic properties of Ba(Sr)M 2+ Fe 16 O 27 vary with the type divalent cation. The hexagonal ferrites are used in a wide range of devices, such as magnetic recording media, highperformance microwave-absorbing, permanent magnets, shielding materials in electronic devices operating at GHz frequencies and bubble memories and microwave devices [1][2][3][4].
To unravel the verities of application, various divalent and trivalent cations are frequently inserted in the sub lattices of W-type barium hexagonal ferrites (BHF). Several methods of preparation were used to prepare BHF compounds with different nanocrystal sizes, shapes, and characteristics [5]. The synthesize routes such as the conventional solid-state reaction method [6], the sol-gel method [7], the microemulsion route [8], the ball milling [9], auto-combustion [10] and co-precipitation method [11] were used for preparation nanoparticles of BHF systems. The citrate combustion method is a fast, simple and easy technique for the synthesis of a variety of advanced nanomaterials and ceramics [12]. In this method, the thermal redox reaction occurs between an oxidant and a fuel (as citric acid) [13]. The monophasic nanopowders were prepared with a homogeneous microstructure in short reaction times using a citrate combustion method [14,15].
The structural, morphological and magnetic properties of Ba W-type hexaferrite (BHFs) were strongly influenced by the divalent cations and rare-earth substitution. The enhancement of zinc in Ba 1 Cu 2-x Zn x Fe 16 O 27 (x = 0.0 and 0.4) has been studied by R. Sagayaraj [5]. The pure and doped samples have a semiconductor like behavior.
The SEM images reveal that the Ba hexaferrites gained the hexagonal structure with spongy morphology. Ba 1 Cu 2-x Zn x Fe 16 O 27 nanoparticles may be utilized for treating bacterial infection in the clinical sector. Kai Huang et al. [16] studied the effect of Ca ions on the structure and magnetic properties of BHFs. The samples Ba 1-x Ca x Co 2 Fe 16 O 27 (x = 0, 0.1, 0.3, 0.4 and 0.5) were prepared using a sol-gel method. The coercive field and the saturation magnetization were increased by increasing the amount of Ca substitution. The Ca 2+ doped W-type hexaferrite improved microwave absorbency. M.A. Ahmed et al. [17] studied the effect of the influence of rare-earth ions on the magnetic properties of barium hexaferrite Ba 0.95 R 0.05 Mg 0.5 Zn 0.5 CoFe 16 O 27 (R = Y, Er, Ho, Sm, Nd, Gd, and Ce ions). The curie temperature (T C ) and effective magnetic moment increased for the Sm doped sample. Jin Tang et al. [18] were prepared Ba 1−x La x Fe 2+ 2 Fe 3+ 16 O 27 using the standard ceramic method. The values of the lattice constant c decreased with the increase of the doped La 3+ in the samples. There is a monotonic dependence of the coercivity (H c ) and the magnetic anisotropy field (H a ) on the La 3+ amount.
Heavy metals in aqueous solutions, such as Cr(VI), Pb(II), and Cd(II), are poisonous even in low amounts and have caused serious health effects on humans. As a result, it is critical to remove these heavy metals from the aqueous environment to protect biodiversity, hydrosphere ecosystems, and humans. To remove these harmful heavy metals from wastewater, several techniques such as chemical precipitation, electrolytic separation, membrane separation, ion exchange, and adsorption have been used [19].
For the removal of heavy metals, the adsorption technique has been widely used. Nanomaterials have been considered an excellent adsorbent for the removal of heavy metal ions from wastewater [20][21][22]. Nanomaterials were used for removal of heavy metals from water using adsorption techniques based on the physical interaction between metal ions and nanomaterials [23].
In this present work, the author synthesized BaCo 2 Fe 16 O 27 nanoparticles using the citrate combustion method as a fast, simple and easy technique. The aim of this study is understanding the crystallite structure, morphology and magnetic properties of W-type hexaferrites, additionally, studying the heavy metal removal efficiency using BaCo 2 Fe 16 O 27 nanoparticles. To identify the best conditions for heavy metal Pb 2+ ion removal from water, the removal efficiency of Pb 2+ ion was studied as a function of contact time and pH.
Preparation of nanoparticles
The hexaferrite BaCo 2 Fe 16 O 27 sample was prepared by citrate nitrate combustion method as illustrated in Fig. 1. The metal nitrates (purity 99.9%, Sigma-Aldrich) were mixed in stoichiometric ratios.
Measurements
The X-ray diffraction (XRD) was carried out using X-ray diffraction (XRD, Bruker advance D8 diffractometer, λ = 1.5418 Å). The sample was characterized by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) to study the surface morphology and the chemical structure using OXFORD INCA PentaFETX3-England. The magnetic properties of the hexaferrite BaCo 2 Fe 16 O 27 sample were assessed using two techniques: the first is the H-M hysteresis loop using the vibrating sample magnetometer (VSM; 9600-1 LDJ, USA). While the other is the measurement of the DC magnetic susceptibility as a function of absolute temperature using the Faraday method [24].
Effect of pH value
The heavy metal Pb 2+ ion removal efficiency of BaCo 2 Fe 16 O 27 sample was studied. Pb 2+ ions solution of concentration (50 ppm) was prepared as an initial concentration. 0.02 g of BaCo 2 Fe 16 O 27 nanoparticles was added to five beakers (250-mL) containing the above solution. The pH values of the solutions were adjusted from 3 to 8 and they were stirred using an electric shaker (ORBITAL SHAKER SO1) at 200 rpm for 1 h. A 0.2-m syringe filter was used to filter 10 ml of the supernatant solutions at regular intervals. Inductively coupled plasma (ICP) spectrometry (Prodigy7) was used to determine the concentration of heavy metals in the filtrate.
Effect of contact time
The 50 ppm Pb 2+ solution was pipetted into a beaker containing 0.10 g of BaCo 2 Fe 16 O 27 , with the pH value adjusted to its optimal value. After varying contact times, the concentration of Pb 2+ in the solution was determined. The following equations were used to calculate the metal ion removal efficiency ( ) and the equilibrium adsorption capacity (q) [25]: where C i and C e are the initial and final concentrations (mg/L) of metal ion solution, respectively. While m is the mass of adsorbent and V is the volume of Pb 2+ solution. Figure 2 illustrates the XRD pattern of BaCo 2 Fe 16 O 27 nanoparticles. The sample has a single phase hexagonal structure with the space group P6 3 / mmc [5]. The data were indexed with ICDD card number 019-0098. The average crystallite size was calculated using the well-known Debye-Scherer equation formula [26] where D is the average crystallite size, λ is the wavelength of X-ray radiation, θ is the Bragg angle, and β is the full-width at half-maximum intensity of the powder pattern peak. The average crystallite size is 37.39 nm. The lattice parameters
Results and discussion
were calculated based on the hexagonal symmetry according to Eq. (4). The value of theoretical density was calculated from Eq. (5) and reported in Table 1 where Z = 2 is the number of molecules per unit cell, N is Avogadro's number, M is the molecular weight and V is the unit cell volume [27].
The crystal axis ratio c/a is normally 4.0 for W type Ba hexaferrites. While the value of c/a ratio is 4.6733 for the investigated sample, owing to the migration of divalent cations in the voids causes the John-Teller effect [5].
The morphology and the grain size of the investigated sample were studied using SEM [28]. Fig. 3 illustrates SEM micrographs of BaCo 2 Fe 16 O 27 nanoparticles. SEM images show a spongy morphology with the agglomerated grains. The average grain size of the sample is 73.7 nm. A reason for agglomeration is dipole interaction between the crystallites [29]. The grains have an irregular distribution with a hexagonal shape, which is the basic crystal cell of W-type ferrites. clearly illustrate a small variation between the experimental and theoretical values due to oxygen deficiency, which can be advantageous for heavy metal removal from water. Figure 5 illustrates the elemental mapping of BaCo 2 Fe 16 O 27 sample. The elements barium, cobalt, iron and oxygen are present in a homogenous distribution.
The magnetic properties of the investigated sample were studied using a vibrating sample magnetometer (VSM). Table 2. The saturation magnetization (M s ) was determined also by the law of approach to saturation (LAS) [30,31]: where M s is the saturation magnetization of the domains per unit volume, A is a constant associated with microstress, B is a constant representing the magneto-crystalline anisotropy contribution, and χH is the forced magnetization term. Figure 6 (b) illustrates the plotting of M versus 1/H 2 in the high field range given a straight line. The saturation magnetization M s of BaCo 2 Fe 16 O 27 nanoparticle can also be detected by extrapolating the plot of magnetization versus 1/H 2 to approach zero [32][33][34]. In this method, the M s value is 67.28 emu/g. The obtained value is very comparable to the experimental value, signifying that an applied field of ± 20 kOe is appropriate to saturate the investigated sample. The factors affected on the magnetization of W-type hexaferrite are the composition of ferrite, the distance of the Fe-O bond and the superexchange interaction through Fe 3+ -O-Fe 3+ [35]. Moreover, the coercivity of W-type hexaferrite is affected by many factors such as magneto-crystalline anisotropy, shape anisotropy, and saturation magnetization [36]. The magneto-crystalline anisotropy can be determined by the Stoner-Wohlfarth equation as follows [37]: where K is the magneto-crystalline anisotropy constant, M s is the saturation magnetization and H c is the coercive field. The saturation magnetization and the net magnetic moment of the sample depend on the presence of the magnetic ions such as Co 2+ (3.7 μB) and Fe 3+ (5 μB) [38].
The squareness ratio R of the remanence to the saturation magnetization (M r /M s ) indicates the domain structure of the investigated sample. The squareness ratio R value = 0.5 is indicative of a single domain and the lower value is associated with a multidomain structure and the particles interact by magneto-static interactions [39]. In the present work, the sample BaCo 2 Fe 16 O 27 has R > 0.5, indicating the exchange coupled interaction between the domains. Figure 7 illustrates the dependence of the molar magnetic susceptibility on absolute temperature as a function of the magnetic field intensities. As shown in the figure, three distinct regions were obtained. With increasing temperature, χ M increases steadily in the first region, rapidly increased in the second region, and decreases rapidly in the third region.
A deeper examination of the first region reveals the sample to be pure ferrimagnetic material in which the thermal energy was insufficient to disturb the aligned moments of the spins. In the second region, the thermal energy is enough to make the dipoles freely align in the direction of the applied field. In the last region, the sample transfers from ferrimagnetic to paramagnetic behavior after the Curie temperature where χ M decreases drastically. Figure 8 shows the relation between the reciprocal of magnetic susceptibility χ M −1 and the absolute temperature at different magnetic field intensities in the paramagnetic region. The magnetic parameters, such as the Curie constant (C) and the Curie-Weiss constant (θ), were calculated from the extrapolation of the linear part χ M −1 in the paramagnetic region. The Curie constant (C) equal to the reciprocal of the slope of a straight line in the paramagnetic region. The values of effective magnetic moments (μ eff ) were calculated from the following relationship where C is the Curie constant. These magnetic parameters were reported in Table 3. The data obey the well-known Curie-Weiss law [40].
where χ M is the molar magnetic susceptibility, T is the absolute temperature and θ is the Curie-Weiss constant. The magnetic properties of the sample BaCo 2 Fe 16 O 27 are originated from the Fe 3+ -O-Fe 3+ superexchange interactions, which are mutually antiparallel, and also the Fe 3+ -Fe 3+ direct exchange interactions. Table 4 illustrates the comparative study between the present results and those reported earlier. The author noticed that my sample is the largest in the remanence magnetization and the coercive field, compared with the results obtained by other authors [16,[41][42][43][44]. The saturation magnetization (M s ) of the present sample is smaller than the sample obtained by Mohammad K. Dmour [42]. The variation in the values of M s , M r and H c for the samples is due to the different preparation methods and ionic radii.
The investigated sample was prepared in nano scale (crystallite size = 37.39 nm) with a spongy morphology so the surface to volume ratio is large, which increases the number of active sites to trap Pb 2+ ions and increases the removal efficiency. One of the main advantages of using Baco 2 Fe 16 O 27 for Pb 2+ adsorption is the easy separation from the solution using an external magnetic field due to its large magnetization.
The heavy metal Pb 2+ ion removal from the wastewater was studied with different parameters such as pH value and contact time. Figure 9 illustrates the effect of pH solution on the heavy metal adsorption process. The adsorption of Pb 2+ ion increased by increasing the pH value. It is clear that at lower values of pH, the adsorption of heavy metal ions is low. This is due to competition between H + and Pb 2+ on the active sites of adsorbent [45]. At pH = 7, the amount of H + decreases in the solution and Pb 2+ can be easily adsorbed on the active sites. Otherwise, at pH = 8, the solution contains oH − . Consequently, the heavy metal Pb 2+ can be precipitated as lead hydroxide [46]. The precipitation of Pb 2+ at pH 8 is not only a result of the adsorption of Pb 2+ on BaCo 2 Fe 16 O 27 , but also the formation of lead hydroxide. So, the optimum pH value is 7.
The effect of contact time on the Pb 2+ ion efficiency illustrates in Fig. 10 over a range min. The adsorption of heavy metal Pb 2+ increases by increasing the contact time. At the beginning of adsorption, a large number of active sites are available [47]. Finally, the optimum conditions for Pb 2+ The mechanism of Pb 2+ adsorption process can be studied using the adsorption isotherm. In the present work, two isotherm models that have been studied are the Langmuir and Freundlich models.
The multilayer adsorption on a heterogeneous surface can be described by Freundlich isotherm, which is commonly used to describe heavy metal adsorption on various adsorbents. The empirical model was proven to be consistent with a heterogeneous surface's exponential distribution of active centers. The relation between the amount of solute adsorbed (q e ) and the equilibrium concentration of solute in solution (C e ) is given by the following equation: This equation can be linearized to give the following expression: Where, K f is a constant for the Freundlich system, depends on the quantity of metal ion adsorbed onto adsorbent at an equilibrium concentration. Figure 11 illustrates the dependence of ln q e on ln C e . The values of K f and 1/n are calculated from the intercept and slope of the best fit line in Fig. 11. The Langmuir isotherm model assumes that maximal adsorption occurs in a saturated monolayer of adsorbate molecules on the adsorbent surface. Many ground water effluent treatment procedures employ the Langmuir adsorption isotherm, which has also been used to explain the adsorption of heavy metals by various adsorbents. The Langmuir isotherm is considered that once a metal ion occupies an active site, no further adsorption occurs, and the adsorbed layer is unimolecular. The following equation describes the Langmuir isotherm model: where q m is the adsorption capacity at maximum adsorption. Figure 12 shows the relation between (C e /q e ) and (C e ) for heavy metal Pb 2+ to determine the Langmuir constants.
Conclusion
The barium hexaferrites was synthesized using citrate nitrate combustion method. XRD revealed that the sample BaCo 2 Fe 16 O 27 was crystallized in a single phase hexagonal structure with space group P6 3 /mmc. The average crystallite size is 37.39 nm. The morphology and the grain size of the investigated sample were studied using SEM. The sample has a spongy morphology with the agglomerated grains. The average grain size of the sample is 73.7 nm. The magnetic properties of BaCo 2 Fe 16 O 27 were studied using two techniques: H-M hysteresis loop and the DC magnetic susceptibility. The sample has antiferromagnetic properties. The values of the saturation magnetization (M s ) and remanence magnetization (M r ) are 64.133 and 33.099 emu.g −1 respectively. The squareness ratio (M r /M s ) is greater than 0.5, indicating the exchange coupled interaction between the magnetic domains. The heavy metal Pb 2+ ion removal from the wastewater was studied with different parameters such as pH value and contact time. BaCo 2 Fe 16 O 27 has Pb 2+ removal efficiency 99% and 28% at pH 8 and 7 respectively. The Langmuir and Freundlich isotherms were used to analyze the experimental data. The Freundlich adsorption isotherm fitted the experimental data well.
Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Conflict of interest
The author declares that he has no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 4,657.8 | 2022-11-22T00:00:00.000 | [
"Materials Science",
"Environmental Science",
"Chemistry"
] |
Theory and application of Fermi pseudo-potential in one dimension
.
I. INTRODUCTION
There are several interrelated motivations for the present investigation.These are discussed in the following.
It was realized several years ago that there are significant differences between scattering in one channel and scattering in two or more coupled channels [1].For this reason, it may be useful to gain some experience in dealing with coupled channels in general.
The first question is: What is the simplest scattering problem in the case of coupled channels?Once this simplest problem is understood, it is reasonable to expect that many of its features hold also for more general situations.Clearly, for this simplest problem, the number of channels should be chosen to be the smallest, namely two, and the number of spatial dimensions should also be chosen this way, namely one.Thus the scattering problem under consideration, in the time-independent case, deals with the coupled Schrödinger equations where the 2 × 2 matrix potential is such that it cannot be diagonalized simultaneously for all x.
What is the simplest possible choice for this V (x)?In the case of one channel, the simplest potential is the one that is proportional to the Dirac delta-function δ(x−x 0 ).This potential is localized at the one point x 0 , and the corresponding Schrödinger equation is easy to solve.For two coupled channels, it is equally desirable to have the potential localized at one point, say at x = 0.However, it is not allowed to take V (x) to be the product of δ(x) and a constant 2 × 2 matrix because the diagonalization of this constant matrix decouples the channels.
What is needed is therefore another one-dimensional potential that is localized at one point.With a linear combination of δ(x) and this new potential, the 2 × 2 matrix V (x) can be easily chosen such that it cannot be diagonalized and, hence, the two channels do not decouple.
There are many practical applications of the two-channel scattering problem in one dimension.In this paper, let us restrict ourselves to one such application that is of current interest.The coupled channels can be used as a model for quantum memory; a natural approach to resetting, reading, and writing on a quantum memory is to use scattering from such a quantum memory.
In trying to find this second potential that is localized at one point, it is not necessary to study the coupled Schrödinger equations (1.1); it is sufficient to return to the simpler case of the one-channel Schrödinger equation A natural first guess for this second potential is the derivative of the Dirac deltafunction, i.e., δ ′ (x).However, the presence of this δ ′ (x) term in Eq. (1.3) implies that the wave function must be discontinuous at this point x = 0.But the product of δ ′ (x) and a function discontinuous at x = 0 is not well defined.Furthermore, even if this δ ′ (x) potential is well defined, it is not suitable for the first application to quantum memory.The reason for this will be discussed later in this paper.
A more powerful method is needed to find this desired potential.It is useful here to recall the concept of the Fermi pseudo-potential in three dimensions, which can be written in the form as given by Blatt and Weisskopf [2].The most far-reaching application of this Fermi pseudo-potential is to the study of many-body systems, as initiated by Huang and Yang [3].For the ground-state energy per particle of a Bose system of hard spheres, the low-density expansion is known to be 4πaρ 1 + 128 15 √ π (ρa 3 ) 1/2 + 8 4π 3 − √ 3 ρa 3 ln(ρa 3 ) + O(ρa 3 ) . (1.5) In this expansion, the second term was first obtained by Lee and Yang [4] using the method of binary collision, but the derivation by Lee, Huang and Yang [5] using the Fermi pseudo-potential is somewhat simpler; the third term, which involves the logarithm, was first obtained by using the Fermi pseudo-potential [6].In the derivation of the third term, it was found inconvenient to use the form (1.4), and thus a limiting process was reintroduced.This point will be of importance in this paper.Thus a great deal is known about the Fermi pseudo-potential in three dimensions.
It is a second motivation for this paper to develop the Fermi pseudo-potential for one-dimensional scattering.In many cases, once a theory has been developed for three dimensions, it is straightforward to repeat the development for one dimension.In the present case of the Fermi pseudo-potential, this is not the case.Furthermore, the result for one dimension seems qualitatively different from that for three dimensions.
For clarity of presentation, this paper is organized into two parts: Part A for the theory of the Fermi pseudo-potential in one dimension, and Part B for its application to quantum computing.Needless to say, these two parts are closely related to each other.The sections are numbered consecutively throughout the paper.
II. INTERACTION AT ONE POINT
In the absence of V , the Hamiltonian of Eq. (1.3) is for real x, where the right-hand side is suitably interpreted so that it is self-adjoint.
Let k be purely imaginary; define a real, positive κ by For such a k, the Green's function, or resolvent, for this H 0 satisfies the differential equation and is given explicitly by Let a potential V be added to this H 0 to give which is also self-adjoint.Again for κ positive, the Green's function, or resolvent, for this H satisfies, similar to Eq. (2.3), The interaction V is said to be at the one point x 0 if Eq. (2.6) implies that Eq. (2.3), with R (0) κ (x, x ′ ) replaced by R κ (x, x ′ ), is satisfied for all x except x = x 0 .Because of translational symmetry, this x 0 is chosen to be 0 throughout this paper.It is a consequence of Eq. (2.4) and the symmetry of the Green's function that this definition of an interaction at one point implies where sg x and sg x ′ mean the sign of x and the sign of x ′ , respectively.Since Eq. (2.7) is the starting point for the present paper, this is the appropriate place to add the following comments.
(1) In the present case of one dimension, the real line with the point x = 0 removed is not connected.This is a qualitative difference between one dimension and higher dimensions.
(2) It is because of this property that the f in Eq. (2.7) can depend on the signs of x and x ′ .
(3) In the three-dimensional case, the Fermi pseudo-potential can be obtained in the following way: Take the self-adjoint Hamiltonian −∇ 2 , where ∇ 2 is the threedimensional Laplacian, and restrict it to functions that are zero at r = 0; the selfadjoint extensions [7] of this restricted operator give the Fermi pseudo-potential, i.e., these self-adjoint extensions can be written as the sum of −∇ 2 and (1.4) multiplied by a constant.Such a procedure applied to the case of one dimension does not give Eq.(2.7).
It is useful to write out explicitly the f of Eq. (2.7) as: following the four quadrants in the x − x ′ plane.Note that all of these f 's are dimensionless.
III. RESOLVENT EQUATION
In view of Eq. (2.7), it is most convenient to study the resolvent equation in coordinate representation: where κ 1 and κ 2 are two values of κ.
The substitution of Eq. (2.7) into this resolvent equation (3.1) gives, after a lengthy calculation, for κ 1 = κ 2 .This is the resolvent equation for the interaction at the point x = 0 as defined in Sec.II.Equation (3.2) has the following symmetry properties besides space reflection.
(1) Since f (κ; sg x, sg x ′ ) is dimensionless, there is no scale for κ.Thus, Eq. (3.2) is invariant under the scale change Note that λ is positive since the κ's are positive.
(2) There is an additional symmetry This discrete symmetry is going to play an important role in this paper.In terms of the f j (κ) defined in Eq. (2.8), this symmetry is The next task is to solve the resolvent equation (3.2) for f (κ; sg x, sg x ′ ).Since differential equations are easier to deal with than difference equations, it is convenient to take the limit κ 1 → κ 2 .In this limit, Eq. (3.2) reduces to In terms of the f j (κ) of Eq. (2.8), this differential equation (3.6) consists of the following four equations by taking various signs for x and x ′ : An examination of these four differential equations shows the important role played by the combination f 1 (κ) + f 3 (κ), which appears twice in Eq. (3.7b) and twice in Eq. (3.7d).Define F (κ) up to an additive constant by In terms of this F (κ), Eqs.(3.7b) and (3.7d) take the form Integration of Eqs.(3.9) gives f 2 (κ) and f 4 (κ) in terms of F (κ): where c 2 and c 4 are two arbitrary constants of integration.Similarly, subtracting Eq. (3.7c) from Eq. (3.7a) gives or where c 3 is another arbitrary constant of integration.It remains to determine F (κ), which satisfies the second-order ordinary differential equation obtained from adding Eqs.(3.7a) and (3.7c): The solution of this equation is straightforward but somewhat lengthy and is thus relegated to Appendix A.
The results are as follows: ) when Similarly, when This case, which was in fact the first case worked out and also the most important one as discussed in Sec.IV, can be recovered by taking the limit c 2 3 + c 2 c 4 − c 2 1 → 0 together with either c 0 → 0 or c 0 → ∞.These two limiting cases are to be considered separately.For definiteness, they are applied to Eqs. (3.16).
IV. INTERACTION POTENTIALS
Naively, one would expect it to be straightforward to determine the potential when the Green's function (resolvent) is known.It does not turn out to be so straightforward, and this section is devoted to solving this problem.
The substitution of Eqs.(2.1) and (2.5) into Eq.(2.6) gives or with the last term defined by Eq. (2.3) or Eq.(2.4).This should determine V (x); that this V (x) does not depend on κ is a consequence of R κ (x, x ′ ) satisfying the resolvent equation.
More generally, the left-hand side of Eq. (4.2) may be an integral, and this equation takes the form Since Eq. (4.2) is simpler than Eq. ( 4.3), it is useful to study Eq.(4.2) first even though it is less general.The substitution of Eq. (2.7) into Eq.(4.2) gives The difficulty is to give a proper interpretation to this equation.The right-hand side contains a term obtained by applying the differential operator to the exponential.As seen from Eqs.
(3.14) for example, this expression (4.6) is in general the product of δ(x) and a function discontinuous at x = 0.The only reasonable interpretation of such a product is As mentioned above, there are effectively four parameters in the solutions as given by either Eqs.(3.14) or Eqs.(3.16).Since the symmetry of the Green's function implies that c 2 = c 4 , the number of parameters is reduced by 1.Therefore, the potential V (x, x ′ ) depends on three parameters.That there are three parameters instead of two is a major surprise, and this fact is to play a central role in Part B of this paper, where this interaction potential is applied to study certain aspects of quantum computing.
The three pieces of V (x, x ′ ) are of different levels of complication.They are studied in the following three subsections.
A. δ(x) δ(x) δ(x) potential
The simplest piece is the well-known δ(x) potential, where and or equivalently For this potential, the differential equation (2.6) is well defined.Its solution is Comparison with Eq. (2.7) gives independent of the signs of x and x ′ .By Eq. (2.8), this is This is a special case of Eqs.(3.21) with It is instructive to recover Eq. (4.8) from Eq. (4.12) by using Eq.(4.5).Since f (κ; sg x, sg x ′ ) does not depend on the signs of x and x ′ , Eq. (4.5) gives As already discussed in Sec.I, the potential δ ′ (x) is not acceptable because, in the Schrödinger equation, there is a product of δ ′ (x) and a function discontinuous at x = 0.While δ(x) is a potential, δ ′ (x) has to be understood as a Fermi pseudopotential in much the same way as the expression (1.4) in three dimensions.Since δ ′ (x) is odd in x, the resolvent must satisfy Next, consider the formal Eq.(2.6) with this δ ′ (x) potential Since every term on the left-hand side is of dimension x −2 times that of R κ (x, x ′ ), the resolvent R κ (x, x ′ ) must be of the form κ −1 function of κx and κx ′ .
A comparison with Eq. (2.7) then shows that f (κ; sg x, sg x ′ ) is independent of κ.This is satisfied with With the resolvent known, it is now possible to define the Fermi pseudo-potential Omitting the argument κ in f , Eq. (4.5) takes the form This expression has not only a δ ′ (x) term, but also a δ(x) term.For the left-hand side of Eq. ( 4.21), it is necessary to evaluate, using Eq.(4.7), Suppose the δ ′ p (x) on the left-hand side of Eq. ( 4.21) is replaced by δ ′ (x).Then a comparison of Eq. (4.22) with Eqs.(4.23) and (4.24) gives where the identity xδ ′ (x) = −δ(x) has been used.These are the conditions for x ′ > 0; similar conditions for x ′ < 0 are Solving Eqs.(4.25) and (4.26) gives Where is the difficulty explained in Sec.I? Another way of asking the same question is: How does δ ′ p (x) differ from δ ′ (x)?The answer is to be found in the first step of Eq. (4.24).In differentiating the quantity on the left-hand side of Eq. (4.24), the factor f (sg x, +) is not differentiated.In other words, the term with (d/dx)f (sg x, +) has been omitted; if this term were not omitted, there would be a δ(x), precisely the difficulty explained in Sec.I.
The situation is therefore entirely similar to the Fermi pseudo-potential in three dimensions, where the operator (1.4) performs the function of removing a term proportional to 1/r.Here, what δ ′ p (x) does is where This removes the discontinuity of g(x) at x = 0, which is precisely what is needed.
C. Third potential
At the beginning of this investigation it was thought that, in one dimension, there was one potential (subsection IV A) and one pseudo-potential (subsection IV B).But the detailed analysis of the resolvent equation in Sec.III shows that there are three independent parameters in the resolvent, and hence there is an independent third potential, or a second pseudo-potential in one dimension.
This third potential is most easily understood through the discrete symmetry (3.4).Let c = 2/g 1 .Then, from Eq. (4.13), the resolvent for the δ(x) potential is given by Application of the discrete symmetry (3.4) to Eq. (4.30) gives the result that the resolvent for the third potential is expressed by It remains to determine the potential, or more precisely the pseudo-potential, from Eq. (4.31), which can be written more succinctly as Therefore, for the present case of the third potential V 3 (x, x ′ ), Eq. (4.3) takes the form The task is to make sense of this equation and to determine V 3 (x, x ′ ).That the resolvent equation is satisfied means that Eq. (4.35), properly understood, does lead to a V 3 (x, x ′ ).By Eqs.(4.28) and (4.29), the δ ′ (x) on the right-hand side of Eq. (4.35) can be replaced by δ ′ p (x), because it is not multiplied by a discontinuous function of x.Therefore, V 3 (x, x ′ ) is expected to be proportional to δ ′ p (x); that δ ′ p (x) is used instead of δ ′ (x) is due to the development in subsection IV B. With these considerations, an examination of Eq. (4.35) indicates that See also Eq. (4.20).
It remains to substitute Eq. ( 4.36) into Eq.( 4.35) to find the relation between the two constants g 3 and c: The evaluation of the first integral is straightforward because e −κ|x ′′ −x ′ | is continuous: After the removal of the common factor Eq. (4.37) reduces to This integral can be evaluated using Eqs.(4.28) and (4.29): Therefore, Eq. (4.39) is simply This is the desired relation.
It is merely a matter of terminology whether this pseudo-potential V 3 (x, x ′ ) as given by Eq. (4.36) is called a local potential or not.In summary, the three potentials V 1 , V 2 , and V 3 are given by Eqs.(4.8), (4.20), and (4.36).Thus the most general Fermi pseudo-potential for the interaction at one point in one dimension is From the above experience of working with δ ′ p (x) and the fact that the product of δ ′ (x) and a function discontinuous at x = 0 is not meaningful, from here on the convention will be adopted that δ ′ (x) always means δ ′ p (x).With this convention, Eq. (4.42) is written as (4.43) Equation ( 4.43) can be rewritten in a prettier form as follows.Since where use has been made of the identity δ ′ (x)x = −δ(x), a general Fermi pseudopotential (4.43) can be written as
.45)
As already mentioned, the first and last terms are even while the middle term is odd.That is, under space inversion the coupling constants transform as
SOLVING THE SCHR ÖDINGER EQUATION
In applying the Fermi pseudo-potential to various problems, such as the one to be treated in Part B of this paper, the resolvent equation is difficult to use and it is much more convenient to employ the prescription of Sec.IV to the Schrödinger equation.
This section is devoted to studying the equation where V (x, x ′ ) is the Fermi pseudo-potential as given by Eq. (4.42).On the one hand, this is an equation for this V (x, x ′ ).On the other hand, the procedure of this section is directly applicable to the Schrödinger equation, which differs from Eq. (5.1)only in the absence of the δ(x − x ′ ) term on the right-hand side.This section serves two distinct purposes.First, the parameters in the known resolvent of Sec.III, especially Eqs.(3.14) and (3.16), are to be related to the coupling constants g 1 , g 2 , and g 3 in Eq. (4.42).This will give an explicit verification of consistency of the prescriptions given in Sec.IV.Secondly, the procedure to be followed here serves as a useful introduction to the slightly more complicated problem of the next section, where a two-channel scattering by a Fermi pseudo-potential is taken to be a model for a quantum memory.
The solution R κ (x, x ′ ) is given, as in the general case, by Eq. (2.7).The substitution of Eq. (2.7) into Eq.(5.1) gives (5.2) Since the first term has been evaluated by Eq. (4.22), Eq. ( 5.2) can be written alternatively as Using the knowledge gained from Sec. IV, a fairly lengthy calculation gives more explicitly where Eq. (4.45) has been used.In Eq. (5.4), all dependences on x ′ cancel out.It therefore only remains to identify the coefficients of δ(x) and δ ′ (x).The results are (5.5a) (5.5b) (5.5c) (5.5d) Solving Eqs.(5.5) gives (5.6a) ) where (5.7) Equations (5.6) are to be compared with Eqs.(3.14) and (3.16).First, this gives a deeper understanding why there are the two distinct cases (3.14) and (3.16).Equations (3.14) correspond to the situation where g 1 and g 3 have the same sign, while Eqs.(3.16) correspond to g 1 and g 3 having opposite signs.Secondly, in both cases, it is seen immediately from Eqs. (5.6) that c 2 = c 4 , a fact that has been used before.The results of expressing the five c's in terms of the three g's are the following: (5.8) ) ) (5.9c) Here use has been made of the scale invariance (3.18).[Strictly speaking, the righthand sides of the three Eqs.(5.9a-c) should all be multiplied by the factor sg g 3 .This factor has been omitted because it has no consequences.]For completeness and also for later use, let the scattering matrix be written down.This involves returning to the more familiar variable k through Eq. (2.2) and then letting x ′ → ±∞.After analytic continuation to positive values of k, the S-matrix is a 2 × 2 matrix where + denotes propagation in the +x direction, and − in the −x direction.For any interaction at the point x = 0, it follows from Eqs. (2.7) and (2.8) that (5.11) Equations (5.6) and (5.7) then give explicitly which is unitary.An interesting special case is that with g 2 = 0; in this case, the pseudo-potential is even and there is left-right symmetry.Explicitly, in this case g 2 = 0, the S-matrix is . (5.13)This special case g 2 = 0, generalized to the case of coupled channels, is going to play a central role in Part B of this paper.
This completes the present discussion of the theory of Fermi pseudo-potential in one dimension.Attention is now turned to the first application of this theory.
VI. MODEL FOR QUANTUM MEMORY
There are many possible applications of the Fermi pseudo-potential in one dimension.As an example, it is intriguing to ask under what conditions, if any, Bethe's hypothesis [8,9] still holds when the delta-function potential is replaced by the V (x, x ′ ) of Eq. (4.42).As a first application, however, it is more desirable to begin with a case where the Fermi pseudo-potential is used in a relatively simple situation of current interest.
For decades, computer components have become smaller and smaller, and this trend is expected to continue [10].When some of the components become sufficiently small, as to be expected in the not-too-distant future, they need to be described in general by quantum mechanics.No matter how quantum computing is to develop in the future, one important component is necessarily the quantum memory, sometimes called the quantum register.The main function of any quantum memory is to store a quantum state.
In order for a quantum memory to be useful, it must be possible to alter the quantum state in the memory in a controlled way.This can only be accomplished by sending a signal from outside of the memory.In other words, the quantum state in the memory is to be controlled by a scattering process [11].
It is the purpose of Sec.VI to propose a simple model for quantum memory.First, in order to have scattering processes, at least one space dimension is necessary.Otherwise there is no possibility of interference between the incident wave and the scattered wave.As perhaps to be expected, this interference is of central importance.Since the state in the memory must include at least two independent quantum states, it is simplest to describe the quantum memory using the coupled Schrödinger equations for two channels.This is essentially Eq. (1.1) in the Introduction.
It remains to make the simplest choice for the 2 × 2 matrix potential V (x) of Eq. (1.2).This simplest choice, the Fermi pseudo-potential in one dimension, has been investigated systematically in Part A of this paper, the general result being given by Eq. (4.42).
The symmetry properties of this V (x, x ′ ) under space inversion have been discussed at the end of Sec.IV.In particular, it is symmetrical if g 2 = 0.In order for the model to be suitable for quantum memory, it is essential to concentrate on this special case.The reason is that, only in this case, do the even wave cos kx and the odd wave sin kx not mix.This is also the basis for the comment in the Introduction, after Eq. (1.3), why the δ ′ (x) potential is not suitable for quantum memory.That this absence of mixing is important is discussed further in Sec.VII for a more general setting.
With this understanding and choice, the present model for the quantum memory is described by the one-dimensional coupled Schrödinger equations with the 2 × 2 matrix potential A more elegant way to write this potential is where the σ's are the Pauli matrices.When g 2 = 0, the potential g 1 δ(x)δ(x ′ ) does not act on the odd wave, and similarly the potential g 3 δ ′ (x)δ ′ (x ′ ) does not act on the even wave.The first part of this claim is easy to obtain, and the second part follows from the definition (4.28) of δ ′ p (x).Alternatively, they can be seen from Eq. (5.13), where S ++ = S −− and S +− = S −+ .For the even wave, the scattering phase shift is given by independent of g 3 , while, for the odd wave, it is independent of g 1 .Therefore, for the present case of two coupled channels as described by Eqs.(6.1) and (6.2), the S-matrix for the even and odd cases can be expressed in terms of these quantities as follows.Consider first the case for the odd wave; since the g 1 term does not contribute and can be neglected, the V (x, x ′ ) of Eq. ( 6.2) effectively reduces to which is diagonal, meaning that ψ 1 (x) and ψ 2 (x) do not couple.Since the behaviors of the two channels differ only in the sign of g 3 , the S-matrix for this odd case is given by Eq. (6.5), or more explicitly It is instructive to rewrite this expression in terms of σ 3 : For the even wave, it is merely necessary to replace the right-hand side of Eq. (6.5) by that of Eq. (6.4), and also σ 3 by σ 1 .Therefore Eq. (6.8) leads to When neither g 1 nor g 3 is zero, any given element S of SU( 2) can be expressed as a finite product of S + (k) and S − (k), i.e., where each S(k i ) is suitably chosen as S + (k i ) or S − (k i ).
In the language of scattering theory, the meaning of S + (k) is as follows.[The meaning of S − (k) is similar.]The "in" field is while the "out" field is In other words, before writing on a quantum memory, it is first reset so that Eq. (6.15) is satisfied.The standard state can be chosen to be any quantum state; however, once chosen, the choice is rarely altered.Since scattering from a quantum memory leads to a unitary transformation of the quantum state in the memory, resetting cannot be accomplished without first finding out the content of the quantum memory.In other words, the first step of "resetting" is "reading."After the content of the quantum memory is known, say a in In summary, if "reading" can be accomplished, then so can "resetting"; if "resetting" can be accomplished, so can "writing." The main task here is therefore to discuss, within the present model, the operation of reading a quantum memory.More precisely, what is involved is the following.When the quantum state in a memory, , is not known, find a suitably chosen sequence of incident waves cos k i x or sin k i x such that the knowledge about the field can be used to determine the values of a in 1 and a in 2 .After this determination, the quantum memory is returned to the initial state [The last step is similar to the classical case in which a core memory is read from an initial state and then returned to it.] Let the quantum state in the memory be a 1 a 2 ; the problem is to determine the values of a 1 and a 2 by scattering from this state.Suppose an odd wave is used for the first scattering; then the two-component wave function for x > 0 is given explicitly by a 2 e −iφ − e ikx , (6.17) where, by Eq. (6.8), In particular, Therefore, this scattering process gives, through the interference term, the quantity Similarly, if the quantum state in the memory is first returned to the original state by a suitable scattering, then a second scattering with an even wave gives, again through the interference term, the second quantity These two quantities, A 1 and A 2 , are sufficient to determine the values of the complex numbers a 1 and a 2 , except for a common phase.In order to determine this common phase, it is simplest to use the known standard state s.For example, a further interference with this standard state using, say, the odd wave gives a third quantity Since s 1 and s 2 are known, these three quantities A 1 , A 2 and A 3 determine a 1 and a 2 .Returning once more to the original quantum state presents no problem.This completes the description of the present model of the quantum memory, including the operations of writing, reading, and resetting.
The advantages of this model, based on the Fermi pseudo-potential in one dimension, are its simplicity and its being completely explicit.On the one hand, such an explicit model plays an essential role in the initial understanding of some aspects of a new problem.On the other hand, the usefulness of such a model really lies in the possibility of opening a line of inquiry into these aspects.This is to be discussed in some detail in the next section.That is, in Sec.VII an attempt is to be made to present a general picture concerning the quantum memory, emphasizing the operations of writing, reading, and resetting, all accomplished by repeated scattering.
Some simplifying assumptions introduced in the model of this section are clearly not needed in the general setting of the next section.An example is the choice of using the Fermi pseudo-potential V (x, x ′ ) of Eq. (6.2); another one is the use of the Schrödinger equation (6.1) in one dimension.Thus the generalization to the Schrödinger equation in three dimensions with a more general potential is immediate but the results are less explicit.The further generalization to renormalized quantum field theory also does not present any obstacle.
What is less clear, and most important, is the role played by the condition g 2 = 0, used throughout this section.This condition is closely related to, and makes it possible to use, the even waves and the odd waves.In order to appreciate this point, take instead the incoming wave as, say, from the direction of the −x axis, i.e., This is a superposition of an even wave and an odd wave.Since the Schrödinger equation is linear, the even part is operated on by the S + (k) of Eq. (6.10), and the odd part by the S − (k) of Eq. (6.7).Since g 1 and g 3 are not zero and thus these S + (k) and S − (k) are not equal, the quantum state in the memory for an outgoing wave in the +x direction is different from that for an outgoing wave in the −x direction.In other words, in order to determine the quantum state in the memory after scattering with the Ψ in of Eq. ( 6.23), it is necessary to detect the direction of the outgoing wave.
In order for a quantum memory to behave as a memory, i.e., as the storage for a quantum state, it is essential that what is in the memory does not depend on the behavior of the scattered wave.Indeed, from the point of view of scattering theory, this characterizes quantum memories.Therefore, for the present model with the Fermi pseudo-potential, some incident waves, such as the even wave and the odd wave, are acceptable or "admissible," while many others, such as e −ikx of Eq. (6.23), are not "admissible." This concept of admissible incident waves is central, not only for the present model but also in general.This is the first topic to be discussed in the next section.
VII. GENERALIZATION
It is the purpose of this section to give a general description of quantum memories.This is to be accomplished by extracting the dominant features from the model of Sec.VI on the basis of the Fermi pseudo-potential in one dimension.
In order to extract the dominant features, consider first the following two generalizations, the first one obvious and the second one less so.
First, that the potential is the Fermi pseudo-potential is not necessary.In other words, the matrix potential V (x, x ′ ) of Eq. (6.2) can take a fairly general form.That g 2 is zero translates into the condition that this V (x, x ′ ) is symmetrical, i.e., Secondly, that the model is one-dimensional is not essential.For example, the model can be a two-channel scattering in three-dimensional space.In this case, the V (x, x ′ ) is replaced by another 2 × 2 matrix potential V (r, r ′ ), while the symmetry of the V (x, x ′ ) becomes the condition that this V (r, r ′ ) is rotationally symmetrical.
While this rotational symmetry is probably not necessary, this symmetry does play an important role.In the one-dimensional case studied in detail in Sec.VI, the symmetry of the V (x, x ′ ), coming from g 2 = 0, makes it possible to use the even wave and the odd wave.Similarly, in three dimensions, the rotational symmetry of V (r, r ′ ) makes it possible to use partial waves: the various partial waves do not couple so that each partial "in" wave leads to only the corresponding partial "out" wave.
Consider now the more general setting.Let the quantum memory be in a pure state j a j |j , where |j is a complete set of linearly independent states for the memory.The standard state s is a particular linear combination of these |j .Let ψ denote the wave function sent in from the outside to interact with the quantum memory; then the "in" field for the scattering process on the memory is An example of Ψ in is given by Eq. (6.23).It should be emphasized that ψ in is at our disposal to accomplish whatever the purpose of this scattering is.
It is the fundamental characteristic of the scattering from a quantum memory that not only is the "in" field Ψ in of the form of Eq. (7.1), but also the "out" field is of a similar form, For the special model of Sec.VI, this important point has been discussed near the end of that section.For the present generalization, it is worked out in detail in Appendix B. As already seen in Sec.VI, Eq. ( 7.2) puts strong conditions on ψ in .More precisely, a ψ in is defined to be admissible if, for all j a in j |j , the corresponding Ψ out is a tensor product as given by Eq. (7.2).
It should be added parenthetically that this definition of being admissible can be easily generalized by restricting the j a in j |j to certain subsets.This generalization is expected to be useful in future investigations, but is not needed for this paper.
In order to perform the operations of writing, reading, and resetting a quantum memory, it is necessary to have a sufficiently large collection of admissible ψ in .This has been verified to be the case for the model of Sec.VI, and will be assumed to be so in this section.Let denote a ψ in with the property that, if Eq. (7.1) holds for this ψ in , then Eq. (7.2) holds.It is assumed that, given any a in j and a out j , there is at least one such ψ in .It is possible that there is more than one such ψ in .It has been seen from the model of Sec.VI that this ψ in may actually involve a sequence of ψ in 's; see especially Eq. (6.11).However, for simplicity of notation, the expression (7.3) will be retained.
The operations of writing, reading, and resetting are now to be described in this order.For the purpose of writing after the quantum memory has been reset to the standard state s, it is sufficient to use any one of the ψ in s → j a j |j , where j a j |j is the desired quantum state to be put in the memory.Reading from a quantum memory is more complicated.Let a quantum memory be in a state j a in j |j ; it is desired to determine the values of these a in j by interrogating this memory, i.e., by sending a suitably chosen sequence of admissible ψ in 's, ψ in (1) , ψ in (2) , ψ in (3) and scattering them successively by this quantum memory.More precisely, consider the successive scattering processes Corresponding to the list (7.4), there is a list of ψ out 's, From the quantities given in (7.4) and (7.6) together with their interference, the values of a in j are obtained.This has been demonstrated explicitly in Sec.VI for the model there, and it is also shown there that a further interference with the standard state s may be needed to determine the overall phase.The importance of interference cannot be over-emphasized.
Once the a in j are known, the values of a out(N ) j of the process (7.5) can be obtained.An additional scattering using any one of the admissible ψ in j a out(N ) j |j → j a in j |j returns the quantum memory to its initial state.With the above process of reading a quantum memory, resetting is now straightforward.Resetting a quantum memory in the initial state j a in j |j to the standard state s consists of the following two steps.
(i) Read the memory to determine a in j .Note that, after the process of reading is performed, the memory is in the original initial state j a in j |j .(ii) Apply an additional scattering using any one of the admissible ψ in j a in j |j → s .
This completes the description of the quantum memory together with writing, reading, and resetting, all performed through scattering from the memory.
It may be worthwhile to emphasize that the concept of the quantum memory introduced and described here is quite general.In particular, the scattering process does not have many restrictions, and may or may not be linear.Also, the linearly independent states |j are allowed to depend on time, and may or may not be the eigenstates of an operator.
VIII. COMPARISON WITH AN EARLIER MODEL
The idea of quantum computing was first discussed by Benioff in 1980 [12].In this pioneering paper, spatial dependence was retained, although not quite in the form of the Schrödinger equation.Since then, quantum computing and quantum information have become popular subjects with a vast literature [13].However, in the majority of the theoretical papers on quantum computing, spatial dependence is omitted entirely.Therefore the usual model for quantum memory consists of a spin system or its generalization, and the operations on the quantum memory consist of applying unitary matrices.This prevailing model for the quantum memory has led to a number of important results.
In the present paper, as a first application of the Fermi pseudo-potential in one dimension, an alternative model for the quantum memory is proposed.This model differs from the previous one mainly in the re-introduction of the spatial variables, much in the spirit of the original work of Benioff [12].From the point of view of physics, the spatial variables are clearly present, whether one wants them or not.Instead of saying that a unitary matrix is applied mathematically to the content of the quantum memory, here the content of the quantum memory is altered in a controlled way by applying suitably chosen scatterings to the memory.This is much more than a change of language.While the previous model has the advantage of simplicity, which is important because quantum computing is a difficult subject, the present model with the spatial variable or variables may be considered to be desirable from the following two points of view.First, it offers a closer description of the experimental situation.Since a quantum memory is necessarily small in size, for practical reasons scattering is the simplest means of modifying the content of a quantum memory.Secondly, the presence of the spatial dimensions allows more possibilities of analyzing the quantum memory.It is also worth mentioning that the theory of scattering has been developed over many decades and is well understood, in the context of both quantum mechanics and quantum field theory.It is often advantageous to be able to make use of existing knowledge to study a new subject.
In both the previous model and the present model, the content of a quantum memory is given as a pure state.This content is altered by applying a unitary transformation, directly in the previous model and indirectly through scattering in the present model.The incident, scattered, and total wave functions have no analog in the previous model.In general, the phase shift [2] of scattering is determined from the total wave function, and the analysis of the explicit model in Sec.VI is actually an especially simple application of the usual phase-shift analysis, including the prominent role played by interference.The point is that, while in the definition of an admissible ψ in in Sec.VII both Ψ in [Eq.(7.1)] and Ψ out [Eq. (7.2)] are unentangled so far as the memory and the interrogating wave are concerned, this is not true of the total wave function, which is for example for the model of Sec.VI.There are many interesting open questions for the present model.The analysis of these questions is beyond the scope of the present paper.Nevertheless, here are two examples of such open questions.(a) In Sec.VII, it is explicitly assumed that there is a sufficiently large class of admissible ψ in of the form (7.3).In the model of Sec.VI, such a large class indeed exists in the form of even waves and odd waves.On the other hand, when g 2 = 0, no such large class exists.What is needed is a more general discussion as to the conditions under which such a sufficiently large class of admissible ψ in is actually available.
Even though examples where such a large class is available are known both in one dimension and in three dimensions, the three-dimensional case seems rather difficult to achieve experimentally.If this observation is true in general, then there may well be significant advantages to connecting the various components of a quantum computer, including quantum memories, by single-mode optical fibers.In particular, sending signals through space rather than fibers may lead to unexpected problems.
(b) Another especially challenging and interesting question for the present model of quantum memory concerns the issue of the so-called no-cloning theorem.This has been derived in the context of the previous model, but such derivations do not seem to be applicable directly to the present model.This is again related to the fact that here there is not only an S-matrix but also the incident, scattered, and total wave functions.
Preliminary analysis indicates that whether the no-cloning theorem holds for the present model of quantum memory may depend on subtle aspects of the Schrödinger equation.If this is indeed the case, then the no-cloning theorem may need to be stated properly and precisely before it can be derived within the present model of quantum memory.
IX. DISCUSSIONS
The present investigation began as an attempt to understand the δ ′ (x) potential in the context of the one-dimensional Schrödinger equation.When simple attempts failed, the powerful method of the resolvent equation was used.The surprise is that, not only can the resolvent equation be solved in general in terms of rational functions, but also the solution yields, in addition to the well-known δ-function potential, not one but two linearly independent Fermi pseudo-potentials in one dimension.One of the pseudo-potentials is odd under space reflection and is the proper interpretation of the δ ′ (x) potential.The other one is originally unexpected and is even under space reflection.
It is likely that there are many applications of these pseudo-potentials to onedimensional problems.A possible use in statistical mechanics connected with the Bethe ansatz [8] has already been mentioned in Sec.VI.In this paper, only the simplest application is discussed.This has nothing to do with the proper interpretation of the δ ′ (x) potential, but depends critically on the unexpected, even pseudo-potential.By combining this even pseudo-potential with the δ-function potential, an elegant special case is found for the scattering in two coupled channels.Even though the two channels cannot be decoupled, it is easy to write down the complete solution from the known one-channel case.
In spite of the mathematical simplicity of this application of the Fermi pseudopotential in one dimension, this example gives a model for the quantum memory (sometimes called the quantum register).While this model is completely explicit, its more important function is to point out a way to gain a general picture concerning the quantum memory.
More generally, the time-independent Schrödinger equation for n coupled channels with interaction at only the one point x = 0 is with ψ 2 (x) . . .
Here C 1 , C 2 and C 3 are three numerical hermitian n × n matrices, while δ ′ p (x) is similar to δ ′ (x) and is defined in Sec.IV.For a given k and a given incident wave ψ 0 (x) with n components, the solution of Eq. (9.1) takes the form, for j = 1, 2, . . ., n, ψ j (x) = ψ 0j (x) + F j+ e ikx , for x > 0 F j− e −ikx , for x < 0, (9.4) analogous to Eq. (2.7), where the F 's are 2n coefficients that depend on ψ 0j and k.
The substitution of Eq. (9.4) into Eq.(9.1) shows that these 2n F 's satisfy 2n linear equations.Indeed, it is the power of the Fermi pseudo-potential that Schrödinger equations reduce to linear algebraic equations.It will be interesting to study the structure of these algebraic equations.Even more generally, the pseudo-potential (9.3) at x = 0 may be replaced by a linear superposition of a finite number of such pseudo-potentials at x = x 1 , x 2 , . . . .The number of coefficients in the solution increases but remains finite, leading to more simultaneous algebraic equations that are still linear.The Green's functions can be treated in a very similar manner.Needless to say, the range of integration in Eq. (9.1) for x ′ can be replaced by a semi-infinite or finite interval, and the V (x, x ′ ) may contain additional terms such as those from step potentials.A more interesting problem is to apply the Fermi pseudo-potentials to first-order differential equations.
In summary, the theory of the Fermi pseudo-potential in one dimension has been worked out here together with the simplest non-trivial application to a problem of current interest.
from the memory and the information about this outgoing wave is no longer available.This means that the final state of the memory is given by Ψ out averaged over this outgoing wave.This average can be written schematically as This M is the density matrix for the quantum memory.Here the indicates integration and summation over all degrees of freedom associated with the outgoing wave or particle, but not those of the quantum memory.The corresponding differential symbol is omitted: It is d 3 r if the wave is described by the three-dimensional Schrödinger equation; it is d 3 r together with a summation over the spin in the case of the Dirac equation; and it is a functional differential such as DA µ in the context of quantum field theory.
In order for the quantum memory to function, the final state must be a pure state j a out j |j , just like the initial state.Therefore, the above M must also be given by M = for all i and j.For i = j, define the integral This I ji is non-negative, and is zero only if But the substitution of Eq. (B5) into Eq.(B6) gives immediately that Thus, Eq. (B7) holds for all i and j.The desired Eq. (7.2) is just Eq. (B4) with Eq. (B7).
3 + c 2 c 4
.17) In Eqs.(3.14) and (3.16), c 0 > 0 and all square roots are also positive.Although there are five constants c 0 , c 1 , c 2 , c 3 and c 4 , effectively there are four because all the quantities do not change under c j → λc j (3.18) for j = 1, 2, 3, 4, and λ > 0. It remains to discuss briefly the case c 2 .23d) Note that Eqs.(3.21) and (3.23) are related by the discrete symmetry (3.5) provided that the sign of c 3 is reversed.The same results also follow from Eqs. (3.14). | 11,453.4 | 2002-08-21T00:00:00.000 | [
"Physics"
] |
Didactic Focus Areas in Science Education Research
: This study provides an overview of the didactic focus areas in educational research in biology, chemistry and physics, seeking to identify the focus areas that are investigated frequently and those that have been studied rarely or not at all. We applied the didactic focus-based categorization analysis method (DFCM), which is based on an extension of the didactic triangle. As the data set, we used 250 papers published in the Nordic Studies in Science Education (NorDiNa) between 2005 and 2013, and the European Science Education Research Association (ESERA) 2013 conference proceedings covering education at upper secondary and tertiary levels. The results show that the teacher’s pedagogical actions and the student–content relationship were the most frequently studied aspects. On the other hand, teachers’ reflections on the students’ perceptions and attitudes about goals and content, and teachers’ conceptions of the students’ actions towards achieving the goals were studied least. Irrespective of the publication forum, the distributions of foci to di ff erent categories were quite similar. Our historical analysis completes the recent studies in the field as it is based on the theory driven categorization system instead of the data driven approaches used by the previous researchers. Moreover, our further observations on more recent publications suggest that no significant changes have taken place, and therefore wider discussion about the scope and the coverage of the research in science education is needed.
Introduction
Holistic understanding about educational research is important for researchers and educational developers, as well as for anyone mentoring and tutoring pre-and post-graduate students. Gaining such knowledge calls for carrying out years of systematic research on the different fields of education. Due to the scope of such work, most researchers specialize in a narrow area of expertise in order to gain deep insight in that area. However, for the research community as a whole, achieving a versatile and holistic understanding about the field is a much more feasible goal. This may seem self-evident, but in this paper, we wish to raise the question about whether this actually holds.
A holistic understanding can be interpreted in several different ways. It can concern content, i.e., whether teaching and learning of various topical areas and key concepts in science are well covered in the research literature, e.g., [1]. It can concern some cross-cutting themes the education of a specific topic, e.g., incorporating sustainability [2], teaching argumentation [3] or scaffolding in the education of a science topic [4]. It can also concern how that specific topic, e.g. electricity, is taught at various levels of education, from pre-school education to universities and teacher training [1,5]. Moreover, it can concern various aspects of the instructional process, including relevant actors, their relations and carried out activities. Additional points of view can cover research in different countries, e.g., [6][7][8], and 1.
What are the didactical focus areas in science education research in our data pool? 2.
How does the research appear in different disciplines (biology, chemistry, physics)? 3.
Which scopes for data collection are used (course, organization, society, international)? 4.
How is the research distributed to different educational levels (International Standard Classification of Education, ISCED)? 5.
How does the teacher education context appear in the data pool? Educ. Sci. 2019, 9, 294 3 of 19 6. What research methodologies are used?
We see that our results could benefit the field in several ways. For individual researchers, our work can pinpoint areas where there is space for interesting relevant work with a research topic. Moreover, the applied analysis framework helps to build new research questions which focus on identified research gaps or little-studied areas. For instance, Kinnunen and others [10] used the method to categorize research papers on phenomena related to dropping out of an introductory programming course and found that all papers focus only on student-related issues (e.g., students' characteristics and what students do) and left other aspects such as teachers' role in teaching and learning process or curriculum planning unstudied. While our analysis in this paper does not focus on any sub-areas of physics, chemistry and biology, it is straightforward enough for it to be applied in some narrow areas to discuss gaps in current research and generate research ideas. On the other hand, the results could be used by educational decision-making bodies to identify aspects in the instructional process which merit further investigation in a wider scale and thus could be used for targeted research funding.
Findings from Earlier Studies
Below, we discuss relevant related work looking separately at various dimensions of how the SER literature has been analyzed.
Foci Areas
Tsai and Wen [6], Lee and others [7], and Lin and others [8] analyzed 2661 papers published in the years 1998-2012 in the International Journal of Science Education (IJSE), Science Education (SE), and Journal of Research in Science Teaching (JRST). They developed the following categorization scheme, hereafter called Tsai's and Wen's categorization method, which was later used by several other researchers.
Learning-Classroom Contexts and Learner Characteristics (Learning-Context), e.g., student motivation, background factors. Learning and laboratory environments, learning approaches, student-teacher and student-peer interactions, soft skills in learning; 5.
Goals and Policy, Curriculum, Evaluation, and Assessment; 6.
Cultural, Social and Gender Issues; 7.
Informal Learning.
The three most common topics in the period 2008-2012 were Teaching (19%), Learning-Conception (15%) and Learning-Context (37%). There were significant changes over the 15 year period. The Teaching aspect almost tripled from 7% to 19%, with the biggest increase happening in IJSE. In addition, the Learning-Context aspect doubled from 18% to 37%. The share of Learning-Conception, Goals, Policy and Curriculum, and Culture, Social, and Gender dropped from 50% to 60% to roughly 25% in all journals.
Tsai and others [5] continued the work with a slightly modified analysis. They analyzed 228 papers published in four major journals (IJSE/JRST/SE/Research in Science Education (RISE)) during 2000-2009 that reported work on Asian students (including Turkish). They used Tsai's and Wen's [6] categories; however, in this case, each paper could be included in multiple categories, whereas in the earlier works only the best-fit category for each paper was counted. Despite the difference, the studies of Teaching were equally common if compared with Tsai's and Wen's [6], Lee's and others' [7], and Lin's and others' [8] studies. Learning-Conception studies were much more frequent, and over half of the papers were included in this category; no decrease was visible during this period. Learning-Context was the most frequent category, with more than two-thirds of the papers being included in this category. There were no major differences between the four journals and no clear trends were visible during the 10 year period.
More recently, O'Toole and others [16] analyzed ten years (2005-2014) of abstracts from IJSE/JRST/SE/RISE and Studies in Higher Education. They used a different, but overlapping categorization method, identifying papers in five major categories: Scientific literacy (52%), Teaching methods (45%), Learning focus (42%), Teachers (39%), and Relation between Science and Education (16%). Comparing these results with the former works using Tsai's and Wen's categories [6] is difficult because there is a lot of overlap. For example, scientific literacy was the largest group of papers, but it included works which would also fit within the Learning Contexts category, as well as the Goals and Policy category in the other system. Further, the Teachers category included teacher training and professional development, counting half of the papers in this category. Finally, each abstract could be categorized within several categories, further making any numeric comparison difficult. An interesting finding was that there were substantial differences between the journals, as well as between the periods 2005-2009 and 2010-2014, which likely reflect differences and changes in journal editorial policies.
Another similar type but small-scale study was done by Cavas [17], also applying Tsai's & Wen's method [7], who analyzed 126 papers published in the Science Education International (SEI) journal between the years 2011 and 2015. Considering the research topics, the SEI journal has a different profile than IJSE/JRST/SE. The most common topics were Teacher education (23%), Learning-Conception (20%), followed by Teaching, Learning-Context, Goals, Policy, Curriculum and Culture, Social and Gender, each with 10-11% of the total.
All these studies clearly indicated that pedagogical activities and conceptions, as well as students' learning activities and results, have had a major role in research, with well over half of the papers falling into these areas, whereas other topics have much less emphasis. For example, while educational technology could be an important tool for supporting learning, only 3-6% of the papers considered it [7,9]. Another little-studied area is informal learning.
A very different approach was used by Chang and others [18]. They analyzed an overlapping data pool: papers published IJSE, JRST, SE and RISE between 1990 and 2007, also focusing on the research topics but using scientometric methods, which automatically clustered research papers based on the most common terms in the data. Since the clusters were quite different from Tsai's and Wen's [7] categories, no direct comparison with their results is possible. However, the largest clusters in this work were Conceptual change & concept mapping (40%), Nature of science & socio-scientific issues (14%), Professional development (11%), and Conceptual change & analogy (11%). The remaining clusters (Scientific concept, Instructional practice, Reasoning skill & problem solving, Design based & urban education, Attitude & gender) covered a small proportion of the papers each. During this period, a clear increase appeared in the Professional development and Nature of science & socio-scientific issue clusters, while the proportion of Scientific concept and Reasoning skill & problem solving clusters decreased to a fraction of their original size.
Discipline Characteristics and Trends
The previous surveys covered science education holistically. Two other major surveys have focused on chemistry and biology education (we did not find a similar survey for physics education).
Teo and others [19] studied chemistry education research papers using Tsai's and Wen's [6] categorization method. The data pool covered 650 papers published during from 2004 to 2013-of which, 446 were from two major journals in the field, Chemistry Education Research and Practice (CERP) and Journal of Chemical Education (JCE), and the other 204 papers from IJSE, JRST, SE, and RISE. The most popular topics were Teaching (20%), Learning-Conception (26%), Learning-Contexts (19%), whereas Goals, Policy, Curriculum, Evaluation and Assessment (16%), and Educational Technology (11%) were studied less. The rest of the categories were small, amounting to a few percent each. Trends emerging from their data were an increase in papers focusing on Teaching and Learning-Conception.
They observed no major differences in chemistry education research journals if compared with the SER journals.
Gul and Sozbilir [1] studied 1376 biology education papers published in eight journals, including IJSE/SE/JRST/SE and Journal of Biology Education (JBE), Journal of Science Education and Technology (JSET), Research in Science & Technological Education (RSTE) and Studies in Science Education (SSE) in the years 1997-2014. They used their own categorization scheme and found that the most common aspects in this area were Learning (21%) comprising learning results, learning styles and misconceptions, Teaching (19%) comprising effects on achievement, attitudes, scientific process skills, and comparing different methods, Studies on attitude/perception/self-efficacy (17%), followed by Computer-aided instruction (9%), Studies on teaching materials (8%), Nature of science (7%), and several small categories. They did not report any trends concerning these results.
Thus, here, teaching and learning also comprise a large proportion of the papers. Interestingly educational technology was more commonly researched in both chemistry education and biology education than in the previous analyses focusing on the science education field overall.
Scope of Data Collection
Earlier analyses of the literature present very little information about the scope of the data collection. Gul and Sozbilir [1] are one of the few that report sample sizes. They report that sample sizes were most common in the range of 11-30 (16%), 30-100 (23%), 101-300 (20%) or 301-1000 (11%) participants. Large-scale (N > 1000) and small-scale studies (N ≤ 10) were rare. Moreover, 17% of the papers analyzed did not report sample sizes. Similarly, there is little information about the organizational units from which the data were collected, e.g., from a single course, degree program, national or international survey. Teo and others [19] reported on the educational levels and countries where data had been collected, but there is no information about the organizational scope.
Educational Level
A few survey papers have reported information about the sample population in the target papers. To allow these results to be compared, we have presented them in terms of the ISCED reference framework, which is used to define the educational levels in different educational systems and disciplines. It is used by Eurostat, Unesco and OECD and it has been implemented into all EU data collections since 2014.
In their analysis of biology education research papers, Gul and Sozbilir [1] found that 20% of the study targets were at ISCED 1-2 levels (grades 1-8), 34% at ISCED 3 level (grades 9-12), and 23% at ISCED 6-7 levels (undergraduate). Further, educators were in focus in 18% of the studies, whereas at the postgraduate level (ISCED 8), parents, pre-school, admin people or others were rarely among the target population. Some papers reported more than one sample type. The study by Teo and others [19] of chemistry education research papers gives different results: ISCED 2 level (years 7-9, 14%), ISCED 3 levels (years 10-12/13, 25%), and ISCED 6-8 levels (54%), Preservice teachers were used in 8% of research papers and in-service teachers in 12%. Pre-school and elementary school samples were rare.
The study by Tsai and others [6] (of science education papers in IJSE/JRST/SE/RISE with Asian students) reported that ISCED 1 level was in focus in 19% of the studies, ISCED 2 level in 33%, ISCED 3 level in 41%, ISCED 6 level (colleges) in 14%, and ISCED 6-7 (pre-service teacher) in 8% of the studies. Again, in some papers more than one sample type was used. They did not find major differences between the journals. This is in line with O'Toole and others' (2018) work, which categorized the same journals, but all papers, within a somewhat overlapping period: 31% secondary (ISCED 2 and 3), 18% post-secondary (ISCED 6 and 7), 17% primary (ISCED 1) and 2% Early childhood (ISCED 0). Lin and others [9] reviewed papers focusing on scaffolding for science education and reported that over half of the studies focused on the upper secondary level (ISCED 3). On the other hand, Wu and others [20] reported in their study on intervention studies on technology assisted instruction that in half of the studies, the sample was from the tertiary level (ISCED 6-8), 21% from the primary level (ISCED 1) and 20% from lower and upper secondary levels (ISCED 2-3).
Here, it seems that the results are mixed. Some fields focus more on school-level education whereas some, such as chemistry, on higher education. Other target groups, such as teacher students or in-service teachers, are studied less frequently, or are not reported separately.
Research Methodology
Tsai and Wen [6], Lee and others [7], and Lin and others [8] analyzed the research methodologies that have been used. They identified five categories, including empirical work, theoretical work, reviews, position papers and others. Overall, some 90% of all papers were considered to be empirical research and the other categories covered the balance. There were no clear differences between the IJSE, JRST and SE journals in this aspect.
Gul and Sozbilir [1], in turn, found that 53% of biology education research papers used qualitative designs. A closer analysis revealed that more than half of these were descriptive papers and case studies; the rest of the papers were distributed among a wide variety of more advanced methods. Quantitative approaches were used in 43% of the papers, and less than one-third of these used an experimental method and the rest were non-experimental, e.g., descriptive, surveys, comparative or correlational. Only 4.2% of the papers used a mixed methods design. Lin and others [8] reported a similar finding that qualitative methods were the dominant analysis method in SER papers. This matches the findings of O'Toole and others [16].
Teo and others [19] noticed that more than half the chemistry education research papers (52%) used a mixed methods approach, while a pure qualitative approach was used in 22% of the papers and a pure quantitative approach in 26% of the papers.
In summary, it is clear that a large proportion of the papers in SER included empirical research and both quantitative and qualitative methods were widely used. Table 1 summarizes the previous analyses.
Study Target
The focus in this study is papers in biology, chemistry and physics education research papers at the upper secondary and tertiary levels (ISCED 3, 6-8) published in the NorDiNa journal from 2005 to 2013 and in the ESERA conference proceedings in the year 2013. These educational levels and disciplines were selected as they match the scientific background of the authors of this paper. The researchers are experienced tertiary level teacher educators, and they have expertise in biology, chemistry, physics, and computer sciences and subject pedagogics. Jarkko Lampiselkä is specialized in chemistry and physics didactics instruction and Arja Kaasinen is specialized in biology didactics instruction at the University of Helsinki. Both Päivi Kinnunen at the University of Helsinki, and Lauri Malmi at Aalto University, are specialized in computing education research and they are experts in the analysis methodology used.
The NorDiNa data pool comprises 138 papers from the years 2005-2013-of which, 52 were included in our data pool. The ESERA proceedings papers are based on the presentations at the ESERA 2013 conference. The analysis started in 2015 and, at that time, the proceedings from 2013 were the latest release available. A total of 960 manuscripts were submitted-of which, 339 were included in the proceedings [21], and 138 in our data pool. The data pool from the ESERA proceedings is comprehensive and extensive even though the data are from one year only.
The NorDiNa journal was selected as it is the leading peer-reviewed science education research journal in the Nordic countries. The ESERA conference proceedings were selected as they are from one of the largest SER conferences in the world, which attracts over 1500 visitors and over 1000 presentation proposals each time the conference is held. None of these publication venues had been analyzed in related works before this study. These venues provide an excellent overview of the area of educational research, and show the blind spots, if such exist.
Analysis Method
We used the didactic focus-based categorization method (DFCM) to analyze the papers. It has its origin in J. F. Herbart's didactical triangle as presented by Kansanen and Meri [22] and Kansanen [9] and it describes the formal instructional process in a holistic manner (Figure 1). The triangle presents the relationships between the three main actors in the instructional process: the content to be learned, the student and the teacher.
Study Target
The focus in this study is papers in biology, chemistry and physics education research papers at the upper secondary and tertiary levels (ISCED 3, 6-8) published in the NorDiNa journal from 2005 to 2013 and in the ESERA conference proceedings in the year 2013. These educational levels and disciplines were selected as they match the scientific background of the authors of this paper. The researchers are experienced tertiary level teacher educators, and they have expertise in biology, chemistry, physics, and computer sciences and subject pedagogics. Jarkko Lampiselkä is specialized in chemistry and physics didactics instruction and Arja Kaasinen is specialized in biology didactics instruction at the University of Helsinki. Both Päivi Kinnunen at the University of Helsinki, and Lauri Malmi at Aalto University, are specialized in computing education research and they are experts in the analysis methodology used.
The NorDiNa data pool comprises 138 papers from the years 2005-2013-of which, 52 were included in our data pool. The ESERA proceedings papers are based on the presentations at the ESERA 2013 conference. The analysis started in 2015 and, at that time, the proceedings from 2013 were the latest release available. A total of 960 manuscripts were submitted-of which, 339 were included in the proceedings [21], and 138 in our data pool. The data pool from the ESERA proceedings is comprehensive and extensive even though the data are from one year only.
The NorDiNa journal was selected as it is the leading peer-reviewed science education research journal in the Nordic countries. The ESERA conference proceedings were selected as they are from one of the largest SER conferences in the world, which attracts over 1500 visitors and over 1000 presentation proposals each time the conference is held. None of these publication venues had been analyzed in related works before this study. These venues provide an excellent overview of the area of educational research, and show the blind spots, if such exist.
Analysis Method
We used the didactic focus-based categorization method (DFCM) to analyze the papers. It has its origin in J. F. Herbart's didactical triangle as presented by Kansanen and Meri [22] and Kansanen [9] and it describes the formal instructional process in a holistic manner (Figure 1). The triangle presents the relationships between the three main actors in the instructional process: the content to be learned, the student and the teacher. The didactic triangle was further developed by Kinnunen [23] and Kinnunen and others [14] to consider the teacher's impact on the student-content relationship, the teacher's self-reflection and the student's feedback (shown as arrows 7 and 8 in Figure 1). In addition, as a part of further development, the didactic triangle was placed in a wider educational scope, so that it enabled the researchers to discuss the instructional processes at the institution, society and international. That is, whereas the original didactic triangle described the instructional process from a classroom point of view, the extended version of this triangle can also be used to describe instructional phenomena that take place at the educational organization level, society level, or even the international level. For instance, the extended didactic triangle can capture phenomena such as the goals a society sets for the level of education for its citizens (node 1 in Figure 1) and the degree to which citizens achieve these goals (arrow 5 in Figure 1).
The improved didactic triangle was used as the basis for the DFCM. It includes eight foci-of which, two have sub-foci ( Table 2). The sub-foci were developed to improve the resolution of the DFCM. The development of the categories that are based on the didactic triangle have been described in more detail in our previous publications [10,11,14,23]. Table 2. The list of didactic foci and their definitions. Each didactic focus operates at four levels: the course, organization, society, and international level. Examples include references to publications in which the corresponding focus has been a research topic.
Name of The Didactic Focus
Definition Example
Goals and contents
The goals and/or contents of a course, study module, and goals of a degree program. [24]
Relations between students and teacher
How students perceive the teacher (e.g., studies on how competent students think the teacher is) or the teacher perceives the students.
Students' understanding of and attitude about goals and contents
How students understand a central concept in the course or how interesting students find the topic. [28] 5.2 The actions (e.g., studying) the students do to achieve the goals Students' actions include all actions/lack of actions that are in relation to learning and achieving the goals. [29]
The results of the students' actions
The outcome of the study process, e.g., a study that includes a discussion of the learning outcomes after using a new teaching method. [30] 6. Relation between the goals/contents and the teacher How teachers understand, perceive or value different aspects of the goals and contents. [31] 7.
Additional Classification Attributes
In addition to foci analysis, we also investigated the educational scope, educational level, teacher education, discipline characteristics and research methodology points of view.
The educational scope attribute is based on the educational context in which the research was carried out. The attributes were the course, organization, society, and international levels. The course attribute means that the study was carried out in one or in a couple of classes at the same educational institution. The organization attribute means that the study was carried out in two or more classes at two or more educational institutions. The society attribute means that the study was carried out in two or more regions in the country. The international attribute means that the study involved two or more countries. The educational scope of the studies analyzed may reveal something about how generalizable the results are, but on the other hand, many education-related phenomena manifest themselves more only if a wide-enough scope is applied to the study. For instance, trends related to educational policy making and its consequences to the national science education curriculum require stepping out of the single course context. The scope of the educational level attribute is on which school level the study was carried out. The classifying attributes were primary (ISCED 1), lower secondary (ISCED 2), upper secondary (ISCED 3) and tertiary (ISCED 6-8). On some occasions, the lower levels were also incorporated in the study and therefore appear in the data pool.
The teacher education (TE) attribute is concerned with whether the research was carried out in a teacher education context. A typical context could be the pedagogically oriented subject studies given in the department of teacher education or the subject department, or the guided teaching practice carried out in a training school.
The scope of the discipline characteristic attribute is how the research foci are distributed among the disciplines. The classification attributes were biology, chemistry, physics and science. The classification was a straightforward process in most cases. Some research papers focused on several disciplines, such as biology and physics, and in these cases the paper was classified as a science paper. The number of the papers in this category was small and we see that they did not have a significant impact on the reliability and validity of the distributions.
The concern with the research methodology attribute is the kind of research methodology used in the data collection and/or in the data analysis of an empirical study. The categories were quantitative, qualitative, mixed and descriptive methodology. The classification was based on the information given in the methodology and the research results sections. Some studies were marked as N/A denoting that no empirical study design was applied. These could be a historical review of evolution of the force concept during centuries, a synopsis of a PhD thesis, for example, or some other theoretical paper.
Using the Method
The team comprised four researchers who worked in pairs. First, each researcher read through all the papers assigned to him or her and did the preliminary coding. Next, the pair compared and discussed their coding, and jointly agreed on which didactic foci described the paper the best. If the pair could not reach consensus or they were uncertain about the didactic foci of the paper, the whole research team read the paper and discussed it as long as needed until they could come up with a collective decision. The research group worked in this way volume-by-volume and strand by strand in the conference proceedings. We emphasize the importance of the collective analysis process. Discussion with other researchers is essential to ensure the quality of the analysis process.
It is important to read through the entire article and the classification should not be based only on the title and/or the abstract of the study. For example, even though we limited our data pool to the upper secondary and tertiary levels, some primary level education studies were included because the study focused on the education from the tertiary level teaching practice point of view. On the other hand, STEM education covers several educational contexts-some of which were within the scope of our study, but others fell outside the scope, such as geography or technology education.
On some occasions, the teacher education attribute was difficult to apply. The difficulties emerge in the manifold roles that the teachers and the students have in the educational context. An in-service teacher can be in a student role in a continuing education course and a tertiary level student can be in a teacher's role in teaching practice. We included school education studies if their focus was the upper secondary or tertiary levels. Instead, a tertiary level teaching practice course focusing on the student teacher's teaching in a primary level classroom was excluded unless the study focused on formal teacher education. Moreover, non-formal teacher activities, such as a summer camp in astronomy for physics teachers, were also excluded.
Other reasons for excluding papers were a lack of clarity on educational level, empirical data, or the theoretical nature of the study unless the topic was relevant to secondary and tertiary levels. Justification of the educational level based simply on the students' age can be difficult. For example, a 16-year-old student could be in a lower or an upper secondary level depending on the country or the date of birth. Some students begin school earlier or later than expected, they can repeat or skip the grade, and immigrant students are sometimes placed in a lower educational level than their relevant age group. Further, conceptual difficulties exist. The term "high school" could refer to the lower or upper secondary level depending on the researcher's vocabulary habits. If there was any unclarity, the paper would be excluded. If there were no empirical data, it would be difficult to justify which educational level the study was focusing on. In many cases, this related to the theoretical nature of the paper. For example, a study on the historical evolution of quantum mechanical concepts was regarded as relevant to upper and tertiary level education and the study was included in the data pool, but on the other hand, a study on comprehensive school curriculum (ISCED 1-2 levels) was excluded.
General Trends
The distributions within ESERA and NorDiNa are in line with each other. The most frequently studied aspects were the student-content/goals relationship (Focus 5, abbreviated F5) and teachers' impact on this relationship (F7) (see Table 3). Gul and Solzbilir [1] refer to Asshoff and Hammann [36] who found that distributions of biology education research papers at the ERIDOB conference and in the IJSE differ considerably from each other. The publications from the ERIDOB conference focused more on learning (similar to focus 5 in our study) whereas the papers in the IJSE were distributed more equally between categories. In general, our finding shows that the ESERA conference and the NorDiNa journal are better in line with each other. However, the students' learning (focus 5) is to some extent more frequently reported in the ESERA conference (51%) than in the NorDiNa journal (42%). Gul and Sozbilir explained the difference by noting the audiences of different kinds: journals are aimed at international readers, and the aim of conferences is sharing the recent findings among the participants. Taking this as hypothesis for a conclusion, it seems that audiences for NorDiNa and ESERA do not differ much from each other.
The least studied aspects were student characteristics (F2), teacher characteristics (F3), and the teacher-student relationship (F4). The proportion of the most frequently studied aspects was in line with the studies produced by Tsai and Wen [6], Lee and others [7], Lin and others [8], Tsai and others [5], Cavas [17] and Chang and others [18], but contradictory to some extent with the less-studied aspects. For example, students' understanding was one of the most frequently studied aspects in our study, but infrequent in the earlier studies. On the other hand, the educational goal was one of the less-studied aspects both in our study, the above-mentioned studies and in Cavas's [17] study. Nevertheless, we were not able to find a coherent pattern. In addition, we were not able to compare the missing gaps, as the earlier studies did not pay interest to this aspect.
On the other hand, compared with the O'Toole and others study [16] is challenging as they have used a categorization system more abstract than our system. For example, they use 'Scientific Literacy' as the main category but divide it into several sub-categories-some of which are like our foci areas, such as curriculum, and several others fall beyond the scope of our categorization system. Some other similarities also exist, for example, they have 'Teacher' and 'Student' categories as we have, but their category system comprises more sub-categories than our system (three in student foci areas 5.1-5.3 and four in teacher foci areas 7.1-7.4). Despite the differences in the categorization system, they also found that teachers and students are among the more frequently studied aspects. They show that teaching strategies are one of the more frequently studied aspects, which is similar to our finding (focus 7.3, teacher's pedagogical actions).
Discipline Characteristics
The results show ( Table 4) that there are differences between the ESERA and NorDiNa forums as well as among disciplines. In general, the research in ESERA seems to be more versatile when compared with NorDiNa. There are fewer empty cells in Table 4 in the ESERA data pool than in NorDiNa data pool and hence the research in ESERA has better coverage than in NorDiNa. On the other hand, it can be said that the research is more focused in the NorDiNa than in the ESERA forum. The difference is even more apparent if we compare the distributions of the subject-oriented papers between ESERA and NorDiNa and leave out the science-oriented papers. If we compare the subject orientation within the forum and between forums, it seems that the distribution of the biology oriented papers differs from the others to some extent. There, the research seems to focus more on pupils' preconceptions and attitudes (F5.1) and to the teachers' relation to the content (F6) than in the other disciplines. The distribution of chemistry education papers is in line with the findings by Teo and others [19]. They found that conceptual understanding is the most popular topic, reaching 26% of all papers, which is similar to our data pool. Further, they found that teachers' teaching was the second most frequently studied focus area, which is also in line with our finding. The similarity goes even deeper as Teo and others show that one of the most frequently studied specific focus areas was the participants' attitudes and beliefs, which is in line with our finding. Even though the number of chemistry education research papers in the study by Teos and others (N = 650) is much bigger than in ours, the many similarities infer that research in chemistry education focuses on the same foci in different publication forums. Similar coherence appears between our data and those of Gul and Sozbilir [1]'s study on biology education. Common to their study and ours is emphasis on the students' understanding, their learning results, and the teachers' teaching. Table 5 shows that course-level studies were more frequent in ESERA than in NorDiNa. In general, it seems that science education studies in our data pool focused on smaller courseand organizational-level studies, whereas society and international-level studies are rare. The earlier studies do not provide a comprehensive database for comparison, as few studies have paid any attention to this topic. Some studies give information on the educational levels, such as Gul and Sozbilir [1], Teo and others [19] and O'Toole and others [16]. However, it is a different classification attribute than the educational scope. Nor are the sample sizes [1] comparable with the educational scope, as a small sample size does not self-evidently refer to course-level study and a large sample size to international-level study, but they depend on the practical arrangements and the context of the study. Hence, the following finding has some novelty as it has not been reported well before our study. First, most of the studies in ESERA focus on the course level and less on the other levels, whereas the organization-level studies were the most frequent in the NorDiNa journal. Secondly, international-level studies are rare in both publication forums. There are several plausible explanations for the differences, but we propose that the foundational difference lies in the characteristic features of the publication channels. It is typical among SER forums that the quality requirements and the threshold level for a manuscript to be published in the journal are higher than in conference proceedings. This means that studies with smaller sample sizes and a more local context are more likely to exceed the threshold level of conference proceedings than a journal. More comprehensive data and bigger sample sizes are needed in order to meet the journal threshold level which is better achieved in two or more classrooms/schools study design. This trend produces a larger proportion of organizational-and society level studies in NorDiNa.
Educational Scope
The small number of international-level studies may originate from the reporting style. When the international-level studies are reported as a whole, they tend to be massive reports and therefore reported as monographs or similar publications, such as a PISA report or a TIMMS report. When international-level studies comprise research teams from different countries, such as in EU projects, it is plausible that each country reports their own case studies and therefore are classified as either society level studies or as organizational-level studies. Consequently, international-level studies may be rare, but this does not mean that international co-operation would be scarce. Nevertheless, the number of international-level studies is underrepresented in these publication forums and represents a gap in the data. Table 6 shows that tertiary level studies are more frequent in ESERA than in NorDiNa, whereas secondary level studies are more frequent in NorDiNa than in ESERA. Studies focusing on both the upper secondary and tertiary levels and from primary to tertiary are uncommon in both forums. Our data pool has been restricted to upper secondary and tertiary levels and therefore the comparison to earlier studies is limited. The distribution of studies in the NorDiNa journal seems to be more in line with studies by Gul and Sozbilir [1], Tsai and others [5] and Lin and others [8] studies than with the ESERA proceedings. The conclusion is tentative, but it seems that NorDiNa focuses more on school education whereas the tertiary level is better-represented in the ESERA conference proceedings. Further, this is corroborated with the findings of O'Toole and others [16] that studies focusing on secondary level education are the most frequent. It seems that the ESERA conference focuses more on tertiary level education than the NorDiNa journal. This could be explained by the different audiences. Conferences are aimed at a more research-oriented audience and the participants are typically researchers working in the tertiary sector, while the journal audience is more international and also aimed at school educators as well. However, the findings of the previous studies are not strictly comparable in terms of the correlation between the distribution of educational levels and the nature of the audience, if compared with each other and therefore the conclusions are highly tentative. This topic could be investigated more and represents some a gap in the field of educational research. Table 7 shows that the impact of the teacher education (TE) context appears in both data pools. In general, 53% of the research papers in ESERA proceedings focused on TE and 29% in NorDiNa. The teachers' impact on how the students' study (F7.3) is more frequent in the TE context than in the non-TE context. The trend is similar in the NorDiNa journal, but more emphasis is paid on teachers' self-reflection (F7.4), the teacher-content relationship (F6), teacher characteristics (F3) and teachers' understanding about students' actions towards achieving goals (F7.2). The proportions of TE studies in our data pool are notably bigger if compared with the studies by Gul and Sozbilir [1] on biology (2.8%), Teo and others [19] on chemistry (5%), Lin and others [8] (6%) or O'Toole and others [16] study (7%). Only the Cavas [17] study (23%) is in line with our study, but still reaches a smaller proportion than NorDiNa (29%) and an especially smaller proportion than ESERA (53%). The earlier studies do not provide comprehensive background information in order to carry out a comparative analysis, but it seems that the NorDiNa and ESERA forums are oriented more to teacher education publications than the IJSE, the JRST or SE [9], SSE or RSE [1], JSET or RSTE [2], or CERP or JCE [19]. Table 8 shows that a clear majority of the studies published in the ESERA proceedings and the NorDiNa journal are empirical: 87% and 81%, respectively. However, distributions in the ESERA and NorDiNa forums also differ in terms of research methodology. The quantitative and the qualitative research methodologies are equally represented in the ESERA proceedings, whereas the qualitative method is clearly the most frequent methodology in NorDiNa. Mixed method studies are similarly represented in the ESERA and NorDiNa data pools, but the numbers of mixed methodology studies differ across these publication channels. Descriptive studies were absent in NorDiNa, and studies with no empirical results (no methodology used, N/A) were much more frequent in NorDiNa than in ESERA. The proportions of empirical studies compared with other research papers are in line with many previously mentioned studies, such as those by Tsai and Wen [6] (87%), Lee and others [7] (87%), or Lin and others [8] (91%). However, the distributions of quantitative and qualitative research methodologies vary between our data pools and among our research and that of other researchers. It seems that a quantitative methodology is more frequent in the ESERA data pool than in many of the reference studies. In this sense, the distribution in the NorDiNa journal is closer to the reference journals used in O'Toole's and others' [1] study. Lee and others [8] noted that most empirical studies were qualitative among the highly cited papers. The proportion of qualitative vs. quantitative studies in ESERA is closer to that found by Gul and Sozbilir [1] and Teo and others [19] where these methodologies are equally as common. The mixed method studies were equally frequent in ESERA and NorDiNa, but the propotion of mixed methodology studies differ considerably in all forums. This varies from Gul's and Sozbilir's [1] 4% to Teo's and others' 52% [19]. Maybe this indicates journal characteristics or trends of some sort. Descriptive studies were absent from NorDiNa, and studies with no empirical results (no methodology used, N/A) were much more frequent in NorDiNa than ESERA. However, this is easily explained with a relatively large number of curriculum studies reported in NorDiNa as a curriculum renovation was underway in Norway during 2005-2013.
Discussion
The results showed that research in science education in this data pool focused much on students' understanding, attitudes, learning results, and on the teacher's impact on these aspects. Much less studied aspects were the teacher characteristics, what happens in the classroom while students are studying, how the teachers perceive the students' actions, attitudes and understanding, the students' feedback to the teacher, and teachers' self-reflection. These findings are in line with our previous findings in computing education research [10], science education research [14], and engineering education research [37].
This leads us to wonder why some aspects are studied more than the others in our data pool (ESERA, NorDiNa). We sought the answer to this question, but our results did not give clear answers. One explanation could be the research policy and the policy making. The Ministries of Education, the National Science Academies, and the National Boards of Education can have a significant impact on what research is done in the universities. Politicians need up-to-date information on teaching and learning and therefore these aspects could be funded more often. The researchers might even strengthen reciprocity and propose topics that will probably get the most funding. It is also possible that the less studied aspects will be published in other forums that focus more on the general pedagogy, such as the European Conference on Educational Research (ECER). However, we see that the ESERA conference should provide a forum broad enough for all kinds of research to appear to some extent at least. Therefore, the alternative publication channels would not seem to be a probable explanation, especially as the same aspects are absent from both the ESERA and the NorDiNa forums.
Nonetheless, we are not alone with our question, as earlier researchers searched for an answer to this too. For example, O'Toole and others [16] proposed that perhaps there has been an ongoing generational shift of some kind as the number of studies about teachers' pedagogical content knowledge, constructivism and public understanding about science are declining and other topics are increasing. This seems plausible and studies on the teacher's pedagogical content knowledge were uncommon in our data pools as well. On the other hand, this is also good news for those who are looking for less-studied research topics. It is unlikely that these issues have been fully researched and there is nothing new to discover. On the contrary, there is much to be discovered and one of the aims of our research was to find the less-studied aspects. A fundamental change in classroom dynamics is underway as learning environments are digitalizing. Clearly teachers' pedagogical content knowledge and students' study habits should be investigated much more.
The investigations published in NorDiNa and ESERA focus on course-and organizational-level investigations, whereas society and international-level studies were scarce. The finding has some novelty as this aspect has not been reported in earlier studies [16]. There are several tentative explanations ranging from ease and inexpensiveness of small-scale studies to the generalizability of the results. Readily collectable data and a faster publication phase are positive factors for small-scale studies. However, small scale is not a synonym for low quality. Results of a small-scale case study can be as relevant as those from a large survey. Society and international-level studies can be complex to arrange and expensive to carry out. These studies require research groups in which each researcher has their own topic. Consequently, larger studies are potentially divided into smaller investigations and specific study designs, which in turn can increase the number of course-and organizational-level studies.
Our data pool was limited to specific educational levels and therefore comparisons to earlier studies are tentative. We noticed that the studies in the NorDiNa journal focus more on school education and less on tertiary level education. The frequency of school education level studies and the infrequency of continuing education studies in NorDiNa is in line with the study by O'Toole and others [16]. Most of the studies reported were carried out at secondary and post-secondary levels (>40%), whereas primary (15%) and especially early childhood studies (2.5%) were less frequent. However, studies focusing on both the upper secondary and tertiary levels were rare, and the trend was similar in both journals. We conclude that this might indicate some sort of missing spot in science education research in Europe.
We sought to establish whether there are any discipline-based characteristics. Distributions between chemistry and physics education were similar and focused on the same foci areas, but biology education papers seemed to focus more on conceptual understanding than chemistry and physics education research. This view is supported by Asshoff and Hammann [36], who showed in their study that the most frequently investigated topic was the pupils' conceptual understanding-more specifically, their understanding about genetics and ecology. We propose that this might originate from the subject matter itself, which has taken giant leaps in DNA technology, molecular biology, virology, climate change, cloning, and health care over in recent decades. Our knowledge of human biology and environmental systems has been updated rapidly and the teachers, the pupils and the curricula have difficulties staying up to date. Consequently, perhaps, research on conceptual understanding has a more central role in biology education and therefore is a slightly more popular topic in our data pool as well.
We were interested to determine whether the teacher education (TE) context appeared in the data pool somehow. In other words, we asked ourselves whether the studies carried out in the TE context have any characteristic features. We found that the impact of the TE context appears in the data pool to some extent; however, the differences were not as apparent as one might have expected. The TE-related studies were distributed more evenly in different foci in NorDiNa, but they were more frequent in ESERA. This could corroborate the above mentioned finding that the NorDiNa journal focuses more on school education, and higher education studies are published elsewhere. The finding is new, as the earlier researchers had not expressed much interest in it.
Why are some methodologies more frequently used than others? In contrast with the general belief, studies using quantitative methodologies are not as frequent as we think, and qualitative methodologies have become predominant. There can be several reasons, such as research tradition for using a particular methodology. For example, qualitative research has been a popular research methodology in the Nordic countries for a long time, perhaps partially due to the influence of Marton's tradition in Sweden. We noticed the same trend in the ESERA data pool when we isolated the Nordic studies from the other European studies. The infrequency of mixed methodology studies is presumably due to its complexity as it calls for knowhow in both the quantitative and qualitative methodologies. These findings are in line with the previous researchers' findings [6][7][8]16], but also dissimilarities appeared. For example, the others reported a small number of theoretical papers, but in the NorDiNa data pool we noticed a somewhat notably large proportion, reaching 19% of all studies. We took a closer look into the data pool and found that there was curriculum review underway in Norway during 2005-2013 which solely explained the anomaly. As we deducted this distortion effect, the proportion of descriptive and theoretical studies was scarce and in line with the other studies in the field. However, the editorial line of the NorDiNa journal may have some additional impact on the frequency of publishing theoretical papers. The editorial board endorses authors to send descriptions of their ongoing projects and short abstracts of dissertations in the field wherein methodology descriptions might be limited or absent.
Also, the research team discussed the possible impact of the editorial line on the distribution of papers in different forums. True, to some extent the given topics published can be influenced by journal editors, and thus, with changes in editorship, the emphases of a journal or conference may change simply because of the views of the editors. However, we see that this is merely a theoretical note as the journal editors are not changed very often, and the editorial line is better seen as a joint vision of the editorial board rather than of one editor. On the contrary, it is more plausible that the editorial board will try to keep the editorial line the same from year to year, instead of having changes in the editorship. The situation might be different in conferences which typically have a thematic emphasis and which change from conference to conference and can have some impact on the versatility and manifoldness of the research papers.
Recommendations
We advise the authors to pay more attention to the internal coherence and clarity in their manuscripts. The didactic foci were not always clearly described in the paper, not even in the journal articles. The theoretical framework might introduce several foci, but not all of them were reported in the results section or one focus was emphasized more than the others. In several cases, we noticed some disparity between the title, the written research questions and what the results were about. Consequently, a classification method based solely on the title, the keywords, the abstract, or the research questions is apt to lead to misinterpretation. Comprehensive reading is therefore suggested. The approach is time consuming but improves the quality of the analysis.
The theoretical underpinnings of the DFCM are in school education and therefore it works best to illuminate the relations between a teacher, student, and the content in a formal education context. In the future, the DFCM will be further developed to cope better with informal learning phenomena. The resolution of the DFCM could be further developed with more subcategories. Currently, some interesting aspects such as teacher's pedagogical knowledge and evaluation do not show in the results to an extent which relates to their significance in education. In particular, the evaluation aspect could be addressed better.
Based on the findings from this study, it seems that science education research could be more versatile and comprehensive, and currently misses some important aspects from the didactic triangle point of view. Polarization of the research into fewer aspects may not have been this apparent within the academic community before our study. | 12,279.4 | 2019-12-12T00:00:00.000 | [
"Education",
"Chemistry",
"Biology",
"Physics"
] |
Review Maximum Entropy Approaches to Living Neural Networks
Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
Introduction
A great scientific challenge of this century is to describe the principles governing how ensembles of brain cells interact.Neural network models and theories suggest that groups of brain cells, containing tens to thousands of neurons, are collectively able to perform elementary cognitive operations.These operations include pattern recognition, associative memory storage, and coordination of motor actions in complex sequences [1][2][3][4].Although a thorough understanding of these processes would not completely tell us "how the brain works", it would bring us closer to developing a theory of mental activity rooted in physical laws.
While we are still far from understanding how networks with a thousand neurons operate, two recent developments have brought us much closer to describing interactions in ensembles containing tens of neurons.First, progress in multineuron physiology has allowed many laboratories to perform long-duration (~1 h) recordings from a hundred or more closely-spaced neurons [5][6][7].These recordings come from both calcium imaging [8,9] and multielectrode array experiments [10,11].Second, progress in theory has allowed the field to construct phenomenological models of activity in neuronal ensembles.One of the most popular models in this regard has come from the "maximum entropy" approach, which is the subject of this review [5,12,13].Also gaining attention, however, have been the efforts made with "Generalized Linear Models (GLMs)" (for a review and application of GLMs, see [14,15]).These models can replicate many of the statistics of neuronal ensembles, and may therefore give us insight as to how tens of neurons interact.Together, these advances have generated tremendous interest in physiology at the neural network level.
What is the maximum entropy approach?Briefly, it provides a way for quantifying the goodness of fit in models that include varying degrees of correlations [16].To explain this, consider a hierarchy of models, each including progressively more correlations to be fit.A first-order model is one that would seek to fit only the average firing rate of all the neurons recorded in an ensemble.A second-order model is one that would seek to fit both the average firing rates and all pairwise correlations.Similarly, an nth-order model is one that would seek to fit all correlations up to and including those between all n-tuples of neurons in the ensemble.The maximum entropy approach provides a way to uniquely specify each of these models in the hierarchy [17].It does this by assuming that the best model at a particular level will be the one that fits the correlations found in the data up to that level and that is maximally unconstrained for all higher-order correlations.Relaxing all other constraints can be accomplished by maximizing the entropy of the model, subject to fitting the chosen correlations.By uniquely specifying this hierarchy of models, the maximum entropy approach allows the relative importance of higher-order correlations to be evaluated.For example, this framework permits us to ask whether an accurate description of living neural networks will require fifth-order correlations or not.If so, then describing ensembles of neurons will be a truly difficult task.Alternatively, it could tell us that only second-order correlations are required.Either way, the maximum entropy approach provides us with a way to quantify how accurately we are fitting the system [16].
Why has the maximum entropy approach attracted so much attention?Two groups recently applied maximum entropy methods to neural data with apparent success [5,12].They sought to describe the distribution of states in ensembles of up to 10 neurons from retinal tissue and dissociated cortical networks.Interestingly, they both reported that, on average, ~90% of the available spatial correlation structure could be accounted for with a second-order model.This result suggested that living neural networks were not as complicated as initially feared.This conclusion has been widely noticed, and many groups have vigorously responded both positively and negatively to the original papers on maximum entropy by [5,12].At present, this area of research is very active and can be characterized as having healthy and lively discussions.Regardless of how these issues are ultimately resolved, it is likely that maximum entropy approaches to neural network activity will continue to be researched in computational neuroscience for years to come.
Here, we will review all of these developments.This paper is organized into five main parts.First, we will explain the maximum entropy approach as it was originally applied to neural ensemble recordings in 2006 [5,12] so that people unfamiliar with this method will be able to apply it to their data.Second, we will describe the problem that these first models had with temporal dynamics.Third, we will review a very recent attempt to incorporate temporal dynamics.Fourth, we will discuss criticisms and limitations of the maximum entropy approach.Fifth, we will mention topics for future research.
Maximum Entropy for Spatial Correlations
Figure 1 depicts the problem of describing activity states in ensembles of neurons.A small slice of neural tissue (e.g., retina or cortex) is placed on a multielectrode array (Figure 1A), where it can be kept alive for hours if it is bathed in a warm solution containing the appropriate salts, sugars and oxygen.For a full description of the methods used to maintain these tissues and for recording from them with multielectrode arrays, consult [5,12,18].Under these conditions, the neurons lying over the electrode tips can produce voltage "spikes" when they are active (Figure 1B).Spikes from many neurons over time can be represented as dots in a raster plot (Figure 1C).At any moment, the state of an ensemble of, say, five neurons can be represented by a binary vector (Figure 1D), where 1 s indicates spikes and 0 s indicate the absence of spikes.A first step toward understanding collective activity in these neurons would be to construct a model that could account for the probability distribution of states from the ensemble.The maximum entropy model, as first applied by [5,12] sought to predict the distribution of states with a second-order model, given only information about how often each neuron spiked, and the pairwise correlations of spiking between neurons.Note that in this form of the problem, we are only concerned with the spatial correlations between neurons.As we will see later, there are significant temporal correlations between neurons as well.In what follows, we will recapitulate the approach first used by [5,12] to solve the problem of spatial correlations in neural data.Earlier versions of the maximum entropy approach were first described in [16,17,19,20].Because spiking neural activity is either on (1) or off (0), we can represent the state of each neuron i by a spin, σ i , that can be either up (+1) or down (-1).There is a rich history of using this binary representation of neural activity in network models [2,21].To represent temporal activity as a sequence of numbers, the duration of the recording is broken down into time bins.We will discuss the implications of the width of these time bins more in section 5.A typical width is 20 ms, as in [5,12], but other widths have been used also.Thus, a 100 ms recording in which a single spike fired at 23 ms would be represented by the sequence: (-1,+1, -1, -1, -1) if the data were binned at 20 ms.With this representation, the average firing rate of neuron i is given by: where T is the total number of time bins in the recording, and the superscript t represents the bin number.The pairwise correlations are given by: where the angled brackets indicate temporal averaging.The state vector, V, represents the state of N spins (out of 2 N possible states) at a particular time bin, and is given by:
=
(3) To predict the probability distribution of these states, an analogy with the Ising model from physics [2,22] is made.This is because the equation that specifies the "energy" of the second-order maximum entropy model is the same as that used to specify the energy in the Ising model [12].Please note that this is only an analogy, as connectivity in the Ising model is strictly between units that are nearest neighbors, whereas connections in neural networks can potentially occur between any neurons.In this model, each spin will have a tendency to point in the same direction as the local magnetic field, h i , in which it is embedded.In addition, the state of each spin will depend on the interactions it has with its neighbors.When the coupling constant J ij is positive, spins will have a tendency to align in the same direction; when it is negative, they will have a tendency to be anti-aligned.In this language, the "energy" of an ensemble of N neurons can be given by: ( ) where summation in the second term is carried out such that i ≠ j.Please note that while this model is mathematically equivalent to the Ising model of a ferromagnet, no one is suggesting that neural firings are primarily influenced by magnetic interactions.Neural interactions are thought to be governed by synapses, which can be described largely by biochemistry and electrostatics [23].
Once energies are assigned to each state in the ensemble, it is possible also to assign a probability to each state.This is done the same way that Boltzmann would have done it, by assuming that the probabilities for the energies are distributed exponentially, in the manner that maximizes entropy [5,12,24,25]: where the state vector V j again represents the jth state of spins and where the denominator is the partition function, summed over all 2 N possible states of the ensemble.Note that this equation will make states with high energy less probable than states with low energy.
Because this distribution gives the probability of each state, the expected values of the individual firing rates 〈σ i 〉 m and pairwise 〈σ i σ j 〉 m interactions of the maximum entropy model can be extracted by the following equations, where the subscript m denotes model: and where σ i (V j ) is the activity (either +1 or -1) of σ i when the ensemble is in state V j .The expected values from the model are then compared to 〈σ i 〉 and 〈σ i σ j 〉 found in the data.Although the model parameters are initially selected to match the firing rates and pairwise interactions found in the data, a given set of parameters in general will not produce harmony of every neuron with its local fields and interactions for every state.To improve agreement between 〈σ i 〉 m , 〈σ i σ j 〉 m and 〈σ i 〉, 〈σ i σ j 〉, the local magnetic fields h i and interactions J ij are adjusted by one of many gradient ascent algorithms [26].Several groups [12,18] have used iterative scaling [27], as follows: where a constant α < 1 is used to keep the algorithm from becoming unstable.After adjustment, a new set of energies and probabilities is calculated for the states, and this leads to new values of h i and J ij .Adjustments can be made iteratively until the local field and interactions are close to their asymptotic values.Because the entropy is convex everywhere in this formulation, there are no local extrema and methods like simulated annealing are not necessary.After iterative scaling, the final values of h i and J ij are then re-inserted into equation ( 4) to calculate the energy of each state, and then this energy is inserted into equation ( 5) to calculate the probability of observing each state V.This process of adjustment can be time consuming because it is computationally intensive to calculate the averages in equations 6 and 7, which requires summing over 2 N terms.Faster methods have been developed which allow larger numbers of neurons to be analyzed [26,28,29].These methods exploit ways of approximating the averages in equations 6 and 7.
We now explain how the performance of the model is evaluated.Recall that models of several orders can be used to capture the data.A first-order model accurately represents the firing rates 〈σ i 〉 found in the data, but assumes that all higher-order interactions, like 〈σ i σ j 〉, are independent and can be given by the product of first-order interactions: 〈σ i σ j 〉 = 〈σ i 〉•〈σ i 〉.Schneidman and colleagues [12] denoted the probability distribution produced by a first-order model by P 1 .A second-order model, as described above, takes into account the firing rates and pairwise interactions, and produces a probability distribution that is denoted by P 2 .For an ensemble of N neurons, an accurate N th -order model would capture all the higher-order interactions found in the data, because its probability distribution, P N , would be identical with the probability distribution found in the data.The entropy, S, of a distribution, P, is calculated in the standard way: Note that the entropy of the first-order model, S 1 , is always greater than the entropy of any higherorder models, S 2 … S N , because increased interactions always reduce entropy [12,30].The multiinformation, I N , is the total amount of entropy produced by an ensemble, and is expressed as the difference between the entropy of the first-order model and entropy of the actual data [12,31]: The amount of entropy accounted for by the second-order maximum entropy model is given by: The performance of the second-order maximum entropy model is therefore quantified by the fraction of the multi-information that it captures, denoted by the ratio r: This fraction can range between 0 and 1, with 1 giving perfect performance.When the values of h i and J ij are computed exactly, the ratio can also be expressed as [5,30]: where D 1 is the Kullback-Leibler divergence between P 1 and P N , given by: and D 2 is the Kullback-Leibler divergence between P 2 and P N : Shlens and colleagues used a ratio derived from the Kullback Leibler divergence [5], while Schneidman and colleagues used a ratio of multi-information [12].When both of these groups applied the second-order model to their data, they found that it produced ratios near 0.90 on average, meaning that the model could account for about 90% of the spatial correlation structure [5,12].While this suggests that the model predicted the probabilities of most states correctly, it is worth noting that even with ratios near 0.90, these models still failed to accurately predict the probability of some states by several orders of magnitude.An intuitive way to think about this ratio r in equation ( 13) is that it measures how much better a second-order maximum entropy model would do than a first-order model.From this perspective, some errors are unsurprising.
Another particularly important potential source of error arises from estimating the entropy of a distribution of states produced by an ensemble of spike trains.Generally, if the firing rates of the neurons are relatively low and the recording duration is short, there may not be enough data to adequately sample the distribution of all possible states, making estimates of the entropy inaccurate.This problem becomes exponentially worse as the number of neurons in the ensemble increases [32][33][34].
Aware of this issue, Shlens and colleagues noted that each of all the binary states in their recordings occurred 100-1,000 times [5], indicating minimal errors.Schneidman and colleagues noted that all of their recordings were on the order of one hour long, which they argued made undersampling unlikely for their firing rates [12].There is a large literature on methods of entropy estimation [32,[34][35][36], and this area is being actively researched.The main point here is that entropy estimation errors may lead to inaccuracies in the maximum entropy model, even if values of r appear to be high.Long recordings are necessary to minimize these errors.
The Issue of Temporal Correlations
After these relatively high ratios were reported, it appeared that the maximum entropy model could account for spatial correlations, at least in relatively small ensembles of neurons (N = 4 to 10).But brains are faced with the task of responding to patterns in space and time.It was therefore of interest to see whether temporal correlations played a substantial role in the activity generated by samples of brain tissue.If they did, future models of ensemble activity would need to account for temporal as well as spatial structure.
Tang and colleagues [18] looked at the issue of temporal correlations in a very elementary way.If correlations across time were relatively unimportant, they reasoned, activity states in an ensemble of neurons would occur in random order, with no particular preference for one state to be followed by another.
To test this, they compared the distribution of sequence lengths in actual neural ensembles with those generated by randomly selecting states from the model.Here, a "sequence" of length L was defined as L time bins in which at least one neuron was active in every time bin (Figure 2).Also, a sequence had to be bracketed at the beginning and at the end by time bins with no activity, as shown in Figure 2B.
Interestingly, they found that most sequence lengths from the data were significantly longer than those randomly drawn from the maximum entropy model (Figure 2C).This indicated that temporal correlations were important, and challenged future maximum entropy models to account for them.While this was an important step forward, it was still unclear whether temporal correlations played a major or a minor role in governing activity in neural ensembles.
Incorporating Temporal Correlations
Soon after it became clear that temporal correlations were important, Marre and coworkers modified the existing maximum entropy model to account for them [37].Here we will overview how they did this, and how this ultimately led to an estimate of the relative importance of temporal correlations, something that we consider to be a very significant advancement.Marre and colleagues realized that if temporal correlations were to be included, the new expression for the energy had to have the form: ( ) where now the state vector V represents a particular state of N spins at times t, t + 1, and t + 2. The terms T' and T'' are temporal coupling constants, relating spins to each other across one and two time steps, respectively.For simplicity, here we will consider only temporal correlations two time steps into the future.We could account for more time steps simply by adding more T terms, but this dramatically increases the dimensionality of the problem, as we will discuss more later.Although all of these additional terms might seem to complicate the optimization process, it is still possible to combine each matrix of temporal couplings (T' and T'') with the original matrix of spatial couplings (J) so that they form one large composite matrix.This is shown in Figure 3.In this form the adjustment of model parameters can take place just as before, using iterative scaling, for example, on the terms in h, J, T' and T''.(a) Schematic of a four spin system at three different time steps (t, t + 1, and t + 2).For simplicity, only the correlations between σ 1 and σ 3 are shown over space and time.The solid line represents spatial correlation; the dotted line represents temporal correlation one time step into the future; the dashed line represents temporal correlation two time steps into the future.(b) Matrices required for the model only of spatial correlations for the four spin system.The local magnetic field is represented by h, and the spatial coupling constants by J. (c) Composite matrix required for the model of spatial and temporal correlations up to two time steps for the four spin system.The local magnetic field h is as before, but the matrix of coupling constants is considerably expanded.The matrix of spatial coupling constants J occurs whenever interactions among spins at the same time step occur.The matrices of temporal coupling constants, T' and T'', occur whenever there are interactions among spins at temporal delays of one and two time steps, respectively.Note that transposed matrices are used below the diagonal, indicating that delayed correlations are treated here as if they are symmetric in time, following [37].
To test the performance of this model, it is necessary to consider the distribution of states of N spins at times t, t + 1, and t + 2 (Figures 4A, B).In this formulation, the state vector has 3N elements, and can take on 2 3N different configurations.As before, the probability distribution of these states from the data is compared to the distribution produced by the model (Figure 4C).But now it is possible to tease out the relative importance of spatial and temporal correlations.This can be done by creating three different types of models: one that accounts for spatial correlations only, one that accounts for spatial correlations and temporal correlations only one time step ahead, and one that accounts for spatial correlations and temporal correlations two time steps ahead.All of these models can be made to produce a distribution of states for N neurons over three time steps.For the model that only has spatial correlations, states are randomly drawn from the distribution and concatenated into sequences of length 3.For the model with temporal correlations of one time step, the first state is randomly drawn and subsequent states are then drawn from the subset of states in the distribution whose first time step matches the last time step of the previous state.In this way, the configuration of activity found in N neurons always changes in time in a manner that agrees with the temporal correlations found in the distribution produced by the model.The model with temporal correlations of two time steps already has a distribution of 2 3N states and does not need to be concatenated to produce states of three time steps.Once temporal distributions for all three models are obtained, they can be compared as before by using a ratio of multi-information.The results of this comparison are shown in Figure 5.The ratio of multi-information (see text) is plotted for the three maximum entropy models: One that accounts for spatial correlations only; one that accounts for spatial and temporal correlations of one time step; one that accounts for spatial and temporal correlations of two time steps.Ratios were obtained for ten ensembles of four neurons (N = 4), each drawn from a slice of cultured cortical tissue prepared by the Beggs lab.Error bars indicate standard deviations.Here, all three models were evaluated on the basis of how well they accounted for the distribution of states containing three time steps, where there were 2 3N = 4096 possible states.Note that this number is far more than the number of states in the spatial task, where 2 N = 16.Because the dimensionality has increased dramatically, it is perhaps not surprising that the ratios are below 0.65, the value obtained when spatial models were applied to spatial correlations only between spiking neurons in cortical tissue for N = 4 [18].Adding more temporal correlations to the model clearly improves its performance, but also reveals a fraction of temporal correlations that are not captured by the model.These preliminary results should be interpreted with caution, however, as they are obtained from a relatively small ensemble size.Calculations for larger ensemble sizes are challenging because of the dramatic increase in dimensionality.
Although this is only a small sample from which to draw conclusions, three features of this result are worth highlighting.First, note that the ratios here are below the ~0.65 ratio values obtained when the spatial model was applied to spatial correlations alone between (N = 4) spiking neurons in cortical tissue [18].In interpreting this result, we must remember that the dimensionality of the temporal problem (2 3N = 4,096) is substantially greater than the dimensionality of the spatial problem (2 N = 16).If temporal correlations play any role, it would therefore be reasonable to expect a somewhat lower ratio when only one or no time steps are included in the model.Second, note that including two time steps of temporal correlations nearly doubles the ratio obtained by the spatial model.This quantifies how important temporal correlations are: they account for roughly half of the correlation structure captured by the model.Third, note that even when two time steps are included in the model, a portion of correlations in the data are still unaccounted for.For example, if we were to fit the three points in the plot with an exponential curve, it would asymptote somewhere near 0.75, suggesting that about 25% of the spatio-temporal correlation structure would not be captured by the model, even if we were to include an infinite number of temporal terms in our correlation matrix.Conclusions here are preliminary, though, as this example is taken from a small sample of neurons prepared by our lab.For example, it is presently unclear whether an exponential function, rather than a linear one, should be used here.We are in the process of analyzing larger samples of neurons to clarify this issue.Despite the fact that the temporally extended maximum entropy model is not able to account for some fraction of correlations, the model still may be useful in giving us insight as to how correlations are apportioned in ensembles of living neural networks.This issue was not previously appreciated, and represents a gap that must be filled by future generations of models [18,37,38].
In some ways, the fact that all of the spatio-temporal correlations are not captured is not surprising.The maximum entropy models used here are all related to the Ising model.In this formulation, correlations between neurons are assumed to be symmetric in time.But abundant evidence indicates that neural interactions over time are directional and therefore asymmetric [15,39,40].In addition, the Ising model is strictly appropriate only for problems in equilibrium statistical mechanics; it can tell us how spins in a magnetic material will align at a given temperature only after the material has been allowed to settle, without further perturbations.A local neural network is probably not an equilibrium system, however, as neurons are constantly receiving inputs from other neurons outside the recording area.For neural networks, non-equilibrium statistical mechanics may be more appropriate.Unfortunately, this branch of physics is still undergoing fundamental developments, and the question of how to theoretically treat non-equilibrium systems is still very much open [41,42].
Limitations and Criticisms
A serious issue of concern is how well the model scales with the ensemble size, N. Ideally, one would like to apply the model to ensembles of N ≥ 100 neurons, as many current experiments are based on simultaneous recordings from populations of this size [7,10,11,43].Several issues may make it difficult to extend maximum entropy models to larger ensembles, though.
The first challenge is computational and is rooted in the problem that 2 N states are needed for the spatial model and 2 3N states are needed for the temporal model with only three time steps.Here, two types of solutions are relevant: exact and approximate.When N ≥ 30, it becomes computationally unmanageable to solve these models exactly.If we are interested in approximations, however, it is possible to work with much larger values of N. Several groups have worked on ways to rapidly approximate the model for relatively large ensembles [44,45].These approximations, however, were performed only for the second-order model.In both the exact and the approximate cases, solving the model for large numbers of neurons is computationally challenging.It seems likely that we will be able to record from more neurons than we can analyze for many years to come.
Another issue related to large N is that exponentially more data are needed to accurately estimate entropy, as mentioned previously in section 2. Recall that the number of possible states in an ensemble of N neurons is given by 2 TN for the spatio-temporal correlation problem, where T is the number of time steps to be included in the model.For an ensemble of ten neurons to be modeled over three time bins, it would take ~ 3.4 years of recording to ensure that each bin in the distribution of states was populated 100 times, even if the ensemble marched through each binary state at the rate of one per millisecond.This is of course a very conservative estimate, as it is unreasonable to assume that each state would be visited in such a manner.The criterion of 100 instances per bin was used by Shlens and colleagues for minimal entropy estimation bias [5].Such long recordings are obviously unattainable, so entropy estimation errors are inevitable.
Roudi and colleagues [33] have brought up a different set of concerns related to N. They argue that second-order maximum entropy models that seem to work for relatively small ensembles of neurons are not informative for large ensembles of neurons.They showed that there is a critical size, N c , that is given by: where ν is the average firing rate of the neurons in the ensemble and δt is the size of the time bin.For many data sets, ν •δt ≈ 0.03, giving N c ≈ 33.When the number of neurons in the ensemble is below this critical size (N < N c ), results from the model are not predictive of the behavior of larger ensembles.But when the size of the ensemble is above this critical size (N > N c ), the model may accurately represent the behavior of larger networks.Thus, Roudi et al. [33] would argue that the results reported by [12] and [18] are not really surprising, as these ensemble sizes are below the critical size, where a good fit is to be expected.In the case of Shelns et al. [5], however, this is not true, as their neurons had very high firing rates and thus had a relatively small critical ensemble size.Indeed, when Shlens and colleagues examined whether or not the model fit for ensembles of up to 100 neurons in the retina, they found that it fit quite well [10].All of this is consistent with the arguments presented in [33].These results in the retina are promising, but the circuitry there is specialized and it remains to be seen whether similar results can be obtained in cortical tissue.One way to overcome the critical size restriction would be to increase the bin width, δt, which could bring N c down to a reasonable size, particularly if the neurons are firing at a high rate.Roudi and colleagues point out that this could create other problems, though, as temporal correlations will be lost when large bin widths are used.We should note that these arguments about a critical size are based on the assumption that the model under consideration is second-order only [33].These arguments may not apply if higher-order models are used.But models with higher-order correlations pose challenges of their own.One of the main reasons the maximum entropy approach to neural networks [5,12] generated interest was because it suggested that neuronal ensembles could be understood with relatively simple models.If almost all the spatial correlation structure could be reproduced without knowledge of higher-order correlations, it seemed that there would be no need to include coupling constants for interactions involving three or more neurons.While this result indeed seems to hold true in papers where the maximum entropy model was applied to retinal neurons [5,10,12], it should be noted that work using cortical tissue indicated that somewhat less of the spatial correlation structure could be accounted for by a second-order model [18,38].This result raised the possibility that higher-order correlations are more abundant in the cortex than in the retina, a conjecture that awaits more rigorous testing.A third-order coupling constant could account for situations where two input neurons would drive a third neuron over threshold only when both inputs were simultaneously active.If the cortex is to compute, it seems obvious that this type of synergistic operation must be present.But including third-order correlations will again greatly expand the amount of computational power needed to develop an accurate model.Future maximum entropy models of large ensembles of cortical neurons will necessarily be computationally challenging.
Future Directions
Here we present a brief list of some topics for future research in the area of maximum entropy models.Several of these problems are already being actively researched: How does the performance of maximum entropy models compare with that of generalized linear models (GLMs) [14]?Are there circumstances in which one model is clearly better than the other?
When extending the maximum entropy model in time, is it possible to have temporally asymmetric couplings in the matrices T' and T''?If so, does this significantly improve performance over the case where the couplings are symmetric?
How do maximum entropy models constructed with data from spontaneous activity differ from models constructed with data where responses were conditioned on a stimulus?
Can maximum entropy models generally be used to make inferences about network connectivity, as was done by Yu and colleagues [38]?How do such inferred networks differ from those obtained by measures of effective connectivity [46,47]?Do the highly correlated states identified by the maximum entropy model really constitute a type of error-correcting code, as was originally claimed by Schneidman and colleagues [12]?
Maximum entropy models take as inputs average values of firing rates and pairwise correlations.To what extent does averaging obscure more subtle, higher-order interactions?
Can ways be found to construct accurate maximum entropy models for large ensembles of neurons from recordings only one or two hours long?Can approximations or inferences greatly ease this sampling problem [28]?
Conclusions
Maximum entropy models of neuronal ensembles have been reported to perform well in capturing spatial correlation structure for ensembles of about 10 neurons.It remains to be seen whether these models can be successfully extended to larger ensemble sizes in cortical tissue.In addition, temporal correlations may make up about half of the available correlations in neuronal ensembles.The original maximum entropy model for spatial correlations has been augmented to include temporal correlations, but even when extended, it fails to capture a substantial portion of spatio-temporal correlations.This weakness may be because these models are rooted in equilibrium statistical mechanics; perhaps the brain should be viewed more appropriately as a non-equilibrium system.Future versions of maximum entropy models will be challenged to account for three things: (1) temporal activity, (2) activity in large ensembles of neurons (N ≥ 30) and (3) higher-order correlations.
As maximum entropy models become more complex to better match these aspects of the data, they inevitably will become less similar to the Ising model from which they are descended.In the limit, this approach could develop into a very complicated data fitting exercise.Such an exercise would not necessarily be bad, but the original aim of applying statistical mechanics to the brain will have been lost.The hope was that we may have only needed to know a few details and one principle: construct the model so as to maximize entropy.The reality at this point is still unclear, but it appears that we will need to know many more details.Because of this, we might need to know more than one principle as well.What these principles are, or even if they exist at all, is still an open question.This problem is not limited to neural networks, but is common to complex systems in general.It is not yet clear whether a conceptual revolution, like those brought on by Copernicus in astronomy, Heisenberg and Schrodinger in quantum mechanics, or Einstein in relativity, is in the future of neuroscience.But it is encouraging to note that the painstakingly adjusted epicycles of Tycho Brahe eventually gave way to the more principled descriptions of Kepler, which finally set the stage for Newton's great synthesis [48].We can only hope that more detailed descriptions of living neural networks will provide pave the way for a similar transformation in our conception of the brain.
Figure 1 .
Figure 1.The problem to be solved.
Figure 2 .
Figure 2. Temporal correlations are important.(a) Activity from many neurons plotted over time.Boxes highlight an ensemble of four neurons over six time bins.(b) Within the boxes, there was activity for four consecutive time bins, bracketed by no activity at the beginning and the end.This is a sequence of length L = 4 (see text).(c) Sequence length distributions from actual data were significantly longer than those produced by random concatenations of states from the model.This suggests that temporal correlations play an important part in determining activity in neuronal ensembles.
Figure 4 .
Figure 4. Distribution of states for a model with temporal correlations (a) An ensemble of four neurons is selected from the raster plot.(b) Here, activity over a span of three time bins (t, t + 1, and t + 2) is considered one state.(c) The distribution of states is plotted for the model and for the data. | 8,722 | 2010-01-13T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Dosimetric Study of Automatic Brain Metastases Planning in Comparison with Conventional Multi-Isocenter Dynamic Conformal Arc Therapy and Gamma Knife Radiosurgery for Multiple Brain Metastases
Objective The efficacy of stereotactic radiosurgery (SRS) using Gamma Knife (GK) (Elekta, Tokyo) is well known. Recently, Automatic Brain Metastases Planning (ABMP) Element (BrainLAB, Tokyo) for a LINAC-based radiation system was commercially released. It covers multiple off-isocenter targets simultaneously inside a multi-leaf collimator field and enables SRS / stereotactic radiotherapy (SRT) with a single group of LINAC-based dynamic conformal multi-arcs (DCA) for multiple brain metastases. In this study, dose planning of ABMP (ABMP-single isocenter DCA (ABMP-SIDCA)) for SRS of small multiple brain metastases was evaluated in comparison with those of conventional multi-isocenter DCA (MIDCA-SRS) (iPlan, BrainLAB, Tokyo) and GK-SRS (GKRS). Methods Simulation planning was performed with ABMP-SIDCA and GKRS in the two cases of multiple small brain metastases (nine tumors in both), which had been originally treated with iPlan-MIDCA. First, a dosimetric comparison was done between ABMP-SIDCA and iPlan-MIDCA in the same setting of planning target volume (PTV) margin and D95 (dose covering 95% of PTV volume). Second, dosimetry of GKRS with a margin dose of 20 Gy was compared with that of ABMP-SIDCA in the setting of PTV margin of 0, 1 mm, and 2 mm, and D95=100% dose (20 Gy). Results First, the maximum dose of PTV and minimum dose of gross tumor volume (GTV) were significantly greater in ABMP-SIDCA than in iPlan-MIDCA. Conformity index (CI, 1/Paddick’s CI) and gradient index (GI, V (half of prescription dose) / V (prescription dose)) in ABMP-SIDCA were comparable with those of iPlan-MIDCA. Second, PIV (prescription isodose volume) of GKRS was consistent with that of 1 mm margin - ABMP-SIDCA plan in Case 1 and that of no-margin ABMP-SIDCA plan in Case 2. Considering the dose gradient, the mean of V (half of prescription dose) of ABMP-SIDCA was not broad, comparable to GKRS, in either Case 1 or 2. Conclusions The conformity and dose gradient with ABMP-SIDCA were as good as those of conventional MIDCA for each lesion. If the conditions of the LINAC system permit a minimal PTV margin (1 mm or less), ABMP-SIDCA might provide excellent dose fall-off comparable with that of GKRS thereby enabling a short treatment time.
Introduction
Gamma Knife (GK) (Elekta, Tokyo) stereotactic radiosurgery (SRS) (GKRS) can treat multiple small brain metastases easily with multi-isocenter planning [1]. Even if the brain metastases have a radio-resistant histological nature such as those from melanoma and renal cell carcinoma, they can be treated effectively by GKRS [2][3]. As shown in most of the formerly published studies, GK provides good conformity and an excellent dose gradient [4], but shows less homogeneity, which may be an advantage for tumor ablation as well [5]. However, the treatment time (beam-on-time) in GKRS will be long in the case of numerous brain metastatic lesions, though shorter in most cases than that possible with CyberKnife [6].
Linear accelerator (LINAC)-based dynamic conformal multi-arc (DCA) SRS and stereotactic radiotherapy (SRT) are also effective for brain metastases with a small number of tumors [7]. Conventional DCA SRS/SRT, for example, planned with iPlan (BrainLAB, Tokyo), needs a set of DCA for each individual lesion, and the treatment time would be long as well. Recently, Automatic Brain Metastases Planning (ABMP) Element (BrainLAB, Tokyo) was commercially released. It covers multiple off-isocenter targets simultaneously inside a series of multi-leaf collimator fields and enables SRS/SRT with a single group of LINAC-based DCA for multiple brain metastases. In ABMP-DCA, multiple brain metastases up to 10 tumors are covered in a micro-multi-leaf collimator field. Figure 1 shows an example of multiple brain tumor SRS by ABMP. All tumors are irradiated by one multi-arc group (Figure 1, Right). Each tumor is targeted by some of the 10 arcs, five arcs by 'go' and 'return'. Three tumors (arrows) are irradiated by 'return' arc in this case (Figure 1, Left). In this way ABMP facilitates a shorter treatment time. group. Each tumor is targeted by some of the 10 arcs, five arcs by 'go' and 'return,' in this case (Right).
In this study, dosimetry of ABMP (ABMP-single isocenter DCA (ABMP-SIDCA)) for SRS of multiple small brain metastases was evaluated in comparison with that of conventional multiisocenter DCA (iPlan-MIDCA) and GKRS.
Materials And Methods
The Research Ethics Boards of Aichi Medical University (No.2015-H332) and Nagoya Kyoritsu Hospital approved this study. Informed consent was waived for this study. Simulation planning was performed with ABMP-SIDCA, and GKRS was made in two cases with multiple small brain metastases, which had originally been treated with iPlan-MIDCA. Both cases had nine metastatic brain tumors.
Case 1
Case 1 was a 71-year-old female with multiple brain metastases from breast carcinoma. All nine brain tumors were treated by conventional iPlan-MIDCA. A planning target volume (PTV) margin of 2 mm was added. A leaf margin of 1 mm was adopted. D (95%) (dose to 95% volume of PTV) was 95% dose (=19 Gy) of 20 Gy. In iPlan-MIDCA, nine tumors were treated with four DCAs each. A three-day treatment was done, in which three of the nine tumors were treated each day.
Case 2
Case 2 was a 76-year-old female with multiple brain metastases from papillary thyroid carcinoma. All nine tumors were treated by iPlan-MIDCA. Each tumor was treated with four arcs. A PTV margin of 1 mm was added. A leaf margin of 1 mm was adopted. D (95%) was 100% dose of 22 Gy. A three-day treatment was done, in which three tumors were treated each day. In this case we gave a greater dose to the lesions than in Case 1 because thyroid carcinoma is thought to be relatively radio-resistant.
Imaging protocol and version of radiation therapy planning workstations
To determine GTV (=clinical target volume (CTV)), contrast-enhanced magnetic resonance imaging (MRI) and computed tomography (CT) were acquired. A 3.0 tesla scanner (Siemens Skyra Ver.VE, Siemens, Tokyo) and a 16 detector CT (Aquilion/LB, Toshiba, Tokyo) were used. The references for dose calculation in treatment planning were the CT images in iPlan Image (version 4.1) and iPlan Dose (version 4.5.3) for iPlan-MIDCA and ABMP Elements (version 1.0) for ABMP-SIDCA. Pencil beam convolution algorithms were used in both. Leksell GammaPlan (LGP, version 10.1.1) treatment-planning workstation (Elekta, Tokyo) was used for GKRS. LGP adopted water reference dosimetry [8].
Dosimetric analysis
First, a dosimetric comparison was done between ABMP-SIDCA and iPan-MIDCA with the setting with PTV margin of 2 mm and D95=95% dose (19 Gy) in Case 1 and PTV margin of 1 mm and D95=100% dose (22 Gy) in Case 2. A leaf margin of 1 mm was adopted in iPlan-MIDCA in both cases. ABMP does not have the function of leaf margin selection. The indices of dosimetry were as follows: as for conformity index (CI), reverse of Paddick CI [9] was evaluated. Reverse of Paddick CI = (TVPIV)2 / (TV x PIV), where PIV is the prescription isodose volume, TVPIV is Target Volume covered by PIV, and TV is the target volume. CI was considered acceptable when smaller than two. The gradient index was calculated with the formula: GI = PIVhalf / PIV [10]. PIVhalf = Prescription isodose volume, at half the prescription isodose. The maximum dose in PTV and the minimum dose in GTV were also compared. Maximum doses to OARs (eyes, lenses, brainstem, and optic pathways) were also evaluated.
Second, dosimetry of GKRS ( Figure 3B) was compared with that of ABMP-SIDCA with different PTV margins of 0, 1 mm, and 2 mm with the setting of D95=100% dose (20 Gy) in both Case 1 and Case 2. All nine tumors were treated with a one-isocenter plan in LGP in both cases. The percent isodose adopted as target margin in GKRS was 60% to 95% (median 85%) in Case 1 and 50% to 90% (median 80%) in Case 2. PIV (=V (prescription dose)) and V (half of prescription dose) were evaluated. The collected dosimetry data were analyzed using R version 2.14.2 (The R Foundation for Statistical Computing). The paired t-test was used to examine differences between indices of treatment plans. Differences with p < 0.05 were regarded as significant.
Comparison between ABMP-SIDCA and iPlan-MIDCA
There was no significant difference in the means of PTV in either Case 1 ( Table 1) or Case 2 ( Table 3), though the manner of contouring differed, namely 3D brush contouring in ABMP and 2D brush contouring slice by slice in iPlan. Neither CI nor GI was significantly different between ABMP-SIDCA and iPlan-MIDCA. The maximum dose of PTV and minimum dose of GTV were significantly larger in ABMP-SIDCA than in iPlan-MIDCA. Both in Case 1 ( Table 2) and in Case 2 ( Table 4) the maximum doses to eyes and lenses were minimal in iPlan-MIDCA, because the arcs of iPlan-DCA were cut manually with the intention of sparing them. Table 5 showed the comparison between GKRS and ABMP-SIDCA in Case 1. The volume of GTV in GKRS was close to that of no margin-PTV in ABMP-SIDCA. However, the volume of PIV in GKRS was close to that of 1 mm-margin-PTV in ABMP-SIDCA. V (1/2 prescription dose) in GKRS was significantly smaller than that of 1 mm margin V(1/2 prescription dose) in ABMP-SIDCA (p=0.007) but in GI (V (1/2 prescription dose) / V (prescription dose)) was not different (4.74 and 5.26, respectively in mean, p=0.34). This showed that the same level of dose fall-off around the target was obtained in ABMP-SIDCA, as compared with GKRS.
Discussion
The effectiveness of GKRS for multiple small brain metastases has been reported repeatedly [1][2][3]. The need for SRS/SRT for brain lesions is expanding, since recently it is expected that an increasing number of primary cancer lesions will be controllable for long periods. LINAC-based SRS/SRT is also reported to be effective for brain metastases with a small number of tumors [7]. ABMP is a newly developed treatment planning system for multiple brain metastases. In this study, both CI and GI in ABMP-SIDCA were good as compared with conventional iPlan-MIDCA.
In GKRS planning procedures, only GTV is usually contoured and the PTV margin is not defined in most cases, especially for round well-contrast-enhanced and well-demarcated metastatic tumors. However, when an isocenter is placed for a lesion, some intentional margin (maybe 1 mm or less), which is not defined as PTV margin, is usually added for the PIV around the GTV. If the lesion is very small like those of the present cases, PIV/GTV tends to be large in GK, because this additional margin for the PIV around the GTV would be relatively large against the small volume of GTV.
In this study, contouring of GTV and definition of PTV, or decision of PTV margin, was also evaluated in ABMP as compared with GK. The PIV of GKRS was very small, or non-defined PTV margin was minimal (1 mm or less). Not only precise GTV contouring but also deciding the optimal PTV margin is very important in SRS/SRT. Only evaluation of CI and GI would not be sufficient, as PTV itself might be larger in SRS plans by LINAC systems. If the radiotherapy system is deficient in accurate targeting and a wide PTV margin is employed, the surrounding normal brain may be damaged by wide diffusion of the radiation dose. To obtain optimal treatment results with less possibility of adverse effects on the surrounding normal brain, the same as by GKRS, efforts at quality control and quality assurance using LINAC systems to reduce possible uncertainties including image distortion, patient setup error and to avoid the need to add large PTV margins are indispensable. In this study, in the comparison between GKRS and ABMP-SIDCA, ABMP-SIDCA provided a good dose fall-off compatible with GKRS, if the minimal PTV margin, less than 1 mm, (1 mm in Case 1 and 0 in Case 2) was adopted.
Recently, various reports have focused on VMAT (volumetric modulated arc radiotherapy) [11][12][13] for multiple brain metastases. Clark, et al. [11] reported the feasibility of three non-coplanar arc VMAT in an only three metastasis (10,15, and 20 mm diameter lesions) patient scenario (four cases). They gave no margin to GTV, and the prescription was made as the setting of D (GTV 100%) >100% dose. Reverse Paddick's CI was reported as 1/0.761 and Paddick GI as 4.21-5.22. If the number of lesions is not large, each lesion can be targeted by some part of the micro-multi-leaf collimator without any overlap. Iwai, et al. [12] reported on two plans in a phantom study of a nine-lesion setting, geometrical placement and a 14-tumor clinical scenario. In most of the 14 tumors (0.03-0.71 cu cm) both reversed Paddick's CIs and Paddick's GI were quite large (only graphs are shown). Lau, et al. [13] reported clinical results in 15 patients (total 62 tumors, 2-13 tumors per patient). They showed V (12 Gy) and V (4.5 Gy) of PTV (no margin or 1 mm PTV margin) in the setting of prescription of D95%=100% dose of 20 Gy. V (12Gy) / PTV is quite large, though reverse Paddick CI is good (small). VMAT cannot achieve a good dose distribution if the number of targets is large, such as 10. In VMAT when the number of targets increases, tumor overlap in the collimator leaf direction and an increase in the maximum distance between scattered tumor lesions in the leaf movement direction may occur, and the limitations of each of the machines may make it impossible to cover all of the lesions. ABMP overcame these problems by dividing lesions into two groups, with 'go' arc and 'return' one, when lesions overlap or are separate in the collimator movement direction.
In this study, only planning simulation was investigated in ABMP. In our institution (Aichi Medical University) Varian STx (Varian Medical systems, Tokyo) with ExacTrac system (BrainLAB, Tokyo) is used for SRS. The field of multi-leaf collimator is 40 x 20 cm and the maximum opening width of the collimator is 30 cm. The width of micro multi-leafs varies, namely 2.5 mm in the central portion and 5 mm in the peripheral. A PTV margin of 1 mm is thought to be reasonable in our systems. As the next step, evaluation of ABMP with real dose measurement will be needed.
Conclusions
The conformity and dose gradient with ABMP-SIDCA were as good as those of conventional MIDCA with multiple groups of DCA by each lesion. If the conditions of the LINAC system permit a minimal PTV margin (1 mm or less), ABMP-SIDCA might provide an excellent dose fall-off compatible with GKRS and enable a short treatment time. This study investigated only simulation planning of ABMP. Next, ABMP with real dose measurements will also need to be evaluated.
Additional Information Disclosures
Animal subjects: This study did not involve animal subjects or tissue. | 3,427.2 | 2016-11-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Design, modeling, and manufacturing of high strain composites for space deployable structures
The demand for larger and lighter mechanisms for next-generation space missions necessitates using deployable structures. High-strain fiber polymer composites show considerable promise for such applications due to their exceptional strength-to-weight ratio, manufacturing versatility, packaging efficiency, and capacity for self-deployment using stored strain energy. However, a significant challenge in using composite deployable structures for space applications arises from the unavoidable extended stowage periods before they are deployed into their operational configuration in orbit. During the stowage period, the polymers within the composites experience material degradation due to their inherent viscoelastic and/or plastic properties, causing stress relaxation and accumulation of plastic strains, thereby reducing the deployment capability and resulting in issues related to recovery accuracy. This paper aims to give a state-of-the-art review of recent advances in the design, modeling, and manufacturing of high-strain composites for deployable structures in space applications, emphasizing the long-term stowage effects. This review is intended to initiate discussion of future research to enable efficient, robust, and accurate design of composite deployable structures that account for the enduring challenges posed by long-term stowage effects.
deformation.Particular HSCs made from fiber-reinforced polymers can be configured to possess two or more stable states which extends the applications compared to metals 5 .
A variety of deployable composite structures have been engineered for both compact folding and autonomous deployment.These structures typically undergo three distinct stages throughout their service life, namely, (1) Folding: Before being launched into space, deployable structures are folded into a compact form.The folding process is usually a slow, quasistatic process where strain energy is stored within the structure as it deforms.Essentially, the structure is carefully manipulated into a shape that can fit within the constraints of the launch vehicle.(2) Stowage: During launch and while being transported into orbit, deployable structures are securely held in their folded state by locking mechanisms.The stowage state can last for an extended period, which may range from months to years, depending on the specific requirements of the mission.(3) Deployment: Once in orbit, these structures are released by unlocking the previously engaged locking mechanisms.HSCs demonstrate versatility in deployment, utilizing both quasi-static methods with the motor roll out and dynamic processes triggered by the rapid release of stored strain energy.This dual deployment capability enhances the adaptability and applicability of HSCs in various engineering contexts.
Based on folding methods, deployable composite structures can be categorized into four groups, as shown in Fig. 1: foldable tubes, collapsible and rollable booms, elastic extension lattice, spring-back reflector antenna.The foldable tubes are a typical kind of Deployable Composite Booms (DCBs) created by incorporating cut-out slots in cylindrical tubes, forming tape springs for folding.The foldable tubes are folded by bending at both ends of the structures and thus serve as flexible joints that connect two large panels along their edges.The collapsible and rollable booms represent another category of DCBs designed with specific cross sections, allowing them to be collapsed and rolled up for compact storage and then extended to their full length during deployment.They are usually employed as supporting structures for a variety of large-scale membranes or reflectors in space systems.Indeed, HSC can also be used to manufacture flexible surfaces that can be folded into compact forms along prescribed creases, creating deployable composite structures known as flexible surfaces.Table 1 shows the current research focus and the existing challenges of the deployable composite structures.
To start with, understanding the folding and deployment behaviors of DCBs is essential for ensuring the functionality of space deployable systems and achieving mission success.Yee et al. 6 derived the analytical expressions and numerical models to incorporate the orthotropic properties of carbon fiber-reinforced composites for analyzing the moment-rotation behavior of a tape spring constructed from composite laminates.Mallikarachchi et al. 7,8 conducted extensive numerical and experimental studies to investigate the relationship between quasi-static folding moments and folding angles, as well as the dynamic deployment behaviors for composite tubular foldable hinges.Chen et al. [9][10][11] conducted a study on the flattening process of composite thin-walled lenticular tubes (CTLTs) under compression and tension, comparing experimental, numerical, and analytical results.Additionally, Bai et al. [12][13][14][15][16][17] studied the stress, deformation, and failure behaviors of CTLTs during folding, utilizing geometrically nonlinear finite element models and an analytical model to characterize the flattening and rolling behaviors of CTLTs.Yang et al. 18 employed a multi-objective optimization approach to design a C-cross section thin-walled rollable DCB.Their design process involved six consecutive steps in a full simulation, including flattening, endcompacting, releasing, coiling, holding, and deploying around a hub, all based on a nonlinear explicit dynamics analysis.
Furthermore, once DCBs are deployed, they serve as supporting structures that maintain the operational configuration of large-scale space systems.The analysis of strength, stiffness, and stability in deployed DCBs is crucial to ensure their structural integrity under external forces, preventing deformation, instability, or failure.Fernandez et al. 19,20 investigated various aspects related to folding and deploying behaviors, structural deploying stiffness, shape and ply effects, and fabrication methods for different types of DCBs intended for missions like solar sails and gossamer sail systems.They derived an inextensional analytical model to describe the bending deformation mechanics of collapsible tubular mast (CTM) booms, and explored the impact of varying lamina material, laminate layup, and shell arc geometries between different inner and outer shell segments on the bistability and stiffness properties of CTM.Murphey et al. 21analyzed the basic structural mechanics including deployment stiffness, buckling strength, and packaging constraints of triangular rollable and collapsible (TRAC) booms using both closed-form analytical and finite element approaches.Leclerc et al. 22 performed a study of the nonlinear elastic buckling behavior of TRAC booms under pure bending.Jia et al. 23,24 explored the nonlinear buckling, post-buckling, and collapse behaviors of CTLTs under pure bending, and revealed the influence of cross-sectional geometry on their stiffness properties and critical buckling loads through a parametric study.
It should be noted that most of the existing studies on the folding and deployment mechanics and stiffness and stability analysis of DCBs have adopted the assumption of linear elastic material properties.However, when DCBs are used in real missions they are often subjected to prescribed loads or enforced displacements for long periods of stowage time.An example of this is a compliant composite flexure in a deployable structure which may be held in a folded, or stowed configuration for many months between assembly and ultimate deployment.Over these timescales, it has been observed that many fiber/resin systems exhibit time-dependent effects which usually, although not always, correspond to degradation of behavior in comparison to predictions made assuming elastic properties or shortterm experimental data 25 .This effect caused by the long-term stowage is attributed to the viscoelastic-plastic properties of the polymer matrix 26 .During the long-term stowage period, the DCBs are usually subjected to high-strain deformations, which may result in significant stress relaxation and accumulation of plastic strains.As a result, the stress relaxation dissipates stored strain energy and subsequently diminishes the booms' deployment capability.The plastic strains can affect the structural integrity of DCBs and lead to issues like reduced stiffness and stability.Moreover, the permanent deformation also affects the shape recovery accuracy of DCBs after deployment, potentially causing deviations of the deployed https://doi.org/10.1038/s44172-024-00223-2configurations from the intended configurations.Recognizing that the surface accuracy of space deployable systems, such as antennas and telescopes, significantly impacts their performance, it becomes imperative to incorporate a thorough consideration of the long-term stowage behaviors of DCBs during the design phase.This ensures reliable deployment and the preservation of accurate surface accuracy and overall structural performance.
This paper presents a state-of-the-art review of recent advances in the design, modeling, and manufacturing of HSC for deployable structures in space applications.The key point of this review is firstly to provide a comprehensive and detailed understanding of the shape design and material design of deployable composite structures in order to reduce the folding stress levels while ensure structural stiffness.Then the modeling methods which taken into account the viscoelastic behavior of composite are summarized and the influence of long-term storage is discussed.Finally, the focus is placed on the material selection, manufacturing processes, and functional aspects of deployable composite structure for desired functionalities and shape recovery accuracy.
Design of deployable composite structures
The design of deployable composite structures in the application of spacecraft needs to strike a balance between flexibility and rigidity to ensure successful folding without material failure and the preservation of structural integrity upon deployment.Beyond this critical balance, there are several aspects that should be considered when designing DCBs to minimize the impact of stowage effects and enhance deployment performance.
Geometry design of deployable composite structures
In terms of geometry design, DCBs should be shaped to enable as uniform as possible stress distribution at a minimal stress level during folding and stowing, thus minimizing stress relaxation effects.Avoiding stress concentration is also beneficial in reducing the risk of plastic strain accumulation and material failure.As has been discussed in introduction, to facilitate efficient folding without causing material failure, DCBs are engineered by either embedding cut-outs at the folding region or using a collapsible cross-section for roll-up.In the design of cut-outs for foldable DCBs, size, shape, and topology optimization methods were developed to determine the optimal cut-out design for optimal performance.Mallikarachchi et al. 27,28 proposed a failure criterion for symmetric two-ply plain-weave laminates of carbon fiber-reinforced plastic, and investigated the effect of geometry parameters such as the length, width, and end diameter of the cuts on the failure indices of composite tape-spring hinges, as shown in Fig. 2a.Jin et al. 29 formulated a cut-out shape optimization for the composite tape-spring hinge (CTSH) in which they aimed to concurrently maximize the maximum strain energy stored during the folding process and the maximum bending moment during deployment while imposing failure constraints.The multi-objective optimization problem was solved by integrating data-driven surrogate modeling and shape optimization.Ferraro et al. 30 utilized level-set functions to define a variable number of cut-outs in the cut-out topology optimization of foldable joints, which enables damagefree folding while maximizing the stiffness of the structure.Yang et al. 31 explore the potential of replacing the reed structure with a honeycomb topology, as depicted in Fig. 2b.When compared to traditional spring steel structures, the honeycomb design offers superior mechanical properties per unit mass and can effectively substitute the reed structure.Rakow et al. 32 introduced the Slit-LockTM boom, designed to provide substantial shear stiffness by incorporating interlocking edge features as the boom unfurls.This relatively new boom technology, equipped with teeth that engage and lock the seam during deployment, enhances stiffness.The Slit-LockTM boom effectively carries bending loads, performing similarly to a closedsection tube, as long as the ends are shear-fixed.Holes can be cut into the entire thin-wall shell.
On the other hand, the rollable DCB is usually composed of thin-walled shells with varying curvatures that are bonded along their edges.The selection of the cross-sectional shape for these DCBs depends upon the technical specifications of the spacecraft.Common shapes include lenticular, triangular, tubular, C-shaped, N-shaped, and more, as illustrated in Fig. 2c. Lee et al. 33,34 introduced a two-shelled DCB, where two symmetric or a Deployable Composite Boom (DCB) is cut slots near the fold creases 27,28 .b The honeycomb topology on the reed structures 31 .c The most common crosssectional shapes of DCB.d Two asymmetric omegashaped shells forms a closed-section 33,34 .e Four-cell lenticular combined cross-sections 37 .f Eight C-shape combined sections 38 . https://doi.org/10.1038/s44172-024-00223-2
Review article
asymmetric omega-shaped shells form a closed-section, resulting in high stiffness and dimensional stability, as depicted in Fig. 2d.They developed a two-parameter inextensional analytical model to identify laminates and shell geometries inducing bistability and conducted a parametric analysis to determine optimal configurations that maximize stiffness while retaining bistability.Yang et al. 35,36 studied triangle-shaped and N-shaped DCBs.Parametric analyses were conducted, and all design variables were found to significantly influence the wrapping peak moment, maximum stress, and deploying fundamental frequency.Notably, the maximum stress showed higher sensitivity to changes in the central radius.Yang et al. 37 also proposed a new four-cell lenticular honeycomb DCB, as shown in Fig. 2e.Fatigue cracks caused by stress concentration are avoided by setting maximum principal stress to a specific constraint.Cao et al. 38 introduced a novel combined cross-sectional shape, consisting of eight C-shaped thin-walled shells, as depicted in Fig. 2f.To minimize the maximum stress in the stowage configuration, a gap is maintained between adjacent thin-walled shells.Sharma et al. 39 investigated slit cross-section overlapped section DCBs and accurately predicted the transition zone length and working stress.To enhance strain energy during storage and the deployment force, a smaller cross-section radius for the DCB is recommended.Furuya et al. 40 introduced the concept of corrugated closed-section booms to enhance deployment torque, shape restoration performance, and storage efficiency.Compared to traditional DCBs, the adoption of cut-outs design significantly reduces the folding stress level during folding and stowing, thereby mitigating the risk of stress release effects.However, this design also introduces new challenges such as reducing structural strength in the opening part, and increasing complexity in the manufacturing process.Additionally, it may lead to new failure modes such as local buckling and interlayer shear failure of composite structures.To address these issues, a novel cross-sectional shape design is proposed for these DCBs which enhances their structural stiffness and deployable moment without increasing the folding stress level.Nevertheless, this design poses difficulties in connection, fixing, and manufacturing while considering the surge in total mass of the DCBs.Therefore, it is necessary to multi-objective optimize the topology of the DCBs under various constraints in the future.
Design of composite material
The selection of composite materials and the configuration of their layup are crucial.The choice of materials, including fiber type, resin, and stacking sequence, not only significantly influences structural flexibility and rigidity but also impacts stress relaxation effects.The DCBs are prepared with carbon fiber/epoxy resin composites conventionally.Augello et al. 41 analyzed the effect of different materials on the folding of ultrathin tape-spring hinges.The materials include isotropic metallic with hardened steel tape spring and unidirectional T300 graphite fiber/epoxy prepreg.Bowen et al. 42 presents an extensive study of the minimization of the Brazier moment to enhance the design of orthotropic cylindrical flexible hinges.Su et al. 43 established the progressive damage model of composite cylindrical thinwalled hinges.To solve the problem of localized folds of bistable composite tubular booms due to local buckling phenomena when the diameter of the coil increased, Fernandez 44 proposed an improved scheme of a variable angle change over the DCB length.Then, the bending properties of the shell structure are changed at every section, yielding DCBs to naturally coil into a stable spiral as is imposed in reality.Recently, smart composites and hightemperature resistant materials have been applied to the composite deployable booms.An et al. 45 proposed self-deployable systems by the synergic combination of shape memory alloy-enabled smart soft composites with kirigami/origami reflectors.Roh et al. 46 proposed a DCB using woven fabric fibers and shape memory composites.The viscoelastic timedependent unfolding behavior including structural nonlinearity of thinwalled shape memory composite booms are experimentally and numerically investigated.Liu et al. 47 studied the mechanical properties of shape memory polymer composites-based DCBs.Compared with traditional DCBs, shape memory polymer composites-based DCBs exhibit the advantages of controllability and stability in the deploying process and still work well after folding 30 deformation cycles.DCBs are prepared with epoxy resin composites conventionally, which are prone to undergo glass transition at high temperatures.Zhang et al. 48proposed a novel hightemperature resistant carbon fiber/bismaleimide resin composite shell and its bistable characteristics are studied in megathermal environment.The effect of elevated temperatures and different ply angles were investigated on curvatures and snap behaviors of carbon fiber/bismaleimide resin composite bistable shells.The material design method for the DCBs based on carbon fiber/epoxy resin composite is highly refined.Compared to traditional composite materials, some novel composite material introduces exceptional properties to the DCBs, thereby significantly expanding their application potential.However, substantial research efforts are necessary prior to practical application, including precise analysis, optimal design, and experimental validations.The multi-physics coupling analysis of various materials is very important to investigate the mechanical characteristics of the DCBs.Furthermore, a layer optimization model for heterogeneous composite materials also remains an imperative task for future studies.
The influence of layer orientation on creep behavior mainly has two aspects.Firstly, the shear strain of the single layer is related to the orientation angle under specific deformation of the total laminates.Secondly, the stiffness of the laminates has a great correlation with the layer orientation, so the deformation of the laminates is related to the layout under specific external force.Mao et al. 49 monitored the deployment process of the fiberreinforced tape springs with laminate layouts of [−45°, 45°], [−45°, 0°, 45°] and [−45°, 0°, 90°, 45°].The results showed that the four-layer tape springs were self-deployable after more than 6 months of stowage, while the twoand three-layer tape springs lost their self-deployability after a few days of stowage.Singh et al. 50,51 investigated the creep behavior of the woven glass fiber-reinforced polymer laminates with layouts of The results showed that as the fiber off-axis angle increased, the specific creep increased.Kang et al. 52 studied the viscoelastic properties of carbon fiber-reinforced polymer (CFRP) and concluded that the laminates with the layout of [0°, 90°] 4 showed lower relaxation than [45°, −45°] 4 .The studies mentioned above show that the viscoelastic behavior is related to the layer orientation, and it's feasible to reduce the influence by changing the laminate layout.However, the relationship between laminate layout and creep behavior has not been revealed yet, which is an important research direction for the future.
Modeling stowage behavior of HSCs
Multiscale viscoelastic modeling is essential for gaining a deep and comprehensive understanding of the stowage effects in deployable composite structures.By examining the material behavior and structural response at various scales, from the microscale of individual material constituents to the macroscale of the complete structure, engineers can make more informed decisions about material selection, design optimization, and mitigation strategies to address the challenges associated with stowage effects.
Multiscale modeling of viscoelastic composites
In terms of the viscoelastic mechanics of composite materials, multiscale modeling techniques are employed to simulate the overall mechanical behavior of laminated composite structures.The modeling strategy can be summarized as a two-step multiscale homogenization method [53][54][55][56] , as depicted in Fig. 3.The first homogenization step involves creating a microscale representative volume element (RVE) that represents the microstructure of the unidirectional composites.This step determines the effective material properties of the composite through homogenization of the fiber and matrix properties in the yarn microstructure.The fiber is assumed to be a linear elastic material, either transversely isotropic or isotropic, characterized by its Young's Modulus and Poisson ratio.The matrix is assumed as an isotropic, and linear viscoelastic material, which can be described by a generalized Maxwell model using the Prony series 26,57 .The average behavior of the unidirectional composites is then computed as a homogeneous, linear viscoelastic, and orthotropic material.The average strain-stress behavior of the unidirectional lamina can be described using the following equation.
where t and T denote stowage time and temperature, respectively.Note that the stowage temperature has a significant effect on the time-dependent behavior of a viscoelastic material.Specifically, a long-time relaxation process of a material at low temperatures is equivalent to a short-time relaxation process at high temperatures.The temperature dependence of the relaxation modulus is correlated to time through the time-temperature superposition principle, in which the relaxation times at two temperatures are related by a shift factor.[C(t; T)] is the stiffness matrix of the unidirectional lamina, each entry of which is essentially time-and temperature-dependent.Eq. ( 1) indicates that the overall stress level of each composite layer σðtÞ is measured as a function of storage time and temperature when the lamina is subjected to a constant deformation ε cst .The second homogenization step utilizes the obtained unidirectional composite (yarn) properties to analyze the behavior of the woven composite by constructing a mesoscale RVE, which homogenizes the yarn and matrix properties in the mesostructure of the woven composite and yields the relaxation ABD matrix as a result.The ABD matrix represents the overall stiffness of the laminate, which is used to relate the deformation strains and curvatures of the middle surface of a laminate to the resultant forces and moments exerted on the laminate.
where the sub-matrices [A] and [D] represent the extensional and bending stiffness properties of the laminate, respectively, and [B] represents the coupling between the in-plane and out-of-plane loads and deformations.The ABD matrix is not constant but instead varies with time and temperature for viscoelastic composite laminates.This means that When a composite laminated structure is subjected to constant deformation characterized by applied strains (ε cst x , ε cst y , γ cst xy ) and curvatures (κ cst x , κ cst y , κ cst xy ) at a given temperature T and for a specific stowage period t, the resulting forces and moments exerted on the structure will depend on the stowage time and temperature.
The core of the multiscale modeling approach is indeed to calculate the stowage time-and temperature-dependent stiffness matrices, such as the lamina stiffness matrix [C(t; T)] and the ABD matrix for laminate stiffness.This can be achieved by implementing the homogenization approach within the standard finite element method (FEM).For the calculation of the lamina relaxation stiffness [C(t; T)], a microscale RVE is created with periodic boundary conditions, and six in-plane specific loading conditions, including three axial loadings and three shear loadings, should be performed subsequently.Differently, the ABD matrix calculation requires the implementation of periodic boundary conditions that enforce midplane strains and out-of-plane curvatures of the mesoscale RVE as a homogenized thin Kirchhoff plate.It aims to capture the overall deformation of the woven composite when it is subjected to not only in-plane extensional loads but also out-of-plane bending moments.In summary, the calculation results of the micro-RVE analysis are the time-and temperature-dependent stiffness matrix [C(t; T)] of the unidirectional composite (yarn), while the result yielded by the meso-RVE analysis is the time-and temperature-dependent ABD matrix for the composite laminates.Finally, the ABD matrix can be used to define the constitutive behavior of composite laminates in the standard finite element analysis, facilitating analysis of the viscoelastic behavior of any thin-walled composite laminates.It should be noted that, for unidirectional laminates, the two-step homogenization procedure can be simplified by avoiding the calculation of the ABD matrix, since the laminate code of unidirectional laminates can be defined via built-in functions in commercial FEM packages.
Viscoelastic behavior of DCBs
Following the above-mentioned modeling strategy, many studies have been reported to understand the viscoelastic behavior of composite deployable structures during long-term stowage.A summary of these studies is given in Table 2. Long et al. 58 derived and implemented two solution techniques, i.e., quasi-elastic and direct integration, into an anisotropic viscoelastic shell formulation, and captured nonuniform deformation in the bending of thinply HSCs.Hamillage et al. 59 investigated the bending relaxation behavior of thin-ply composites and verified the theoretical model predictions with experiments by performing column bending tests.Kwok et al. 53 analyzed the deployment performance of plain-weave composite tape-spring shells that are deployed after being held folded for a given period of time.Zhang et al. 60 analyzed the bistable behavior of C-shaped composite shells with viscoelastic material properties, and reported that the principal curvature of the shell's second stable state increases as the applied temperature and relaxation time increase.Brinkmeyer et al. 61 studied the stowage effects on the deployment behavior of storable bistable tape springs, and found that the deployment time increases predictably with stowage time and temperature, and for cases where stress relaxation is excessive, the structure is shown to lose its ability to deploy autonomously.Fernandes et al. 62 proposed a numerical approach that simulates the viscoelastic relaxation of composite tape-spring hinges.An et al. 63 developed a user-friendly RVE analysis plugin tool in Abaqus/CAE to rapidly estimate the effective orthotropic viscoelastic properties of unidirectional composites by taking as input the microstructure geometry as well as the known properties of fibers and matrix.The tool was utilized to simulate the influence of modulus relaxation on the deployment dynamics of a composite tape-spring hinge.Gomez-Delrio and Kwok derived an analytical solution for the recovery of a composite plate after stowage and studied the stowage and recovery of a deployable lenticular boom 64 .Deng et al. 65 simulated the viscoelastic strain energy relaxation in the long-term stowage periods of CTLTs made of unidirectional laminates.Guo et al. 66 built a gravity-unloading system and performed dynamic deployment tests on the deployable booms before and after stowing them for 6 and 10.5 months.The experimental results provide verification for the multiscale simulation methods and demonstrate that the deployment time increases with the increase in stowage time.Most of the existing studies have analyzed the viscoelastic behavior of HSC deployable structures, however, there is a gap in exploring the viscoplastic deformation of these structures.The viscoplastic deformation involves the permanent deformation of the composite materials under the influence of both timedependent and plastic deformation mechanisms.
Effect of boom-hub interface
The composite deployable booms need to be connected to the central deployment devices such as the hub, and the boom-hub interface provides the root boundary conditions of the boom.There are three main fixation conditions 67 : (1) The root cross section of the boom is fully fixed.This method requires a long length from the fixed end to flatten and wrap the boom to avoid damage.(2) The root cross section of the boom is partially fixed.In this case, the length needed for cross section shape transition is smaller but this method also lowers the bending stiffness of boom in the fully deployed condition.(3) The root cross section of the boom is totally flattened so that the boom can be wrapped around the hub in a very compact manner.But this method weakens the load-carrying capacity of boom significantly.So the interface between the boom and the hub must be comprehensively investigated and designed to achieve efficient storage and avoid failure.
Okuizumi 67 designed a novel metallic spring root hinge as the interface of composite boom.The root hinge is inserted between boom and hub, the end connected to the hub is fully fixed while other end connected to the boom can be collapsed with boom.This concept can improve storage efficiency and ensure the stiffness of boom after deployment.Pellegrino 68 proposed a new interface for CTLT booms.By removing material from the fix end of CTLT boom to achieve the small radius folding and avoid curvature localization which will cause the failure of composites.
Mallikarachchi 69 applied four different boundary conditions in the numerical simulations of the dynamic deployment of CTSH to investigate the sensitivity of the response to the root boundary conditions.The result shows that the boundary condition significantly affect the results of the simulations, which also illustrates the necessity of experiment to assist the appropriate selection of boundary conditions in simulation.But the onground test is still challenged by the design of clamp conditions and gravity compensation system and the test results also affected by the friction, air resistance and other factors.
Time-dependent failure and optimization
The investigation of the failure mechanisms of HSC deployable structures is essential to ensure that the structure's operational performance is as intended.In order to predict the failure behavior of composite laminates, several experimentally based failure criteria have been developed.Mallikarachchi et al. 7 proposed a method to identify potential damage areas by searching for the largest midplane strains and comparing these peaks with material damage values to estimate the safety of the structure.Mallikarachchi and Pellegrino 27 presented a failure criterion suitable for two-ply carbon fiber laminates considering three different loading cases: failure due to in-plane, bending, and combined in-plane and bending loads.The failure behavior predicted by this criterion is in high agreement with experiments and this criterion has been successfully applied to topology and shape optimization problem of composite deployment structures 28,30 .
It's important to note that when dealing with long-term stowage of H materials, not only does the stiffness of the material degrade, but the strength of the composite laminates can also be influenced 70 .Ubamanyu et al. 71,72 proposed a Flattening to Rupture test to effectively load composite coupons under long-term bending, enabling the measurement of time-dependent rupture and identification of the underlying time-dependent failure mechanisms.Numerical simulation methods were also developed to understand the sequence of rupture events and the parameters that affect the time-dependent rupture.Furthermore, the accumulation of plastic deformation in laminated composite structures during long-term stowage is indeed a significant challenge when it comes to modeling and predicting the behavior of these materials.Plastic deformation, which is permanent and non-recoverable deformation in the material, can occur in composite laminates for various reasons, and it can have a critical impact on the structural integrity of these components over time.Zhang et al. 73 developed a nonlinear viscoelastic-viscoplastic constitutive model for epoxy polymers.Matsuda et al. 74 analyzed in-plane elastic-viscoplastic deformation of carbon fiber/epoxy laminates using a homogenization theory of nonlinear timedependent composites.Megnis et al. 75,76 useds Schapery's nonlinear viscoelastic, viscoplastic material model to characterize the inelastic response of glass fiber epoxy composites.In addition, in the context of optimization design of composite depolyable structures, most of existing optimization problems were constructed without considering the material degration during the stowage effects 29,30,35,77 .The long-term stowage effects of the HSC deployable structures should be accounted for in the future work to ensure the recovery accuracy of space mechanisms such as antennas. https://doi.org/10.1038/s44172-024-00223-2
Manufacturing of composite deployable structures
Resin, as the matrix material in composite materials, exhibits various characteristics during its curing process, such as thermal expansion coefficient, curing shrinkage, and mechanical properties, which directly affect the generation and distribution of residual thermal stress in the composite material.These residual stresses can lead to unpredictable shape changes and performance degradation, or even material failure during application.Therefore, it has become an urgent and important issue to enhance the mechanical performance and shape recovery precision of composite deployable structures by considering resin properties, thermal stress control, and manufacturing processes.
Enhancing resin performance and controlling thermal stress Deployable composite structures for space applications have specific resin performance requirements.These requirements aim to ensure that composite deployable structures can meet the unique demands and operating conditions in space.The resin performance requirements can be summarized as follows: (1) Mechanical properties: The resin should possess sufficient strength, stiffness to withstand stresses and loads in the space environment.Additionally, the resin should exhibit good fatigue and impact resistance to handle loading conditions during long-term usage.
Review article
(2) Thermal properties: Extreme temperature variations exist in the space environment, ranging between −80 °C and 100 °C.Therefore, the resin needs to exhibit excellent thermal stability and high-temperature resistance to maintain structural stability and strength.(3) Viscoelastic characteristics: Viscoelastic materials exhibit deformation and stress relaxation, also negatively impact the deployable behavior and shape recovery accuracy of composite deployable structures.Consequently, viscoelasticity should require appropriate correction and control.(4) Dimensional stability: Composite deployable structures for space applications need to possess excellent dimensional stability.The resin should have a low coefficient of thermal expansion and excellent linear thermal shrinkage performance to ensure stability and precision of the structure during temperature variations.(5) Radiation resistance: Radiation, such as cosmic rays and ultraviolet radiation, exists in the space environment.The resin needs to possess appropriate radiation resistance to prevent damage to the structure and resin performance.( 6) Processability: The processability of the resin are crucial for manufacturing composite deployable structures for space applications.The resin should exhibit good flowability and moldability to facilitate the fabrication and assembly of composite materials, and consider the curing characteristics and controllability of the resin comprehensively.
In conclusion, the resin performance requirements for composite deployable structures are stringent.Therefore, it is necessary to comprehensively consider these requirements during resin design and selection, and conduct thorough evaluation and testing to ensure that the resin can meet the specific demands of space applications and guarantee the reliability and safety of composite deployable structures.
In additionally, residual thermal stresses and deformations during the processing and shaping of composite structures are not only intricately connected to the performance of the resin, but also are generated by mismatches of thermal expansion coefficients of component materials, material anisotropy, structural forms, lay-up methods, chemical shrinkage, mismatched coefficients of thermal expansion between molds and components, mold-component interfaces, allowance for defects (such as voids and inclusions), and curing processes.Many researchers have extensively studied these issues using experiments, analytical methods, and numerical simulations.
Experimental testing is the most direct and accurate method for studying the residual thermal stresses and deformations generated during the curing process of composite laminates.Bogetti and Gillepie 78 investigated the curing deformation mechanism of composite laminates through experiments and found that temperature, degree of cure, and resin distribution in the thickness direction have significant influence on residual thermal stresses and deformations.White and Hahn 79 found that reducing the curing temperature while increasing the curing time, or conducting the cooling process at a lower rate, can effectively reduce the post-curing thermal deformation.Daniel and Liber 80 used the measured strains to calculate the residual thermal stresses in the laminates and further investigated the influence of lay-up schemes on residual thermal stresses, validating that residual stresses in the laminates can lead to transverse cracking in individual layers.Additionally, low-temperature curing and staged curing have also been shown to effectively reduce post-curing residual thermal stresses and thermal deformations 81,82 .However, experimental testing has limitations such as high costs, lengthy processes, and dependence on experimental conditions.Numerical simulations based on finite element methods can effectively address these limitations.Bogetti et al. 83 studied the curing process of laminates with arbitrary cross-sectional shapes and boundary conditions using a two-dimensional finite element model.Satish et al. 84 introduced shear layers in their finite element model to simulate the effect of molds on composite materials, enabling accurate prediction of thermal deformation resulting from mold-material interactions.Some studies have investigated the factors affecting the curing deformation in asymmetric laminates.The results showed that thermal deformation in asymmetric laminates is closely related to the dimensions of the laminates, and when the length and width increase to a certain extent, the stable shape after curing changes from saddle-shaped to cylindrical shell-shaped.The curing deformations increase with an increase in curing temperature, while the cooling rate has little effect on thermal deformation [85][86][87] .Compared to experimental testing and numerical models, theoretical modeling offers lower computational complexity and faster solution speeds, allowing preliminary results to be obtained in a shorter period of time.Hahn and Pagano 88 proposed an analytical model based on classical laminate theory for estimating residual thermal stresses in laminates.Xiong et al. 89 developed a micromechanical theoretical model to predict the residual thermal stresses in plain-weave composite laminates and provided an analytical expression for estimating thermal stresses.
Summarily, there are numerous factors that influence thermal deformation and thermal stress, including component material properties, material anisotropy, structural form, lay-up scheme, interaction between molds and components, curing process and parameters, chemical shrinkage, etc. 90,91 .During the preparation of deployable composite structures, residual stress is mainly caused by internal and external stress sources.Among them, the main internal stress source is the shrinkage of the resin during curing and the mismatch of thermal expansion coefficients, while factors such as specimen shape, specimen-mold interactions, and process conditions constitute the external stress sources.Due to the limitations of material inherent properties, it is impossible to completely eliminate the existence of residual stress.Therefore, the key lies in finding effective ways to minimize or mitigate the impact of residual stress during the preparation process.Specific methods include: 1) optimizing the curing process (such as adjusting curing time, temperature, and curing degree gradient); 2) introducing expandable materials; 3) improving interface properties; 4) using new curing technologies such as electron beam curing, UV curing, and laser curing; 5) applying external prestress.These measures belong to external intervention methods, which theoretically can reduce the impact of internal residual stress to a certain extent, but still face many technical challenges in practical operation.Among them, optimizing the curing process is a relatively effective method, but it is limited by various factors, so the effect is not significant.Due to the dispersion, anisotropy, and resin characteristics of composite materials, it is extremely difficult to find monomers that are compatible with them, leading to difficulties in implementing the method of filling foreign materials.In addition, the interface properties of composite materials involve the microscale and are difficult to intervene from a macro perspective.New curing technologies require extremely high requirements for process, material compatibility, and equipment precision, often requiring high cost investment.Recent research has shown that applying prestress may effectively reduce residual thermal stress 92 , but this method is still in its infancy and its mechanism is not yet clear.In theory, applying prestress can not only induce multistable characteristics of deployable composite structures, but also reduce residual stress in composite materials.However, how to find a balance point in deployable composite structures that does not interfere with each other still needs further research.This process involves multi-disciplinary fields such as multi-physical field theory, composite mechanics, and material forming processes, and still requires a large amount of theoretical, numerical, and experimental research support to find effective solutions for controlling thermal deformation and thermal stress.
Fabrication process
Compared to other composite deployable structures, the lenticular composite deployable boom poses greater complexity and challenges in the manufacturing process due to the unique structural form.The preparation process is a crucial factor in achieving the folding and deploying functions and controlling the shape accuracy.The lenticular composite deployable boom, being thin-walled structures, require considerations from various aspects such as process cost, feasibility, and quality precision.Currently, vacuum bag or autoclave methods are widely used for manufacturing https://doi.org/10.1038/s44172-024-00223-2lenticular composite deployable boom 12,13,[93][94][95][96][97][98][99] .In this section, detailed introductions of the manufacturing process by Bai and his colleagues [78][79][80][81][82] are provided as follows: (1) The mold, prepreg, and thermoplastic film for molding were prepared, as shown in Fig. 4a.(2) The prepreg was cut and laid out according to the scheme.Air trapped between the layers was minimized during the laying process, as shown in Fig. 4b.(3) Once the prepreg was laid, auxiliary materials were positioned on top of it, followed by the encasement of the vacuum bag using a sealing strip, as depicted in Fig. 4c.(4) The vacuum-bagged mold was placed into a baking box.During the cooling process, the pressure should be maintained until the temperature of the part was lower than 60 °C, as shown in Fig. 4d.(5) After full curing, one half of lenticular composite deployable boom from was removed from the mold, as shown in Fig. 4e.The bonding edges of one half of lenticular composite deployable boom were trimmed and polished.Subsequently, adhesive was applied to one of the half molds, and then the two halves were pressed together, lastly, the lenticular composite deployable boom specimen was obtained, as shown in Fig. 4f-h.
The vacuum bag method offers advantages like cost-effectiveness, feasibility, high-precision, and broad applicability, making it a practical approach for producing composite deployable structures.However, it faces challenges when fabricating ultra-long booms (≥20 m) due to constraints related to space, molds, and equipment.Advanced pultrusion (ADP) technique enables automated production of composite deployable structures with unlimited continuous length, providing another technical means for manufacturing ultra-long lenticular composite deployable boom.The specific manufacturing process involves using prepreg material to produce composite deployable structures with specific cross-sectional shapes through steps such as preforming, hot pressing, post-curing, and cutting.Based on the ADP technique, Zhang et al. 100 fabricated a 75 m lenticular composite deployable boom using the continuous curing process, achieving continuous curing of one half of boom and continuous bonding of the boom, as shown in Fig. 5a and b.
The ADP technique offers high automation and versatility, making it suitable for ultra-long composite deployable structures.However, it may have slightly lower precision compared to the vacuum bag method.The vacuum bag method is well-suited for structures with complex crosssectional shapes and less stringent dimensional requirements, but it demands careful selection of resin and adhesive, along with precise control over the curing process.
The manufacturing process of composite deployable structures has the complexity of multi-physical field coupling, in which the interaction between fibers and matrix has an important impact on the quality and performance of the finished product.This mainly involves the resin flow, crystallization, compaction process, and the properties of the interface layer.During the curing process, as the temperature of the outer surface of the component changes, the internal resin undergoes volume shrinkage under the combined effect of thermal effects and polymerization crosslinking reactions.In addition, there is a significant difference in the thermal expansion coefficients between fibers and resins, which may cause resin to undergo extrusion or relaxation around the fibers and flow under external forces.This uneven flow may result in uneven distribution of fiber volume fractions, local resin enrichment or depletion, and weakened interfacial adhesion between fibers.The compaction process of composite materials is related to the resin infiltration flow, and its compaction behavior and extent affect the uniformity of resin flow.At the same time, the fiber skeleton and material properties also affect the compaction degree, resin viscosity, and curing effect.In summary, the curing shrinkage and thermal expansion coefficient mismatch between fibers and matrix are the inherent factors that cause thermal deformation and thermal stress in composite materials.The more severe their effects, the more prone the finished product is to surface precision issues such as porosity and wrinkling, which seriously affect its mechanical properties such as strength, stiffness, and durability.It may even lead to structural failure issues such as delamination and cracking, seriously threatening product reliability and safety 101,102 .Therefore, in-depth understanding of the interaction mechanism between fibers and matrix is key to reducing thermal deformation and thermal stress, while also providing important theoretical guidance for improving the dimensional precision, surface quality, and mechanical properties of composite products.
This section provides a review of the preparation process of deployable composite structures, deeply analyzes the internal and external factors that affect their forming quality, mechanical properties, and shape recovery precision.Through comprehensive analysis, it summarizes the current problems and challenges faced by deployable composite structures in the curing process.According to the authors' knowledge, there is currently no other literature that comprehensively summarizes the key elements, considerations, and technical difficulties in the design process of deployable composite structures from a production and preparation perspective.
Space applications of composite deployable structures
Space applications Figure 6 and Table 3 show the commonly used composite deployable structures and their space applications.Examples include solar arrays, antennas, booms, radar reflectors, solar sails, deployable radiators, and boom-mounted instruments.Roll Out Solar Array (ROSA) is designed to generate power for spacecraft, particularly satellites and other missions in space.It is known for its compact, lightweight, and roll-out design, which allows for efficient storage during launch and easy deployment in space 103 .Oxford Space System (OSS) has developed an X-band wrapped-rib antenna, with a 2.7 m-diameter parabolic reflector supported by 48 CFRP CTMs 104 .At the test of the primary structure, the derivation from the ideal shape has not changed after the first stowage test.Surrey Satellite Technology Ltd (SSTL) and OSS have confirmed to build and launch a wrapped-rib antenna with a 3m-diameter reflector.The antenna has successfully completed ground-based tests and is now ready to demonstrate its performance in orbit 105 .The surface accuracy of the reflector is strongly related to the bending stiffness of the ribs 106 .HSC structures can be used as not only the support frames, but also the deployable reflector.Tan et al. 107 developed a 4.6m-diameter stiffened spring-back antenna with deployable reflector.Soykasap et al. 108 did the optimization study of Tan's work and proposed a 6m-diameter thin shell spring-back reflector antenna, with the reflector made of CFRP.The analyses showed that the antenna met the specifications required for Ku-band operation.HSC can also be used in foldable hinges.The European Space Agency (ESA) launched a foldable antenna, the Mars Advanced Radar for Subsurface and Ionospheric Sounding antenna in 2003 109 .The main deployable members of the antenna were three foldable tube hinges made of a Kevlar and glass fiber composite material.Because of viscoelastic behavior of the material and complex space environmental conditions, one of the hinges had not completely locked into space 110 , and the deployment was after a year of delay.
Flexible membrane deployable antennas are mainly in the form of inflatable parabolic type, electrostatically molded deployable antennas, and supple film tensioned antennas.The inflatable and flexible tensioning types are the two types with more research.The development of inflatable antenna has gone through a long and tortuous development path, and IN-STEP inflatable antenna experiment (IAE) 111 in 1996 undoubtedly pushed the inflatable space antenna technology to a milestone new peak.The inflatable antenna in the test mainly consists of an inflatable reflector combination device and a ring/limb support structure, and the reflector comprises 62 aluminum-plated, triangular polyester film diaphragms about 7m thick.The ring/limb structure is made of neoprene-coated Kelvar, and the ring supports the edge of the reflector assembly.Flexible tensioned membrane deployable antenna refers to a new type of sizeable space antenna that integrates and innovates flexible electronics, thin-film material, flexible structure, and flexible unfolding technology, characterized by lightweight, high spreading ratio, high gain, and beam flexibility.The reflective surface of a membrane deployable antenna usually consists of a thin-film composite material, such as polyimide, which can realize the advantages of a low- density antenna structure and a high spreading ratio.Flexible tensioned membrane deployable antennas come in two forms: reflective-array antennas and direct-radiation array antennas.A reflector array flexible membrane antenna is a passive antenna; the light film structure forms the reflecting surface, and the feed is placed outside the reflecting surface.A typical space application is Radio Frequency Risk Reduction Deployment Demonstration (R3D2) supported by Defense Advanced Research Projects Agency (DARPA) 112,113 .The phased-array antenna is active; the reflecting surface is formed by a deployable thin-film structure, and the Transmitter and Receiver(T/R) module is integrated into the membrane structure to realize the function of the antenna.Typical space applications are, for example, the 40 m 2 deployable synthetic aperture radar (SAR) antenna and solar sails designed by ESA and German Aerospace Center (DLR) [114][115][116] .The composite material was used in the membrane's lightweight design, and a CFRP boom was developed.Japan Aerospace Exploration Agency (JAXA) researched the IKROS composite solar sails and validated them in orbit in the 2010s 117,118 .Also, the Shanghai Institute of Aerospace System Engineering achieved the Pujiang 2-membrane composite antenna flighted test, and the China Academy of Space Technology (CAST) designed membrane antennae and spring-back antennae, achieving the ground test 119,120 .
Recent advances in space antenna in China Specially, the application of composite materials in space-borne antennas is worth introducing in detail.The space-borne antenna is the space vehicle's "eyes" and "ear", used to receive the radio frequency signal of the launched satellite.It is one of the most critical core products of the spacecraft.Due to the space environment being exceptionally harsh, the antenna must overcome the carrier launch's vibration loads, space high and low temperatures (−200 °C-150 °C), space zero gravity, and other environments.The development of antenna technology in antenna material also put forward higher requirements.Composites can realize the comprehensive and high-index performance requirements that are difficult to meet by nano-materials, so they become the inevitable trend of material development.Carbon fiber composites, for example, have the following merits: high specific modulus (5 times higher than steel and aluminum alloys); high specific strength (3 times higher than steel and aluminum alloys); lightweight (density is half of aluminum alloy, one fifth of steel); small coefficient of thermal rise (up to 10 −6 or less); high-temperature performance and stability; excellent corrosion resistance and radiation performance.Composites are widely used in mesh and solid surface antenna.
Space mesh antenna
Composites, characterized by light weight, good thermal stability and high specific stiffness, are widely used in mesh antennas, accounting for about 70% or more of the number of mesh antenna parts.
The umbrella antenna of Chang'e-4 (China's Lunar Exploration Project, CLEP) relay satellite is currently the farthest mesh antenna flying in the international community, and the antenna works stably in the deep space environment.Chang'e-4's antenna is shown in Fig. 7. Composites are used in design of antenna's base.The carbon fiber base's stiffness and thermal deformation must meet the following requirements: 1) Stiffness: axial: ≥20N μm −1 ; radial: ≥50N μm −1 ; 2) Thermal deformation: axial thermal deformation ≤70 μm; radial thermal deformation ≤60 μm.
The wrapped-rib antenna has the advantages of a high stowage ratio, simple structure, lightweight, etc. Usually, it is necessary to wrap the flexible antenna rib to the center hub to realize the stowage and then realize the selfexpansion to obtain the parabolic profile by releasing the deformation energy after launching into the orbit.With the development of composite materials, a CFRP composite structure is applied to the flexible rib design of the wrapped-rib antenna.CFRP has the advantages of high specific strength, specific rigidity, creep resistance, low coefficient of thermal expansion, low thermal conductivity, high specific heat capacity, resistance to thermal shock, thermal abrasion.Space structures, made of the composites, have lighter mass, higher modulus and easier forming process.
For the development of high-throughput satellites, high-precision Kaband umbrella antennas are more competitive.The composite rib structure is adopted to achieve the goal of a lightweight antenna.At the meantime, membrane composites are specific for developing new lightweight, highprecision umbrella antennas with preformed, stress-free, and highly stable properties.Space solid surface antenna High-precision solid surface reflectors require high pointing accuracy, structural shape, and positional stability with good stability in this harsh space environment.Choosing suitable antenna reflector surface material and processing technology is necessary to solve the ultra-high profile accuracy problem.Carbon fiber composites are used for reflective surfaces and backbone on solid surface antennas.Carbon fiber composites have the merits of being lightweight, having near-zero expansion, high specific stiffness, quasi-isotropic, insensitive to moisture, and good processing performance.The solid surface reflectors of carbon fiber composites can achieve surface accuracy of up to 10 μm.Commonly used composite structural forms include honeycomb sandwich structures, laminated structures, etc.With the development of high/ultra-high flux satellites, the antenna operating band crosses from C and KU to Ka and even Q/V bands, submillimeter waves, and terahertz bands.For high-precision composite solid surface antennas, such as all-composite grid structure terahertz antenna.Using a sub-block splicing-paved technique can improve the accuracy of the paving angle.On the other hand, it can effectively balance the thermal residual stress in the reflector surface and ensure the reflector surface's overall uniformity and the profiles' precision.At the same time, the integrated molding design of the reflector and reinforcing backbone structure is adopted, which solves the problems of mismatch of the reflector's thermal expansion coefficient as an integration, poor thermal stability, and difficulty in maintaining the precision of the profile caused by the traditional adhesive method.
Conclusion and outlook
This article highlighted several critical factors, including composite configuration, viscoelastic modeling, residual thermal stress, etc., that affect the shape recovery accuracy of HSC used in deployable structures in design, modeling, and manufacturing.The next step in development for HSC for space deployable structures can be concluded as follows.
Composites design
Communications, Earth observation, data relay satellites, deep space probes, and space microwave wireless energy transmission applications have put forward ultra-large or even extensive antenna structure size requirements, the size of which can be up to 100-meter class.For larger apertures and higher profile accuracy, the requirements for composites are high specific stiffness, zero expansion coefficient, and process consistency.The field of DCBs has gained significant advancements and promising applications in recent years.This paper provides a comprehensive summary and commentary on the state-of-the-art in geometry design and composite material design for DCBs.The cut-outs design method and cross-sectional shape design method are introduced to effectively reduce folding stress levels while enhancing structural stiffness, thereby presenting new technological challenges.Configuration design methods and design criteria have rarely been studied in detail.Subsequently, it is necessary to establish the performance characterization system and failure models of the DCBs.The design theories on the composite material and structures need to be systematically carried out.Then the optimal design of the DCBs will be carried out, and the optimal design scheme can be obtained finally.With regards to wrapped-rib antenna, the ribs need to have considerable stiffness at the deployment state and be foldable without material failure during the wrapping process.With high specific strength and mature processing technique, CFRP is a kind of suitable rib material.Besides, by changing the laminate layout, the stiffness of the boom could be designed to satisfy multi-requirements, which makes it superior to isotropic material.When CFRP booms are used as the ribs of the wrapped-rib antenna, the viscoelastic behavior of the HSC would greatly affect the performance of the antenna.On the one hand, the relaxation of composite material might have an influence on the deployment process of the antenna and make the deployment time increase, even make the antenna lose the deployability.On the other hand, the surface accuracy of the reflector might be affected with the stiffness degradation and residual deformation of the ribs after repeated and long-term stowage.It should be focused on the methods to predict the viscoelastic behavior in the future work.Besides, some structural measures might be taken to weaken the viscoelastic effect and make the antenna meet the requirements.The ribs are the crucial structural members of the antenna, so it's significant to formulate the way to design and optimize the geometric parameters and laminate layout of HSC ribs.
Composites modeling
Developing a viscoelastic-plastic composite material model that accurately predicts the time-dependent stiffness, plastic deformation, and failure mechanisms during long-term stowage is a complex and challenging task.To address this, the following key considerations should be taken into account in future studies: (1) Comprehensive material testing and characterization are essential to comprehend the complex viscoelastic-plastic behavior of the polymer matrix.A constitutive material model is required to effectively describe the complex material behavior.(2) Multiscale modeling techniques are crucial to establish the anisotropic viscoelastic-plastic behavior of composites.This involves bridging micro and macro scales while considering the time-dependent behavior of the materials.(3) The influence of space environmental factors, such as significant temperature changes in orbit, should be considered when assessing the composite materials' behavior during stowage.(4) Besides constitutive composite material models, there is also a lack of appropriate numerical tools tailored to address the complexities of viscoelastic-plastic behavior in composite materials.(5) Numerical optimization emerges as a valuable method for optimizing the folding and deployment performance of composite deployable structures.Constructing an optimization problem that incorporates the time-dependent behavior of the material, considerations for long-term stowage, and specific performance metrics for the composite deployable structure is essential.
Composites manufacturing
In the design process of composite deployable structures, focus is on material selection, manufacturing processes, and functional aspects for desired functionalities and shape recovery accuracy.High-performance CFRP composites are commonly used due to unique operational conditions.Resin properties play a critical role in achieving functionality.Precise resin selection involves numerical simulation analysis and functional testing.Optimization through reinforcement or toughening is essential.Attention to residual thermal stresses and deformations during manufacturing is vital.Consideration of mold design, adhesive bonding, and surface treatment is important.Techniques like mold compensation, adhesive optimization, and laser treatment enhance bonding quality and precision.Functional validation studies viscoelastic effects on deploying behavior and shape recovery accuracy.A comprehensive review of the curing process flow, characteristics, and key elements of deployable composite structures are provided.It focuses on the resin characteristics and selection criteria in the molding process of resin-based composite materials.Additionally, it delves into the mechanisms of thermal deformation and residual stress generation during the preparation process and their impact on the final structural properties.Furthermore, a series of typical curing molding process options are also enumerated.In the outlook section, effective solutions and suggestions are proposed to enhance the comprehensive mechanical properties and shape recovery precision of deployable composite structures from the perspective of preparation processes.
Fig. 1 |
Fig. 1 | Different configurations of deployable composite structures.a Foldable tubes.b Collapsible and rollable booms.c Elastic extension lattice.d Spring-back reflector antenna.
Fig. 2 |
Fig. 2 | Design of the composite deployable booms.a Deployable Composite Boom (DCB) is cut slots near the fold creases27,28 .b The honeycomb topology on the reed structures31 .c The most common crosssectional shapes of DCB.d Two asymmetric omegashaped shells forms a closed-section33,34 .e Four-cell lenticular combined cross-sections37 .f Eight C-shape combined sections38 .
Fig. 3 |
Fig.3| Multiscale modeling framework of homogenization methods for predicting the viscoelastic behavior of composite laminates.The multiscale modeling strategy can be summarized as a two-step process.The first homogenization step involves creating a microscale representative volume element (RVE) that represents the microstructure of the unidirectional composites.The fiber is assumed to be a linear elastic material, while the matrix is assumed as an isotropic, and linear viscoelastic material, which can be described by a generalized Maxwell model using the Prony series.The second homogenization step utilizes the obtained unidirectional composite (yarn) properties to analyze the behavior of the woven composite by constructing a mesoscale RVE.
Fig. 7 |
Fig. 7 | Chang'e-4's umbrella-shaped antenna, China Academy of Space Technology (CAST), Xi'an.Space large-scale umbrella mesh antenna, applied to Chang'e-4 (China's Lunar Exploration Project, CLEP) relay satellite, is becoming currently the farthest mesh antenna flying in the international community.
Table 1 |
Technical challenges of deployable composite structures
Table 2 |
Summary of studies on stowage effects for DCBs
Table 3 |
Summary of space applications | 13,204.8 | 2024-06-10T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Assessing the TSS Removal Efficiency of Decentralized Stormwater Treatment Systems by Long-Term In-Situ Monitoring
: Decentralized treatment of stormwater runoff from heavily polluted surface can be a good solution for effective source control. Decentralized stormwater treatment systems (DS) and test procedures to monitor their performance, have been developed in recent years. At present in Germany, only lab-based tests are officially established to determine the removal efficiency of Total Suspended Solids (TSS), and in situ monitoring is still lacking. Furthermore, the fine fraction of TSS with particle sizes less than 63 µ m (TSS63) have been established as a new design parameter in Germany, because of their substitute characteristics of adsorbing pollutant substances. For research and evaluation purposes continuous data of urban stormwater runoff quantity and quality at the in- and outflow of two different DS at two different sites were collected. Turbidity is used as a surrogate for TSS. Continuous turbidity data and time proportional sampling served to obtain (i) regression coefficients and (ii) to determine the TSS removal efficiency of DS. For a wide range of events the total removal efficiency of DS1 was 29% for TSS and 19% for TSS63 and for DS2 19% for TSS and 16% for TSS63. An event-based data analysis revealed a high variability of the efficiencies and its uncertainties. Moreover, outwash of still suspended or remobilization of already deposited material was observed at individual events. At both sites TSS63 dominates urban stormwater runoff as indicated by the mean ratios of TSS63 to TSS of 0.78 at the inflows and 0.89 at the outflows of both DS. A significant shift of TSS63 ratio from inflow to outflow demonstrates that TSS63 particles were removed less efficiently than coarser particles by DS1, for DS2 data was too heterogeny. It clarifies that common sedimentation methods can only contribute to a small extent to the reduction of solid emissions if the stormwater runoff contains mainly fine-particle solids. The findings suggest that treatment of urban stormwater runoff with high TSS63 pollution requires additional techniques such as a proper filtering to retain fine particles more effective.
Introduction
The water quality, ecology, and microbiology of receiving rivers are influenced by separate and combined sewer outlets, in addition to direct street runoff (e.g., [1][2][3]). Urban runoff transports high loads of particles that also act as main vector for particle-bound pollutants [4][5][6]. High concentrations and annual loads of heavy metals (Zn > Cu > Pb) have been detected in urban storm water which originated from vehicle brake emissions, tire wear, roof covering materials or atmospheric deposition [4,5,[7][8][9][10]. With decreasing particle size, the loads of heavy metals rise [11] and correlate significant to fine fraction of Total Suspended Solids (TSS63, with particles sizes of 0.45 µm < TSS63 < 63 µm) [12]. The characteristic of fine particles as TSS63 is classified by Hilliges (2017) according to ISO 14688 as a divide between settleable and non-settleable particles. Based on studies of Hilliges et al. (2017), Dierschke and Welker (2015), Zhao et al. (2010) and Selbig (2015) that focus on distribution and pollution of particles in road runoff, TSS63 was implemented in 2020 as a new design parameter in German stormwater management regulation [12][13][14][15][16].
Conventional sewer systems convey the stormwater runoff from different polluted areas to central treatment facilities, such as stormwater treatment tanks. As runoff of different areas with different levels of pollution becomes concertedly treated, even runoff from areas with low pollution must be treated. Additionally, highly polluted runoff gets diluted by low polluted runoff, meaning that the treatment system requires a higher hydraulic capacity and retention. As a consequence, the less polluted stormwater is missing in the catchment area for infiltration and evaporation, which interrupts the natural water cycle. Furthermore, hydraulic pressure and less pollution load can reduce the efficiency of treatment. Therefore, decentralized stormwater treatment systems that ensure treatment close to pollution sources provide a chance to efficiently reduce stormwater related emissions to the receiving water.
In recent decades, various decentralized stormwater treatment systems have been developed (e.g., [17,18]). These systems vary in shape and size, from road gullies, swales, manholes, and multiple chamber tanks, to sewer conduits of up to 12 m length (e.g., [17]). They often aim to combine treatment mechanisms, such as hydraulic retention, sieving, sedimentation, light fluid separation, filtration, and retention of dissolved heavy metals (e.g., [17]).
To ensure a quality control of their performance, various testing and approval procedures have been internationally developed that require standardized laboratories or several representative events under in situ conditions [18][19][20][21][22]. The determination of in situ pollutant removal is conducted with automatic sampling and based on flow proportional sampling [18][19][20]. However, little is known about the in situ efficiencies of these systems under long-term operation, especially for the new design parameter TSS63. Furthermore, the removal efficiency of sedimentation technologies in DS is of fundamental importance in view of the dominant fine particles and associated pollutants in stormwater runoff and their importance for catchment-wide stormwater management strategies [12].
Much progress has been made in the continuous monitoring of stormwater runoff quality using UV-vis-spectrometers or turbidity sensors to study intra-event pollutant dynamics and to estimate event pollutant loads [23][24][25][26][27][28][29]. These studies showed site-and event-specific characteristics of the occurrence and composition of pollutants, and revealed the highly stochastic nature of the build on and wash off processes, which may influence the removal efficiencies of DS as well.
The present study therefore intends to contribute findings regarding the following key questions: (i) Is turbidity a sufficiently useful surrogate parameter for concentration and composition in the influent and effluent of DS? (ii) How high are the TSS and TSS63 removal efficiencies during in situ long-term operation and how do they vary? (iii) What are the deterministic and stochastic components of the efficiencies?
For this purpose, experimental investigations of two different DS at two different locations were carried out, and the results thereof are reported here.
Monitoring Sites
Measurements were conducted at two sites, both located in the city of Münster, Germany (see Table 1 and Figure 1 for details). The catchment "Stadtgraben" (SG) (2.63 ha, DS coordinates 51 • 57 32.152 N, 7 • 37 8.253 E(WGS84)) is located close to the city center and dominated by a high traffic road and surrounded by commercial and office buildings. Stormwater runoff is collected by a separate sewer with a diameter of 500 mm and 6‰ to 7‰ slope. The catchment "Canisiusgraben" (CG) ( two main roads as major pollution sources. The storm sewer system ends with a diameter of 800 mm and 2.5‰ slope. N, 7°35'43.407" E (WGS84)) is a residential area with flat and steep roofs and two main roads as major pollution sources. The storm sewer system ends with a diameter of 800 mm and 2.5‰ slope.
Decentralized Stormwater Treatment Systems
DS1 at site SG is a SediPipe XL 600/12 (Fränkische Rohrwerke Gebr. Kirchner GmbH & Co. KG; Königsberg, Germany) that was installed in 2017. The pipe with 600 mm diameter and 12 m length operates as a permanently filled sedimentation unit with counter gradient. A special grate near the bottom prevents detachment of already settled sediments. An immersion tube in front of the outlet retains floating materials and light liquids (cf. schematic in Appendix A, Figure A1). In case of an event, stormwater is constantly pumped into DS1 at the rate of 6 L/s by a peristaltic pump (P-50-classic twin, Ponndorf, Kassel, Germany,). This corresponds to the runoff maximum of 0.4 ha impervious area recommended by the manufacturer.
DS2 was installed in 2018 at site CG and is a ViaTub 18R63 Lamella Clarifier (Mall GmbH, Donaueschingen, Germany,). The circular concrete tank with a diameter of 3 m contains a lamella separator for enhanced sedimentation in surcharged conditions (cf. schematic in Appendix A, Figure A2). The inflow from the storm sewer is limited to 35 L/s by a flow-controlled throttle valve. The lamella clarifier is operated according to valid regulations and is equipped with a side weir for discharges above the critical discharge level.
Decentralized Stormwater Treatment Systems
DS1 at site SG is a SediPipe XL 600/12 (Fränkische Rohrwerke Gebr. Kirchner GmbH & Co. KG; Königsberg, Germany) that was installed in 2017. The pipe with 600 mm diameter and 12 m length operates as a permanently filled sedimentation unit with counter gradient. A special grate near the bottom prevents detachment of already settled sediments. An immersion tube in front of the outlet retains floating materials and light liquids (cf. schematic in Appendix A, Figure A1). In case of an event, stormwater is constantly pumped into DS1 at the rate of 6 L/s by a peristaltic pump (P-50-classic twin, Ponndorf, Kassel, Germany,). This corresponds to the runoff maximum of 0.4 ha impervious area recommended by the manufacturer.
DS2 was installed in 2018 at site CG and is a ViaTub 18R63 Lamella Clarifier (Mall GmbH, Donaueschingen, Germany,). The circular concrete tank with a diameter of 3 m contains a lamella separator for enhanced sedimentation in surcharged conditions (cf. schematic in Appendix A, Figure A2). The inflow from the storm sewer is limited to 35 L/s by a flow-controlled throttle valve. The lamella clarifier is operated according to valid regulations and is equipped with a side weir for discharges above the critical discharge level.
Measurement Equipment
The continuous stormwater quality measurements at the in-and outlet of a DS comprise turbidity, electrical conductivity, and pH (with sensors VisoTurb 700 IQ, TetraCon 700 IQ and SensoLyt 700 IQ, respectively all Xylem Analytics Germany Sales GmbH & Co. KG, WTW, Weilheim in Oberbayern, Germany). Turbidity value is used as a surrogate for TSS. Electrical conductivity (EC) is measured to validate the start of an event, monitor the use of deicing salts during the cold season, and support analysis of heavy metals at DS due to the fact that higher EC correlates significantly with an increase in total Zn [12]. The pH value is targeted as a supporting parameter for the interpretation of dissolved heavy metals. In particular, at DS with backwater, the pH value can change during longer dry periods due to redissolution of particle bound pollutants in anaerobic areas [30]. Measurement of the primary parameter turbidity is based on 90 • scattered light measurement according to DIN EN ISO 7027 [31] and is expressed in Formazine-Nephelometric Units (FNU) equal to a Nephelometric Turbidity Unit (NTU). The turbidity-sensor measurement range is from 0 to 4000 FNU, and resolution ranges between 0.001 FNU to 1 FNU and depends on the current measured value. Process variation coefficient is less than 1% in the range up to 2000 FNU according to DIN 38402-51 [31]. In-and outlet sensors are connected to a central transmitter (MIQ/TX 2020 XT, WTW, Weilheim, Germany) with data values logged in 1-min time steps.
Sewer water level sensors (OCL-L1/DSM, NIVUS; Eppingen, Germany) with uncertainty u ≤ ±0.5% of final value or ±5 mm [32] and combined sensors for flow velocity and water level (POA-V2H1/V2U1/CSM-D, NIVUS, Eppingen, Germany,) with velocity u = ±0.5% to 1% of final value and pressure u ≤ 0.5% of final value [32], are installed on sites. The water level value is used to trigger automatic sampling (ASP station, Endress + Hauser, Rheinach, Switzerland,) and the start of pumping at site SG. At the site SG the rain gauge (Pluvio2, OTT Hydromet, Kempten, Germany,) is installed on a flat roof (cf. Figure 1C, RG Paulinum) and collects data with a 0.01 mm threshold [33], logged in 1-min time steps. Precipitation data are recorded with the same system and configuration on a flat roof of one of the university buildings (cf. Figure 1B, RG FHZ) and used for site CG.
Sampling Method and TSS Analysis
The water level value (threshold criteria: 3 cm at SG and 5 cm at CG, each with 0.5 cm hysteresis) started and ended automatic time continuous sampling. The sampling procedure consisted of 200 mL sample shots every 2 min and a merging of 5 shots into one composite sample of 1 L for a maximum of 12 polyethylen (PE) bottles per event (for rain event statistics see Appendix A Table A1). Sample storage sections on site and in laboratory were constantly cooled to 4 • C. In the laboratory, each composite sample was analyzed for turbidity (VisoTurb 700 IQ) and TSS according to DIN 38409-2 1987 [34] comparable to the Environmental Protection Agency (EPA) Method 160.2 [35] and Standard Method 2540D [36] with a 0.45 µm cellulose nitrate membrane filter (Typ11306, Sartorius, Goettingen). For turbidity measurement, a black cylindrical polyethylene high density (PE-HD) bottle was used, and the sample was homogenized with a magnet stirrer at 450 rpm as described in [23]. TSS was divided into the two compartments TSScoarse (2 mm > x > 63 µm) and TSS63 (63 µm > x > 0.45 µm) by sieving (2 mm and 63 µm test sieve, Retsch GmbH, Haan, FRG). Because a standard operating procedure for separation and analysis of TSS63 [37] was missing, the method recommended by [14] was applied.
Data Processing and Management
For data import, backup, and analysis OSCAR, a data management system developed by one of the authors, was used [38]. Measurement data per site and sensor were stored using an open-source time series platform (InfluxData, San Francisco, 2020 [39]) and visualized with a web interface (Grafana Labs, New York, 2020 [40]).
For both sites, the following data processing and analysis were conducted with reference to [41]: (i) verification, (ii) correction, (iii) transformation (linear regression turbidity to TSS), (iv) event selection and (v) calculation of event parameters. Event statistics (description, duration, intensities) were calculated for rainfall, runoff and loads. The event selection criteria were minimum rainfall depth H > 2 mm and maximum rainfall intensity in 60 min Imax60 > 2 mm/h, and additionally at site CG, bypass volume < 1 m 3 per event.
TSS composite samples were checked for distribution and their effect on the goodness of fit of linear turbidity to TSS regression. In particular, the TSS63 ratio was considered and evaluated regarding the effect on regression coefficients. Further values with relative residuals > 3 σ and diagnosed using the function influence measures in R [42] were determined as outliers.
Determination of Load Removal Efficiencies
To determine the event-specific and site-specific TSS load B E (kg), continuous turbidity measurements were converted into TSS concentration c TSS (mg/L) with a linear regression model, using Equation (1) according to [27,43,44]. B E for in-and outflow was obtained from the product of c TSS,i and discharge Q i (m 3 /s), and multiplied with the measuring interval ∆t (i.e., 1 min) according to Equation (2). The TSS removal efficiency η E,B is determined with Equation (3).
where i = index of the time series, n = number of time steps of an event TSS removal efficiency (%): where B E,out = event TSS load in the outflow, B E,in = event TSS load in the inflow. TSS63 concentration c TSS63 (mg/L) as a fraction of TSS cannot be derived directly from turbidity data. Therefore, as a first approximation to estimate a TSS63 removal efficiency η E,B,TSS63 (%), the mean value of the gravimetrically determined ratio of TSS63 to TSS from composite samples was considered. The ratio f (-) of c TSS63 and c TSS was obtained with Equation (4). The η E,B,TSS63 was calculated with Equation (5). For the opposite fraction of TSS63, a first estimation of its removal efficiency η E,B,TSScoarse (%) was calculated using Equation (6). (5) with f out,mean = mean ratio of TSS63 to TSS in the outflow, f in,mean = mean ratio of TSS63 to TSS in the inflow
Determination of Uncertainties
Two key values are subject to uncertainty estimations according to ISO/IEC Guide 98-3:2008 [45]. Of major interest are (i) the uncertainty of the ratio f of TSS63 and TSS Water 2021, 13, 908 6 of 23 resulting from lab analysis and (ii) the uncertainty of the removal efficiency η E caused by turbidity measurements as a proxy for TSS.
Relative uncertainty of TSS63 ratio (−): The ratio f of c TSS63 and c TSS and its uncertainty u f and relative uncertainty u * f are expressed with Equations (4), (7) and (8).
The uncertainty of the concentrations u c,TSS (-) and u c,TSS63 (-) were estimated using the Type B method [45]. From previous analytical quality assurance, it can be assumed that the lab results for c TSS63 and c TSS are normally distributed with a 95%-confidence interval of ± 10%. The relative standard uncertainty according to [45] is therefore u * c,TSS = u * c,TSS63 * = 0.03876. To study the single uncertainty effect of turbidity measurement as proxy for c TSS the discharge uncertainty u Q and time series covariance were set to zero. For each time interval t i (min) the mass flow rate . m i (kg/s) and its uncertainty u m,i (-) were calculated by Equation's (9), (10) and (11). The calculation of uncertainty of the event-specific TSS load u B,E is expressed by Equation (12).
Mass flow rate (kg/s): Uncertainty of mass flow rate (−): where i = index of the time series, Q i = discharge at time i.
where i = index of the time series, n = number of time steps of an event, and ∆t = measuring interval The removal efficiency η E,B and its uncertainty u η E,B and relative uncertainty u * can be calculated from the TSS loads of the DS inflow B E,in and outflow B E,out and their uncertainties u B,E,in and u B,E,out t , respectively, by Equations (13) and (14).
Relative uncertainty of removal efficiency (−): The concentrations were calculated using a regression model with Equation (1). The uncertainty of the regression model is increased by the uncertainties of turbidity measurement and TSS analysis. Therefore, the residuals ∆c (mg/L) between the model-based concentrations c calc (mg/L) and the lab analysis c lab (mg/L) therefore result from multiple sources and can be used to estimate the uncertainties of the concentration c. The residuals ∆c and the relative residuals ∆c * can be obtained with Equations (15) and (16).
Relative Residuals (−): ∆c * = ∆c c calc (16) Water 2021, 13, 908 7 of 23 The relative uncertainty u * c can be estimated using the Type A method [45] from the standard deviation s ∆c * of the relative residuals ∆c * of i = n data pairs (c calc,i ; c lab,i ) by u * c = s ∆c * . The uncertainty u c,i of the concentration time series data can be estimated by: It is important to note that the above-mentioned procedure only aims to identify the influence of uncertainties of turbidity as a proxy for TSS concentration data. It therefore sets uncertainties of discharge and covariances to zero. Both must be considered for total uncertainty calculations.
Results
Presentation of the results follows the process of sample evaluation, starting with the analysis of composite samples. Next, the regression model is shown that is based on TSS analysis. With the determined coefficients the turbidity data are converted to TSS for calculation of loads and removal efficiencies. The calculation is followed by a further examination of the uncertainties. In the last step a mass balance is determined based on composite samples.
Analysis of TSS and TSS63
Sampling was carried out from January 2018 to July 2019. To derive the regression function at the site "Stadtgraben" (SG) 210 inflow and 91 outflow composite samples of 1 L were analyzed. At the site "Canisiusgraben" (CG) 221 samples were taken from the inflow and 189 from the outflow. Table 2 lists the descriptive statistics of the evaluated TSS concentrations for 2018 (all seasons), for 2019 (without fall and winter) and a summary of all samples (rain statistics are shown in Appendix A Table A1, TSS statistics are shown in Appendix A Table A2). At site SG, a previous catchment study recorded TSS concentrations in sewers with median TSS concentration in 2015 (67.1 mg/L) between observed values of 2018 (123 mg/L) and 2019 (32.1 mg/L) [23]. The previous study did not cover TSS63 concentrations. Table 2. Descriptive statistics of TSS concentration and TSS63 ratio from composite samples per site, sample position, and year with raw data. SG data are compared to previous sewer data from [23]. At all sites and sample positions, concentration statistics in 2018 were above the values in 2019. Median values decreased from inflow to outflow at both sites. The overall mean ratios of TSS63 to TSS concentration at SG and CG in the in-and outflow were 0.78, 0.89, 0.79 and 0.82, respectively. Ratios of TSS63 to TSS increased per site from inflow to outflow Water 2021, 13, 908 8 of 23 by 0.11 at SG and 0.03 at CG. A single factor analysis of variance (ANOVA, with alpha = 0.05) was conducted for TSS63 ratio data samples for in-and outflows per site. The resulting p-value of 5.12 ×10 −6 at SG indicates that the TSS63 ratios differed significantly. The result for CG with a p-value of 6.2 ×10 −3 , indicates no significant shift. Figure 2 shows the distribution function of determined TSS concentrations. The lower graphs of SG inflow and outflow illustrate the higher TSS pollution of the catchment, in comparison to CG. The progressions of in-and outflow graphs per site are narrow. This is covered by the descriptive TSS statistic. Table A1).
Site
The TSS63 ratio shift between sample positions and over sampling time is illustrated in Figure 3. Only events with more than six of 12 composite samples were considered (with inflow n = 20 events and outflow n = 8 events, total samples n = 281). Not enough samples were present for the first composite sample at the outflow due to a water level deficit at the beginning of the event. During sampling, the TSS63 ratio at the inflow rose. The increase in TSS63 ratio between inflow and outflow samples is likely to occur due to better removal of coarse particles. Table A1).
The TSS63 ratio shift between sample positions and over sampling time is illustrated in Figure 3. Only events with more than six of 12 composite samples were considered (with inflow n = 20 events and outflow n = 8 events, total samples n = 281). Not enough samples were present for the first composite sample at the outflow due to a water level deficit at the beginning of the event. During sampling, the TSS63 ratio at the inflow rose. The increase in TSS63 ratio between inflow and outflow samples is likely to occur due to better removal of coarse particles.
Relation between TSS and Turbidity
The correlation between turbidity and TSS concentration was determined using linear regression. Table 3 lists the regression coefficients and goodness-of-fit obtained for each site and sample position. In experiments with variable concentration and particle size distribution of quartz flour, the influence of particle size distribution on the resulting turbidity was clarified [43]. It could be shown that samples at constant concentration, but with increasingly higher fine content, cause higher turbidity. For a conversion from TSS data into turbidity this results in a higher slope, whereas a conversion from turbidity data into TSS results in a lower slope as higher the TSS63 ratio is. The TSS63 ratio shift between sample positions and over sampling time is illustrated in Figure 3. Only events with more than six of 12 composite samples were considered (with inflow n = 20 events and outflow n = 8 events, total samples n = 281). Not enough samples were present for the first composite sample at the outflow due to a water level deficit at the beginning of the event. During sampling, the TSS63 ratio at the inflow rose. The increase in TSS63 ratio between inflow and outflow samples is likely to occur due to better removal of coarse particles. At sites, the average mean TSS63/TSS ratio was close to 0.8 with sd~0.2. To dampen the influence of potential outliers we set mean ± 1.5 × sd as the selection criterion with a resulting TSS63 ratio range for regression of 0.5 to 1.0. This ensures that the final correlation represents most events reliably. Because of the neglect of lower TSS63 ratio samples, the slope coefficient may decrease, which was only recorded for the coefficients of site SG.
After data preparation, the coefficients of R 2 (adjusted) were above 0.75 at the site "Stadtgraben" and above 0.86 at the site "Canisiusgraben". The standard error for slope has decreased, whereas R 2 (adjusted) improved. For site SG, sewer turbidity and TSS concentrations data from a previous study of the catchment [23] are available, with b = 0.97, a = 7.93 and R 2 = 0.68 based on 85 samples from 16 events. Figure 4 shows the linear relationship between TSS and turbidity that can be observed (A) with and (B) without samples that meet the selection criterion. The similar gradient coefficients also emphasize a similar particle characteristic per site. Both DS treat the polluted runoff mainly through sedimentation.
TSS Load Removal Efficencies
The long-term in situ performance of the two DS was evaluated based on continuous turbidity time series. Table 4 lists the determined TSS loads and removal efficiencies of both sites for two periods. Period 1 was the first monitoring cycle that started on November 2017 for SG, on May 2018 for CG and continued until July 2019. In period 1, 2018 with a low rain total was covered. Period 2 covers the prolonged monitoring time until November 2020. The , , was 7.9 kg (n = 51) and 7.6 kg (n = 91) at site SG and was comparable over both periods. At site CG, , , decreased in period 2 probably because of fewer valid events with high pollution due to sensor failure.
TSS Load Removal Efficencies
The long-term in situ performance of the two DS was evaluated based on continuous turbidity time series. Table 4 lists the determined TSS loads and removal efficiencies of both sites for two periods. Period 1 was the first monitoring cycle that started on November 2017 for SG, on May 2018 for CG and continued until July 2019. In period 1, 2018 with a low rain total was covered. Period 2 covers the prolonged monitoring time until November 2020. The B E,in,median was 7.9 kg (n = 51) and 7.6 kg (n = 91) at site SG and was comparable over both periods. At site CG, B E,in,median decreased in period 2 probably because of fewer valid events with high pollution due to sensor failure. 1 In monitoring period 1, 33 events were valid, and in period 2, 51 events were valid, of which 7 and 9 events, respectively, showed inflows above the critical rainfall bypass discharge volume of >1 m 3 . These events were excluded to determine the correct load efficiencies for DS2.
At site SG, the sum of inflow load of the 91 events was B in = 1413 kg, and 407 kg was retained by sedimentation in the DS. This corresponds to a removal efficiency of η B,SG = 29% (40% in period 1). For site CG, loads of period 2 were calculated as B in = 556 kg and B out = 452 kg. The removal efficiency of 42 events in period 2 was η B,CG = 19%, as before in period 1.
Long-term removal efficiency estimations for TSScoarse (2 mm > x > 63 µm) and TSS63 are given in Table 5. For TSScoarse, the estimated removal efficiencies were η B,coarse,SG = 59% and η B,coarse,CG = 28%. For TSS63, the estimated removal efficiencies were lower, with η B,63,SG = 19% and η B,63,CG = 16%. Values under the separation line indicate a positive removal efficiency, whereas values above the line indicate a negative efficiency. At site SG, events with negative efficiencies are visible, which occurred in period 2 and therefore have a decreasing effect on the removal efficiency in comparison to period 1. At site CG, events with negative efficiencies occurred in periods 1 and 2. Removal efficiency ranged between −120% and 100% at site SG, and −35% to 96% at site CG. TSS load mean values decreased at each site from inflow to outflow, from 16.4 to 11.2 kg at site SG, and 13.9 to 11.0 kg at site CG. By applying the rain event selection criteria (H > 2 mm and Imax60 > 2 mm/h) on the monitored rain sum (2126 mm) the effective rain sum was determined (1653 mm). For each site, the event rain sum per year was divided by the effective rain sum per year.
The resulting rain treatment ratios are listed together with the seasonal event distribution in Table 6. The highest value per year and total treatment ratio were at site SG, with values of 0.61 (2018) and 0.49 (total). Due to a later monitoring start in 2018 and fewer events in 2020, the treatment ratios for DS2 at site CG were lower, with the highest being 0.36 (2019) and a total of 0.27. At both sites, the seasonal event distribution showed fewer events in spring. Most events were recorded in winter and fall at site SG. At site CG most events were recorded in fall and summer.
Uncertainty of TSS Load Removal Efficency
After data processing, the relative residuals ∆c * were analyzed for in-and outflow of both DS. The standard deviations given in Table 7 were used as estimates for the uncertainties u * c = s ∆c * per sample position. The uncertainty u η of the removal efficiency for two selected runoff events at both sites in Table 8 show low values at site SG and moderate to large values at site CG. This corresponds to the standard deviations of the relative residuals ∆c * . Because the removal efficiencies η are low in all cases the relative uncertainties u * η are high and extremly high for event CG-109 with a very low removal efficiency.
Mass Balance
A calculation of mass balance was conducted due to time proportional sampling and constantly pumped discharge at the site SG. The calculation is based on TSS concentration samples for the first two hours of an event. Only data pairs of in-and outflow concentrations were used. Table 9 lists the results of the mass balance for 75 sample pairs out of nine events. The DS1 achieved a removal efficiency of 47% during the sampling time frame of 2 h. The TSS63 ratios per sample were summarized for inflow and outflow to note the TSS63 shift of 5.3% due to treatment.
Relation between TSS and Turbidity
For long-term monitoring, a consistent TSS63 ratio is crucial to obtain a sufficient conversion of turbidity to TSS. One key factor is the correct installation of turbidity-sensors at the in-and outflow of a DS. DS of smaller size might lack enough space for installation, which is crucial to ensure correct turbidity measurements without signal interference from DS walls or tubes.
For all sample positions, reliable regression coefficients were determined. The TSS analysis at both sites showed intra-event runoff with variable TSS concentrations but with consistent high proportions of TSS63 (cf. Appendix A Table A2) in the range of 0.78 to 0.89 in average. During data preparation, samples with TSS63 ratio < 0.5 and high leverage were excluded for linear regression. Selection of samples with higher TSS63 ratio for regression of turbidity to TSS can lead to a lower slope, whereas exclusion of high leverage samples can have an effect in both directions. A lower slope for conversion of turbidity data to TSS would lead to a lower load in total at the specific sample position, which contributes direct to the long-term removal efficiency. Slope decreased only for SG inflow from 1.21 to 0.96. At site CG, slope increased from 0.84 to 0.91 (inflow) and 0.94 to 0.99 (outflow). Values of regression were comparable for slope in the range of 0.91 to 0.99. Against this background, a higher influence of the turbidity on the slope due to a higher TSS63 ratio at the respective DS outflows was expected but could not be confirmed by the coefficients. The intercept deviated from 2.31 (in) to −0.41 (out) at SG and from −1.94 (in) to −3.96 (out) at CG. Because these values are negative, an effect with changed boundary conditions (0/0) on slope and final uncertainties must be investigated further.
For The total number of used samples per regression ranged from 83 to 182. The magnitude of the difference between raw data samples and prepared data samples was the lowest at 8 (SG, outflow) and the highest at 40 (CG, inflow). The relative sample reduction during data processing was 9% to 18%, with an average of 13%. After data processing, the R 2 (adjusted) was higher by 0.33 on average, except for SG outflow that remained the same with R 2 (adjusted) of 0.75. The chosen statistical criteria to ensure a better fit of linear regression over the long-term cam be recommended. Due to the reliable fit of the individual regressions per sample position, the parameter turbidity can be used as a surrogate for TSS to monitor long-term removal efficiency of DS.
TSS Load Removal Efficencies and Uncertainties
The analysis of TSS load removal for DS1 showed a long-term efficiency of η B,TSS = 40% in period 1 (n = 51 events) and η B,TSS = 29% (n = 91 events) over the full monitoring cycle.Further removal efficiency estimations for TSS63 and TSScoarse are η B,TSS63 = 16% and η B,TSScoarse = 59%. The composite sample-based mass balance for period 1 (n = 75 samples, nine events) resulted in η B,TSS = 47% for the first two event hours and effected a TSS63 ratio shift of 5.3%. A DS performance rating based on mass balance or monitoring results of period 1 would overestimate the long-term efficiency, because 2018 was a dry year with the highest observed TSS concentrations in urban stormwater runoff. Therefore, the result of period 2 (2018-2020) allowed a better assessment. Uncertainty evaluation of selective events at SG showed variable u η of 3% to 5% with relative uncertainty u * η of 9% to 30%. Further evaluation is necessary to achieve reliable uncertainties according to longterm removal efficiency. The estimated higher removal efficiency for TSScoarse particles displayed a theoretical in situ performance peak of the DS, when influent contained no particles < 63 µm. However, the lower removal efficiency for fine particles indicates an achievable value in the case of a high TSS63 ratio. Due to variance in the ratio of TSS63 to TSS and seasonal difference in temperature, organic pollution compartments, and therefore, particle density and potential to aggregate to conglomerates, additional factors have an influence on removal efficiency if reduction is only targeted by sedimentation [47]. For example, no remobilization or outwash was recorded in the first period at DS1, but in period 2. Visual analysis of such an event with a negative efficiency in fall showed a high rest turbidity in the outflow from the previous event (criteria to separate events = rain stop > 4 h). One assumption is, that the high rest turbidity occurred due to a high ratio and concentration of TSS63 combined with a low density of particles. Remobilization due to varying hydraulic inflow can be excluded due to constant pump inflow of 6 L/s. Investigations of environmental conditions and DS settings of these event phenomena would be informative. To obtain more clarity of event concentration even after monitoring implementation, a time-integrated full event composite sample backup would be beneficial. Furthermore, the DS1 was inspected and cleaned in July 2019 and October 2020 according to the maintenance instructions.
At site CG, for DS2 the long-term removal efficiency was η B,TSS = 19%, whereas for TSS63 and TSScoarse they were η B,TSS63 = 16% and η B,TSScoarse = 28%, respectively. The DS2 TSS removal efficiency for periods 1 and 2 remained the same. This constant long-term efficiency could represent a consistent treatment. Nonetheless, the lack of difference was because fewer new events in period 2 were monitored. Additionally, the events in period 2 had a lower runoff pollution of TSS and, therefore, had a lower leverage on the overall removal efficiency. Uncertainty evaluation of selective events at CG showed high variable u η of 9% to 19% with high and extremely high relative uncertainty u * η of 30% to 353%. The monitoring of DS2 resulted in a low long-term efficiency with high uncertainties. One variable that has contributed to the low efficiency is the low TSS concentration. The median inflow TSS concentration of 25.4 mg/L was close to minimum TSS concentration of 20 mg/L from [19] to be considered as an assessable pollution event. In addition, the DS2 manufacturer declares a separation only for particles with a size range down to 0.1 mm [48]. Therefore, the DS2 is not applicable for urban stormwater runoff treatment with such a high TSS63 ratio. The observed removal efficiencies were only conclusive for the configured throttle discharge of max 35 L/s. Negative removal efficiencies might be caused by high proportions of fine particle material or too high flow rates. Under these site-specific conditions, the DS operator should check the throttle function and examine the option of throttle reduction to improve DS performance.
In addition to the focused TSS removal of particles with a maximum size of 2 mm, DS2 removed more granular particles with its lamella clarifier. During maintenance observation in November 2020, mud levels up to 60 and 30 cm were observed at the in-and outflow chambers, respectively. The high mud level in the outflow chamber amplifies the theory of strong remobilization due to high discharges that were able to transport even coarse material through and over the lamella clarifier to the outflow chamber and from there to the river. The mud consisted mostly of sand, leaves, sticks, and cigarette butts. The exact composition of the mud must be analyzed further.
TSS and TSS63 in Urban Stormwater Runoff
In both monitored DS, the stormwater runoff is primarily treated by sedimentation of particles. The effectiveness was tested and demonstrated for both DS according to the lab based test [21] with the inorganic TSS-surrogate quartz flour "Milli Sil W4". Therefore, first, a TSS concentration reduction in terms of lower mean and median values, and, second, a change in the particle size distribution, were expected from inflow to outflow, because coarser particles tend to sediment rather than finer particles.
A TSS reduction between inflow and outflow can be recognized for DS1 by lower mean, median, and sd values. For DS2, the mean and sd values were lower in the inflow than in the outflow. However, the median value was higher in the inflow than in the outflow as expected. The inequality of mean values in composite samples could occur from a low TSS concentration combined with remobilization, which might have been covered due to sampling. The previously recorded TSS concentration values of the sewer at site SG in 2015 are lower than our recorded values. A loss of TSS due to pumping can therefore be discounted.
The mean TSS63 ratios were similar in the inflow at both sites (0.78 SG and 0.79 CG). Other studies report equal TSS63 ratios for road runoff, with values of 0.82 and 0.85 [12,16]. After treatment, the ratios shifted differently. At SG, the mean TSS63 ratio in the outflow increased to 0.89, which was reported as a significant difference by ANOVA. This shift indicates a more effective reduction of TSScoarse with particle size > 63 µm. At CG, the increased outflow TSS63 ratio mean of 0.82 was smaller and the difference is not significant.
The urban stormwater runoff pollution differed in concentration between sites. Because wash off processes were not considered in this study, no detailed statements can be made, except that the daily average vehicle and cover of the impervious area were higher at SG than at CG. Selbig 2015 [16] shows with a particle size distribution (PSD) of urban stormwater runoff a median particle diameter (d 50 ) of 8, 32, 43, 50, 80, and 95 µm for six different types of land use, with a collector street yielding the lowest d 50 . As an outcome, it must be assumed that TSS63 contributes with a minimum of 50% to the urban stormwater runoff pollution and even higher for streets.
Indications for Further Planning and Operation of DS
Urban stormwater runoff consists of high proportions of TSS63, which can significantly affect the ecological quality of receiving waters. During rain events with a high stormwater runoff concentration of TSS, as recorded in 2018, increased removal efficiencies were observed. Before installation of a new DS, the pollution characteristics of catchments should be evaluated, for the concentrations and ratios of TSS63 and TSScoarse over several events. It must be considered whether an exclusive sediment treatment is sufficient for TSS loads with high TSScoarse ratio or if filtration is more suitable according to a dominant TSS63 ratio. Sansalone (1997) shows that heavy metals, such as Zn, Cd, and Cu, are mainly dissolved and their proportion of the total concentration underlies seasonal influences [11]. This observation indicates that a filtering technique should be considered in the design of DS if treatment of high polluted urban stormwater runoff is targeted. Manufacturers already invented exchangeable filter-cassettes targeting this demand.
Improvement of DS Evaluation
Results from the lab and in situ tests differed. The treatment and removal of fine particles should be the main objective of DS. This could be tested simply in a laboratory by implementation of two further tests. The tests could be conducted by manufactured predefined critical discharge and two fractions of TSS surrogate with (a) particles only >63 µm and (b) with particles only <63 µm. This would show the range of possible removal efficiency values for the TSS shares.
To improve and validate our in situ monitoring approach, it would be beneficial to validate the turbidity measurements of in-and outflow during lab test. As an outcome, this could provide a further standard for the installation of sensors at a DS. Obligatory turbidity measurements during laboratory tests would greatly facilitate and deepen the process analysis of the in situ data.
In situ investigations of DS are regarded to be mandatory for performance assessments due to the highly variable processes of pollutant build-up, wash-off, and transport processes of storm water runoff in terms of time and place.
Conclusions
Two different decentralized stormwater treatment systems (DS) at two different sites were monitored for 2 to 3 years. Continuous turbidity data and composite samples from inflow and outflow were used to evaluate the in situ TSS and TSS63 removal efficiency. From the analysis it can be concluded that:
•
In situ treatment is not at par with lab results. Even with constant hydraulic conditions, high variability of efficiency was observed.
•
For the two monitored sites with a high mean TSS63 ratio of 0.8, sedimentation treatment does not meet the treatment objective to adequately protect the receiving water and exposed organisms.
•
The significant TSS63 ratio shift from inflow to outflow indicated that DS with an exclusive sedimentation treatment can achieve better removal efficiencies with respect to coarser particles (>63 µm).
•
The TSS63 as a new stormwater design parameter and especially its fraction of TSS should be given more attention to characterize stormwater quality as it affects the correlation to turbidity.
•
It is well known that using turbidity as a proxy of TSS introduces uncertainties. However, as the explorative results indicated, further investigations especially when evaluating the removal efficiency of DS are strongly recommended.
•
Due to the physics of sedimentation and its limits, treatment of urban stormwater runoff with high TSS63 pollution requires additional techniques, such as filtering to retain fine particles more effective.
Decentralized stormwater treatment systems are complementing measures for efficient stormwater management. With the introduced online-monitoring concept in situ insights of the performance were gained.
Author Contributions: C.L.: paper coordination and writing, data analysis, visualization. D.L.: project concept and funding, methodology, data management and analysis, supervision, and contributions. M.U.: project concept and funding, uncertainty analysis, supervision, and contributions. All authors have read and agreed to the published version of the manuscript.
Funding:
The research work and software developments are part of the research project "Leistungsfähigkeit großer dezentraler Niederschlagswasserbehandlungsanlagen unter realen Betriebsbedingungen" (DezNWBA Phase 1 and Phase 2) which has been supported by the Ministry for Environment, Agriculture, Conservation and Consumer Protection of the State of North Rhine-Westphalia (MULNV NRW).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
Restrictions apply to the availability of these data. Data was obtained for MULNV NRW and are available from the authors with the permission of MULNV NRW. Sampling data is contained within the article Appendix A.
Acknowledgments:
The project "DezNWBA" was funded by the Ministry for Environment, Agriculture, Conservation and Consumer Protection of the State of North Rhine-Westphalia (MULNV NRW). The authors would like to thank the staff of MULNV NRW for their professional support and the City of Münster for their support and cooperation at the monitoring sites.
Conflicts of Interest:
The authors declare no conflict of interest. | 10,535 | 2021-03-26T00:00:00.000 | [
"Engineering"
] |
Digital Transformation and R&D Innovation: The Moderating Effect of Marketization Degree
. With the continuous development of socialist market economy, more and more enterprises use digital technology to realize digital transformation. Digital transformation has an important impact on enterprise R&D and innovation. Based on the data of Shanghai and Shenzhen A-share listed companies from 2016 to 2020, this paper empirically tests the impact of enterprise digital transformation on R&D innovation. It is found that digital transformation has a significant positive impact on R&D innovation, and the degree of this impact is related to regional differences and the nature of equity. Digital transformation has a more significant impact on enterprise R&D innovation in high marketization areas. This effect is generally significant in areas with low degree of marketization, but the degree of influence is lower than that in areas with high degree of marketization. The digital transformation of non-state-owned enterprises has a more significant impact on R&D innovation; The influence of state-owned enterprises on the whole is significant, but the influence degree is lower than that of non-state-owned enterprises. Therefore, it is a feasible path to enhance the level of enterprise R&D innovation and strengthen digital investment to realize the digital transformation of enterprises. The government should also support the digital transformation of enterprises according to the location and nature of enterprises.
Introduction
Tapscott et al. first proposed digital economy (Don Tapscott et al., 1999). After the 21st century, with the popularization of Internet and other technologies, digital economy became hot in China. After the reform and opening up, China's economy has developed rapidly, and people's income and consumption level have risen significantly. After 2015, with the supply-side structural reform and the target of "overcapacity reduction" proposed, enterprises need to locate more accurate consumer demand, correct production, and resolve excess capacity. Faced with huge and diverse consumer demands, enterprises need to acquire and integrate a large amount of market information, and analyze and position market demands based on this, so the demand for digital technology arises at the right moment. The report of the 19th National Congress of the Communist Party of China proposed the construction of "digital China". The Central Committee of the Communist Party of China has attached great importance to the digital transformation of enterprises, and it is an irresistible trend for enterprises to apply digital technology to realize digital transformation.
Digital technology includes information technology, big data technology and Internet technology. Through digital transformation, enterprises can integrate existing technologies and form new products or technologies, that is, "portfolio innovation". Digital technology has redefined the relationship between enterprises and consumers, channels and the value chain of enterprises, and innovated the logical paradigm for enterprises to create value, namely "business model innovation". Digital technology enables faster and more convenient information transmission, promotes organizational reform, pays more attention to equal communication, extended training, career planning and other ways, and improves employees' sense of honor and sense of belonging, namely corporate culture innovation (Qi Yudong, CAI Chengwei, 2019). In recent years, the global "core shortage tide" has caused the security of supply chain worries. Europe and the United States have expressed concerns about the excessive concentration of the semiconductor industry in Japan, South Korea and Taiwan, and put forward a series of stimulus plans to attract the return of the chip industry. The localization of China's chip industry chain is also accelerating (China Newsweek, 2021). This shows that it is very important for enterprises to intensify research and development and improve the level of science and technology to get rid of technological dependence and cultivate core competitive advantages.
Some papers have studied the relationship between digital transformation and enterprise innovation, but few papers consider the impact of digital transformation on enterprise innovation under different marketization processes, which provides an opportunity for the study of this paper. This paper first examines the impact of digital transformation on enterprise innovation, and then further analyzes the impact of digital transformation on enterprise innovation under different marketization processes. It verifies that digital transformation can improve resource allocation efficiency and promote enterprise R&D behavior by complementing the deficiency of regional marketization degree. It also plays a synergistic role in promoting the development of areas with high marketization degree. Starting with digital transformation and enterprise R&D and innovation, this paper establishes the model of "Enterprise digital transformation --R&D and innovation", analyzes the promotion effect of enterprise digital transformation on R&D and innovation from the perspective of heterogeneity, and analyzes the difference of the promotion effect of enterprise digital transformation on R&D and innovation from the perspective of equity nature. In addition, suggestions are put forward for enterprises to improve the level of R&D innovation and economic benefits, which has reference significance for enterprises in different regions with different degree of marketization and different ownership nature to improve their independent innovation ability.
Literature Review
Since the innovation-driven development strategy was put forward in the 18th CPC National Congress, innovation has become a key factor for enterprises to improve market competitiveness, and R&D innovation is an important content of the broad concept of innovation. Driving factors on enterprise innovation, Liu Wei (2015) argues that the market structure, government support, enterprise scale, the ownership structure of four environmental factors significantly influence the innovation efficiency of high and new technology enterprise, and the less government support, enterprise denationalization proportion is higher, the higher the degree of market concentration, enterprise scale, the greater the enterprise innovation efficiency is higher. Shen Guobing etc. (2022) from the perspective of the protection of intellectual property rights and innovation model research pharmaceutical industry research and development innovation, think to strengthen the protection of intellectual property rights can significantly enhance pharmaceutical enterprise innovation level, British and American innovation model more than continental Europe innovation mode and Chinese innovation mode can promote the research and development innovation, it has reference significance to China. As for the changes brought about by enterprise innovation, Jonker et al. (2006) drew a conclusion that R&D innovation was significantly positively correlated with output performance through the study of mechanical paper industry. Chen Heng and Hou Jian (2016) also believe that R&D personnel investment in high-tech industries has a significantly positive effect on improving industrial science and technology performance than technology introduction, and is limited by the regional knowledge accumulation level.
Under the trend of enterprises applying digital technology to realize digital transformation, many scholars have observed the changes brought by digital transformation to enterprises, especially the impact on enterprises' innovation behavior. Yi Loulou etc. (2021) through text analysis to obtain the text associated with the digital transition on the number of word frequency and logarithmic treatment after as measure of the strength of digital transformation, the study found that the impact on corporate performance digital transformation, and the impact characteristics by internal and external economic policy uncertainty. Liu Zheng et al. (2020) believe that digital transformation of enterprises promotes organizational change and causes organizations to weaken the power of senior executives and increase the power of grass-roots level, and that specialized knowledge and agency cost play a key role. He Fan and Liu Hongxia (2019) believe that under the influence of digital economy policies, enterprises' digital transformation has deepened, and digital transformation has significantly BCP Business & Management
FEMS 2022
Volume 19 (2022) 318 improved the economic benefits of real enterprises. Moreover, international digital transformation policies have reference significance for promoting the digital transformation of domestic real enterprises.
As for the impact of digital transformation on enterprise innovation, the existing literature mainly focuses on its mechanism. Xu Meng (2020) believes that digital transformation is the external motivation to promote enterprise innovation, and enterprise innovation is the internal demand to realize digital transformation. Lou Runping et al. (2022) believe that the total amount of enterprise digital investment has a statistically significant positive effect on enterprise innovation performance, digital software investment has a significant positive effect on innovation performance of technologyintensive manufacturing enterprises, and human capital has a partial intermediary effect. Chi Maomao et al. (2022) believe that digital transformation is a key and necessary condition for enterprise innovation performance, and digital transformation has a complete mediating role between IT capability and product innovation performance. Xiao Jinghua etc. (2018) to build from the perspective of enterprise and the consumer co-evolution ordinary consumers to participate in research and development of the large data environment theory framework, think consumers is an important participant of enterprise innovation, and the participation can be divided into two kinds of research and development innovation model, namely the user oriented form of data driven and the model of designer guide formation of the data to support. In the above study on the influence of enterprise digitization on enterprises, enterprise digitization will promote enterprise innovation from different aspects. However, Wu Fei et al. (2021) proved that fiscal science and technology expenditure has a promoting effect on enterprise digital transformation, and this effect is affected by enterprise attributes and regional differences. At present, there are few articles from the perspective of regional marketization degree to study the positive impact of enterprises' digital transformation under the difference of marketization degree in different regions. This paper aims to study whether enterprises' digital transformation has a complementary or synergistic effect on regional marketization.
Theoretical Analysis and Hypothesis Formulation
Digitization helps improve the level of internal corporate governance of enterprises (He Quanquan, 2021).Businesses use big data technology, artificial intelligence, such as block chain innovation, help enterprises to set up management cockpit, easy access to the enterprise core data, combined with industry data efficient sequential, insight into business, early warning and abnormal data in time, reduce the risk of enterprise development, reduce the uncertainty of enterprise management, mass transfer efficiency, help enterprise to establish the core competitiveness, Lay a solid foundation for enterprise development. Digitization helps improve the investment efficiency of enterprises (Zhai Shuping et al., 2022)."Digitization" is conducive to timely acquisition of customer demand by enterprises, which can reflect the external environment much faster than traditional enterprises, enabling them to survive in the fierce competition and fickle market environment and maintain continuous competitiveness. Therefore, digital enterprises are easier to obtain market demand, actively carry out research and development of new products, and grasp market opportunities. Digital transformation requires enterprises to invest in digital equipment. As for the use of equipment, talents with more digital knowledge reserves are required to operate and design the use scheme. Digital human capital can provide more innovation resources for enterprises with its own knowledge, increase the number of patents of enterprises and promote R&D and innovation. By using digital technology, enterprises can more quickly collect market information, integrate resources, and analyze market pain points. Based on this, enterprises can use R&D and innovation to inject technology into products and services, so as to better meet consumer needs, expand the market and improve economic benefits. Therefore, this paper argues that enterprise digital transformation can promote enterprise R&D and innovation.
H1: Enterprise digital transformation has a positive impact on enterprise R&D and innovation.
Digital technology needs to acquire a large amount of market information and analyze it. Areas with high marketization degree often have developed economy, dense population and large number of enterprises, and there is a large amount of complex information in the market. In such areas, enterprises need to correctly deal with market information in order to adapt to the market. Simple market research is difficult to locate the large-scale market. Therefore, it is easier for enterprises to accurately locate market demand by using digital technology to process market information. With the popularization of digital technology among enterprises, market analysis is also popular, and the advantage of market information processing in enterprise competition is no longer obvious. In order to gain competitive advantages, enterprises will invest in research and development according to market analysis, apply new technologies, develop new products and increase the added value of products to better meet consumer demand, or even create new consumer demand by using new technologies and new products. However, in areas with low marketization degree, where population density and enterprise density are not as high as those with high marketization degree, the competitive pressure of the local market is relatively small. Enterprises can accurately position the market by using digital technology to process local market information, and enterprises are less motivated to further increase research and development to expand the market. Therefore, this effect is not obvious in areas with low degree of marketization. Therefore, this paper argues that in regions with high degree of marketization, enterprise digital transformation has a high degree of impact on enterprise R&D innovation, and vice versa.
H2: In regions with different degrees of marketization, digital transformation of enterprises has different degrees of influence on R&D and innovation of enterprises, and this influence is more obvious in regions with high degrees of marketization.
Data Sources
This paper selects the data of A-share listed companies in Shanghai and Shenzhen from 2016 to 2020 as the analysis object. The original data comes from CSMAR database, and the marketization degree index comes from Yan Meng et al. (2021). The marketization degree is divided by the median of factor marketization index ranking of each province. In order to improve the regression quality, this paper processed the data as follows: First, eliminate the listed companies with ST, *ST and terminated listing status; Secondly, enterprises with missing key data should be eliminated and only the majority of samples without missing data should be retained. Third, in order to reduce the interference degree of outliers, all continuous variables at the micro level are tailed 1% and 99%. Finally, 17,296 samples were obtained in this paper.
Variable Setting
(1) Explained variables Enterprise Research and Development Innovation (ERDI). The relevant characteristics of enterprise R&D innovation are highlighted in enterprise R&D investment. Referring to the research of Shen Guobing et al. (2022), this paper chooses R&D density (RDS), an indicator closely related to R&D investment, to measure the level of enterprise R&D innovation. R&D density is the ratio of an enterprise's R&D investment to its total operating income (Guellec et al.,2001). In order to make the data more intuitive, this paper represents R&D density as the result of 100 times itself. With the help of CSMAR, we obtained the R&D investment and operating income of A-share listed companies in Shanghai and Shenzhen from 2016 to 2020, and the ratio of the two was used to obtain the data of enterprise R&D density.
(2) Core explanatory variables Enterprise Digital Transformation (DT). When studying the degree of enterprise digital transformation, this paper refers to the research of Wu (3) Adjustment variables In order to verify the influence of marketization degree on the relationship between enterprise digital transformation and R&D innovation, the moderating variable of marketization degree (MD) was set up in this paper. The degree of marketization (MD) is described by "0-1" variable. The enterprise in the region with high marketization degree is 1, and the enterprise in the region with low marketization degree is 0. (2015), the control variables selected in this paper include return on equity (ROE), asset-liability ratio (AL), capital intensity (CI) and government subsidies (GS).
Model Setting
In order to verify the positive impact of enterprise digital transformation on R&D innovation, and to study the degree of influence of regions with different degrees of marketization on the positive impact of enterprise digital transformation on R&D innovation, the following "enterprise digital transformation --R&D innovation" model is set and tested. , = 0 + 1 , + 2 , + 3 , × , + 4 , + 5 , + 6 , + 7 , + , Among them, R&D density (RDS) is used to measure the explained variable enterprise R&D innovation (ERDI); Enterprise digital transformation (DT) is the core explanatory variable.MD is the digital transformation of the moderating variable; ROE, AL, CI and GS were control variables. Is the random error term.In addition, the model uses firm individual and annual fixed effects to control firm time-invariant factors and annual effects.
In order to reduce variable measurement errors, the robustness test is carried out by substituting the explained variable and the core explanatory variable. The substitution variables of the explained variables are ENTERPRISE R&D intensity (RDST) and number of R&D personnel (RDP), and the substitution variables of the core explanatory variables are the occurrence of five digital transformation keywords of the enterprise respectively. The occurrence is regarded as 1, and the absence is regarded as 0. Due to limited space, only partial robustness test results are listed in this paper. Table 2 shows the descriptive statistical results of the whole country. From the perspective of R&D density, the mean value of RDS is 1.640, the standard deviation is 4.271, the minimum value and maximum value are 0 and 22.90 respectively, indicating that the R&D investment of enterprises is not high in general, and the R&D investment density of different enterprises varies greatly. The mean value of DT is 10.55, the standard deviation is 22.29, and the minimum value and maximum value are 0 and 133 respectively, indicating that the degree of digital transformation of enterprises is not high in general. There are still some enterprises that have not used digital technology, and the degree of digital transformation of different enterprises also varies to some extent. The mean value of MD is 0.787, indicating that the number of enterprises in areas with high marketization degree is larger than that in areas with low marketization degree. The mean value of ROE is 0.0632, standard deviation is 0.132, minimum value and maximum value are -0.646 and 0.364 respectively, indicating that the profitability of different enterprises varies greatly, and some enterprises show a loss state. The mean value of AL was 0.421, the standard deviation was 0.209, and the minimum and maximum values were 0.927.Asset-liability ratio reflects the long-term debt paying ability of enterprises. If the differences between industries are not taken into account, enterprises with small asset-liability ratio have low long-term debt paying pressure and can invest more funds to promote the digital transformation of enterprises. The mean value of CI is 3.377, the standard deviation is 5.673, and the minimum value and maximum value are 0.425 and 42.22 respectively. This variable reflects the difference between industries. There is an obvious difference in capital intensity between different industries and enterprises. The likelihood of transforming companies digitally to achieve higher profits is also higher. The mean value of GS is 16.07, the standard deviation is 3.217, and the minimum value and maximum value are 0 and 20.49 respectively. Before taking the logarithm of this variable, there is a great difference in the government subsidies obtained by enterprises, and the scale of government subsidies obtained by enterprises is relatively large in general. Table 3 and Table 4 respectively show the descriptive statistical results of enterprises in regions with high and low marketization degree. By contrast, the number of enterprises in areas with high marketization degree is much more than that in areas with low marketization degree. In terms of research and development density and degree of digital transformation, high degree of marketization of the mean value, maximum value is greater than the low marketization degree of region, and the standard deviation, on the other hand, shows that in high degree of marketization, enterprise development density and marketization degree is higher than generally low marketization degree, low marketization degree in enterprise development density and the digitized transformation degree is generally low. In terms of control variables ROE, AL, CI and GS, the mean value, standard deviation, maximum value and minimum value of enterprises in regions with high and low marketization degree are not significantly different. Table 4, Table 5 and Table 6 respectively show the correlation analysis results of enterprises in the whole country, areas with high marketization degree and areas with low marketization degree. Pearson correlation analysis is in the lower left corner and Spearman correlation analysis is in the upper right corner. From the perspective of correlation analysis, digital transformation DT and R&D investment density RDS are significantly positively correlated, which preliminarily verifies hypothesis 1.The correlation coefficients between explanatory variables and control variables were all less than 0.5, which preliminarily proved that there was no serious collinearity between variables. In addition, the correlation coefficient between DT and RDS of digital transformation of enterprises in areas with high marketization degree is greater than that in areas with low marketization degree, which initially verifies hypothesis 2.
Baseline Regression
This part conducts benchmark regression to investigate whether the digital transformation of enterprises has a positive impact on the digital transformation and the difference in the impact degree of different regions. Table 8 report the regression results, of which the column (1) (2) the report did not control the year respectively, individual and individual control of the year, under the fixed effects digital transformation of enterprises and the relationship between the innovation, it can be seen that can promote innovation enterprise digital transformation, both show significant positive correlation, hypothesis 1 is proved. Column (3) reported after joining regulating variable digital transformation of enterprises and the relationship between the innovation, on the basis of the column (3), column (4) the inspection of joined by a DT * MD, from the column (3) (4) the regression results can be seen, be explained variables RDS and adjust MD and handed in by DT * MD exist significant correlation, This indicates that the positive impact of digital transformation on enterprise R&D innovation is influenced by the marketization degree of moderating variables, which further verifies hypothesis 2.
Robustness Test
In order to further verify the influence of marketization degree on the relationship between digital transformation and enterprise R&D innovation, this paper grouped the total data according to the marketization degree of enterprises' regions, and conducted a robustness test on the influence of digital transformation on enterprise R&D innovation. As shown in Table 9, columns (3) and (4) respectively report the regression results of uncontrolled and controlled Individual fixed effect of enterprises located in areas with high marketization degree. Columns (5) and (6) respectively report the regression results of uncontrolled and controlled Individual fixed effect of enterprises located in areas with low marketization degree. It can be seen that there is a significant positive relationship between digital transformation and enterprise R&D innovation in areas with high marketization degree, and a significant positive relationship between digital transformation and enterprise R&D innovation in areas with low marketization degree without controlling individual fixed effect. Without control individual fixed effects under the condition of high degree of marketization in DT coefficient is greater than the low degree of marketization in DT coefficient, indicating that high degree of marketization is digital transition area and the influence degree of the enterprise innovation relations is greater than the low marketization degree of regional digital transformation and the influence degree of the enterprise innovation relations, more to verify the hypothesis 2.
This paper argues that a higher degree of marketization can effectively promote the free flow of capital, labor and other factors between regions (Gao Haifeng, 2022). Regions with a high degree of marketization have obvious geographical advantages such as resource, market and transportation due to the concentration of enterprises, and the government subsidies obtained by enterprises can be used more for R&D and innovation. And in low degree of marketization of the region, the enterprise needs to use government subsidies in such aspects as resources, market, traffic condition of to make up for these disadvantages, on research and development innovation to some less stable, Therefore, in regions with low degree of marketization, the impact of government subsidies on promoting R&D and innovation in enterprises' digital transformation is far less than that in regions with high degree of marketization. In order to improve the robustness of the test results, the explained variables and core explanatory variables were replaced and the robustness test was carried out. Due to space limitation, this paper only lists the regression results after the replacement of two variables, that is, the number of subdivision indicators of blockchain technology (BCT) and big data technology (BDT) appearing in the annual report of the enterprise instead of the core explanatory variable digital transformation (DT).The rest of the substituted variables have also been verified and the results are similar to those listed. Table 10, column (1) (2) respectively report to all of the samples to replace the core variable regression results, column (3) (4) report a high degree of marketization of area of samples to replace the core variable regression results, column (5) (6) samples reported low degree of marketization of the region, to replace the core variable regression results. From the point of view of regression results, it is basically consistent with the results of the benchmark regression results report. It can be clearly seen that in the overall sample, digital transformation and enterprise R&D innovation are significant at 1% level; In grouping samples, the samples of marketization of high and low marketization areas without control individual under the condition of fixed effects, digital transformation and enterprise innovation significant at the 1% level, under the condition of fixed effects in the control of individuals, marketization of high area digital transformation of enterprises and enterprise innovation significant at the 5% level, low marketization areas there are no significant correlation, Hypothesis 2 is further verified. Table 10. Robustness test 2
Heterogeneity Test
In the process of digital transformation, enterprises of different natures may have different degrees of reflection of R&D innovation. Therefore, in accordance with the classical research method, this paper will divide enterprises into state-owned enterprises and non-state-owned enterprises according to the nature of ownership, and conduct tests respectively to verify the difference between enterprises with different attributes in the positive impact of enterprise digital transformation on R&D innovation. In Table 4, columns (1), (2) and (3) respectively report the degree of influence on R&D innovation of all non-state-owned enterprises in the sample, non-state-owned enterprises in areas with high marketization degree and non-state-owned enterprises in areas with low marketization degree. Columns (4), (5) and (6) respectively report the degree of influence on R&D innovation of all State-Owned enterprises in the sample, state-owned enterprises in areas with high marketization degree, and state-owned enterprises in areas with low marketization degree. It can be seen that the impact of non-state-owned enterprises' digital transformation on R&D innovation is much higher than that of state-owned enterprises. As can be seen from the internal comparison results of the two types of enterprises with equity nature, the regression results of non-state-owned enterprises are more consistent with the conclusions of hypothesis 1 and Hypothesis 2, but the regression results of stateowned enterprises are not. This paper argues that the political relevance for the state-owned enterprises to provide the institutional protection, compared to the private enterprises, state-owned enterprises are more likely to use commercial credit access to bank financing, by improving the management to improve enterprise management status, and use the political relevance policy information and market information in advance, in order to establish market dominance (national, Huang Dongya, 2018), Therefore, for state-owned enterprises, digital transformation has less impact on their R&D investment. State-owned enterprises is the state rather than the private ownership, has "the stateowned capital advantage", can rapidly to form a stronger competitive advantage position in stateowned enterprises in the industrial chain, state-owned enterprises advantage mainly originates from monopoly and preferential policies (Nie Huihua, 2015), the difficulty of make big profits is low, on research and development innovation in order to improve the competitive power is insufficient, Therefore, the digital transformation of state-owned enterprises has less impact on R&D innovation than non-state-owned enterprises. And dominant position of the state-owned enterprises in the industrial chain is more noticeable in every region, and in high degree of marketization, the factors affecting the innovation of state-owned enterprises, more digital transformation impact on innovation is not outstanding, therefore low marketization degree areas appear even the impact of digital transformation of innovation is higher than the high degree of marketization of the phenomenon of region, But this was only in a small number of samples, and most of the samples still followed the general pattern.
Conclusions and Recommendations
Based on the study of Shanghai and Shenzhen two city of a-share listed companies from 2016 to 2020 data, pioneering combine enterprise digital transformation and innovation, enterprise digital transformation -R&D innovation model is set up and carry on the empirical analysis and economic interpretation, but also the heterogeneity of robustness test and inspection, to improve the rigour of the study. In this paper, the following conclusions are drawn: First, when enterprises use digital technology, they also carry out digital transformation. Regardless of regional differences, enterprises' digital transformation has a positive impact on enterprises' R&D innovation, and this conclusion is still valid after the robustness test and heterogeneity test. Second, considering the regional differences, different regional economic development level to the enterprise the influence degree of the digital transformation of enterprise innovation, in a high degree of marketization of digital transformation of enterprises to the enterprise innovation degree is higher, the influence of the low degree of marketization of digital transformation of enterprises to lower the influence degree of the enterprise research and development innovation. This conclusion was verified again in robustness test and heterogeneity test. Third, the degree of influence of ownership nature on enterprise digital transformation on enterprise R&D innovation is different. Non-state-owned enterprise digital transformation has a higher degree of influence on R&D innovation, while state-owned enterprise digital transformation has a lower degree of influence on R&D innovation. This paper proposes the following policy implications: First, digital transformation is the general trend and can improve the level of R&D innovation. Under the national innovation driven development strategy, enterprises should increase investment in digital, using artificial intelligence technology, block chain technology, cloud computing, big data technology, such as digital technology, digital talent, digital management, digital production, complete digital transformation, realize the luxuriant turned, improve the level of enterprise innovation, and improve enterprise economic benefits. Second, the degree of marketization of higher areas, top clustering enterprises, the market competition pressure, digital transformation can improve the level of enterprise innovation, improve product technology content, into the higher level of the market, and expand market share, improve market competitiveness, but also can prevent the backward production capacity and elimination. In regions with a relatively low degree of marketization, enterprises have a wide space for development. Although enterprises in this region can improve their R&D and innovation level through more ways, it is clear that improving their R&D and innovation level through digital transformation is usually a feasible way. In short, whether in the marketization degree is high or low, the digital transformation for an enterprise to raise the level of innovation is a feasible path, for state-owned enterprises, to deepen the reform of state-owned enterprises, on the premise of guarantee the state-owned capital holding as much as possible into social capital, improve the state-owned enterprise vitality, better by implementing the digital transformation to raise the level of innovation; For non-state-owned enterprises, enterprises should take the initiative to carry out digital transformation. Meanwhile, the government should also encourage, support and guide non-state-owned enterprises to realize digital transformation according to the specific conditions of different regions, improve market vitality, promote the better development of socialist market economy, and build a strong country in scientific and technological innovation. | 7,303.8 | 2022-05-31T00:00:00.000 | [
"Business",
"Economics",
"Computer Science"
] |
Influence of Acids and Alkali as Additives on Hydrothermally Treating Sewage Sludge: Influence on Phosphorus Recovery, Yield, and Energy Value of Hydrochar
: The high moisture content present in sewage sludge hinders the use of sewage sludge in incineration or energy application. This limitation of moisture present in sewage sludge can be obviated by using the hydrothermal carbonization (HTC) process. In sewage sludge management, the HTC process requires less energy compared to other conventional thermo–chemical management processes. The HTC process produces energy-rich hydrochar products and simultaneously enables phosphorus recovery. This study investigates the influence of organic acids, inorganic acid, and alkali as additives on phosphorus transformation, yield, proximate analysis and the heating value of subsequently produced hydrochar. The analysis includes various process temperatures (200 ◦ C, 220 ◦ C, and 240 ◦ C) in the presence of deionized water, acids (0.1 M and 0.25 M; H 2 SO 4 , HCOOH, CH 3 COOH), and alkali (0.1 M and 0.25 M; NaOH) solutions as feed water. The results show that phosphorus leaching into the process-water, hydrochar yield, proximate analysis, and the heating value of produced hydrochar is pH- and temperature-dependent, and particularly significant in the presence of H 2 SO 4 . In contrast, utilization of H 2 SO 4 and NaOH as an additive has a negative influence on the heating value of produced hydrochar.
Introduction
The management of sewage sludge produced from wastewater treatment plants is an important global issue due to the presence of high moisture content, harmful pathogens, and poor dewaterability. Conventional sewage sludge management involves the direct application on farmland as fertilizer. However, sewage sludge has attracted greater attention as a feedstock for nutrient recovery and renewable biofuels production [1,2]. In the year 2018, about 23% of sewage sludge produced in Germany was managed by applying directly on farmland, and about 65% of the produced sewage sludge was incinerated [3]. Since 2017, new regulation was placed by the German sewage sludge ordinance (AbfKlärV) based on enabling principles of the Circular Economy Act [4] on sewage sludge management. This new regulation is not only making it mandatory to recover phosphorus from sewage sludge in Germany but also prohibits the direct use of sewage sludge on farmland [5]. According to AbfKlärV, sewage sludge must undergo mandatory phosphorus recovery if the phosphorus content is ≥20 g/kg total dry matter (DM) or ≥2% DM. The thermal pretreatment of sewage sludge is still possible; however, the subsequent recovery of phosphorus in the produced incinerated ash or the carbonaceous residue has to be guaranteed. This new obligation applies from January 2029 for the wastewater treatment plants with size >100,000 populations equivalent (PE). The treatment facilities with >50,000 PE must The results showed that during the HTC of sewage sludge, metal cations and pH played vital roles in the transformation of phosphorus. The observation made by Ekpo et al. (2016) shows that 94% of the phosphorus in feedstock was recovered into the process-water after hydrothermal treatment of pig manure with a sulfuric acid additive at 170 • C. Reza et al. (2015) [15] studied the influence of using acid and alkali additives on the HTC of wheat straw. However, the behavior of phosphorus transformation greatly differs from the types of biomass, and HTC process conditions and techniques.
The main purpose of this study is to investigate and compare the influence of organic acid (acetic acid formic acid), inorganic acids (sulfuric acid), and alkali (sodium hydroxide) as additives on the hydrothermal treatment of sewage sludge. Despite the primary objective of this study being to understand the influence of different additives on the P transformation during HTC of sewage sludge, this study also shed light on the effect of additives on dewaterability, yield, and heating value of hydrochar.
Material
The sewage sludge used in this study was obtained directly from the wastewater treatment plant, Rostock, Germany. The central wastewater treatment plant in Rostock treats both industrial (1/3) and municipal wastewater (2/3) with the capacity to treat wastewater from 320,000 inhabitants [22]. The freshly digested and dewatered sludge was collected in an airtight specimen container and transported immediately to the laboratory. The sewage sludge, after being received in the laboratory, is refrigerated at 4 • C before use. The refrigerated representative samples were directly taken for HTC investigation and respective additives of deionized water, organic acid, inorganic acids, and alkali with known concentration were added and mixed to make a homogeneous slurry. The respective additive solution of 0.1 and 0.25 M concentration was prepared by diluting acetic acid (100%, p.a.), formic acid (≥98.0%, p.a.), sulfuric acid (1 M), and sodium hydroxide (≥97.0% (T), pellets) in the deionized water. The produced additive solution was used on the same day of preparation. The ultimate analysis of the sewage sludge was performed using an organic elemental analyzer by following EN ISO 16948, 2015. Proximate analysis was performed using a LECO Thermogravimetric Analyser (TGA) unit TGA701 to determine moisture content, volatile organic compound, fixed carbon and ash content. The heating value of the sewage sludge and resulting char was determined by Parr 6400 calorimeter (Parr Instruments Inc., Moline, IL, USA) following the method described in EN 14918, 2010. Total phosphate in the obtained sewage sludge was analyzed in an external laboratory following the method described in EN ISO 11885, 2009. All measurements were made in duplicate, and the mean value is reported.
Hydrothermal Carbonisation Treatment
Hydrothermal carbonization of sewage sludge was carried out in a Parr 4523 reactor (Parr Instrument (Deutschland) GmbH, Zeilweg 15, Frankfurt, Germany) at an autogenic pressure. The processing unit 4523 consists of a reaction vessel of 1-L capacity that can withstand a maximum pressure of 138 bar, a heating jacket equipped with a 2 kW heating coil, a temperature and a pressure sensor, and a stirrer with an attached motor. The reactor temperature and the speed of the stirrer were controlled using a Parr 4848 PID reactor controller. Figure 1 provides an overview of the experimental methodology. The analysis was carried out by charging the reactor with 297.00 g raw sewage sludge (23.5% DM) and it was topped up with 402.00 g of deionized water or additive solution of acetic acid, formic acid, sulfuric acid, or sodium hydroxide in 0.1 M or 0.25 M concentration. The sewage sludge and additives were mixed homogeneously inside the reactor before starting the investigation, and the initial pH of mixed feedstock slurry was noted using WTW pH 3310 m. The defined ratio of sewage sludge to additives was used to produce a homogeneous slurry of 10% DM. The mixture was hydrothermally carbonized at autogenic pressure with a constant heating rate of 4 K/min. The investigation was carried out with the varying Processes 2021, 9, 618 4 of 14 temperature of 200 • C, 220 • C, and 240 • C for a retention time of 2 h while keeping the stirrer switched on during the entire process. Later, the reactor was allowed to cool down to room temperature without any additional cooling mechanism.
Processes 2021, 9, 618 4 of 14 investigation, and the initial pH of mixed feedstock slurry was noted using WTW pH 3310 m. The defined ratio of sewage sludge to additives was used to produce a homogeneous slurry of 10% DM. The mixture was hydrothermally carbonized at autogenic pressure with a constant heating rate of 4 K/min. The investigation was carried out with the varying temperature of 200 °C, 220 °C, and 240 °C for a retention time of 2 h while keeping the stirrer switched on during the entire process. Later, the reactor was allowed to cool down to room temperature without any additional cooling mechanism.
Product Recovery and Analysis
The final pH of the HTC-slurry obtained after HTC of sewage sludge was noted, and the resulting hydrochar and process-water were separated by using a vacuum filtration apparatus. Vacuum filtration was carried out at the constant process conditions using a top-feeding procedure in a Büchner funnel. The following rules were kept constant for solid-liquid separation of the HTC-slurry and analyzing the dry matter concentration of hydrochar: (1) entire content HTC-slurry after carbonization was poured into the Büchner funnel, (2) the vacuum pump was switched on to generate the vacuum pressure for solidliquid separation, (3) the solids (hydrochar) thus obtained using vacuum filtration were oven-dried at 105 °C for 24 h and stored in sealed containers for further analysis or usage. Similarly, process-water produced after filtration was collected and stored in a volumetric flask and refrigerated at 4 °C until it was analyzed for total phosphorus (TP) concentration and conductivity.
The yield of the produced hydrochar is calculated as explained in Equation (3). The lower heating value (LHV) of the hydrochar was determined in a similar way to sewage sludge using a Parr 6400 calorimeter following the method described in EN 15170, 2010. TP in the process-water was analyzed spectrophotometrically after acid hydrolysis and oxidation following EN ISO 6878, 2004. The conductivity of the process-water was measured using a Hach HQ 40 d multifunction meter. By determining the conductivity, it was possible to understand the variations of salt content in the process-water produced at different process parameters. Triplicates of all analyzed results were obtained and the mean value was reported.
The Fraction Phosphorus Recovered on Hydrochar
The TP recovered on the hydrochar was mathematically calculated using the experimental data obtained on TP concentration in sewage sludge and process-water, and the total yield of the hydrochar after HTC. TP recovered from the hydrochar ( ( ) ) can be mathematically defined as follows:
Product Recovery and Analysis
The final pH of the HTC-slurry obtained after HTC of sewage sludge was noted, and the resulting hydrochar and process-water were separated by using a vacuum filtration apparatus. Vacuum filtration was carried out at the constant process conditions using a top-feeding procedure in a Büchner funnel. The following rules were kept constant for solid-liquid separation of the HTC-slurry and analyzing the dry matter concentration of hydrochar: (1) entire content HTC-slurry after carbonization was poured into the Büchner funnel, (2) the vacuum pump was switched on to generate the vacuum pressure for solidliquid separation, (3) the solids (hydrochar) thus obtained using vacuum filtration were oven-dried at 105 • C for 24 h and stored in sealed containers for further analysis or usage. Similarly, process-water produced after filtration was collected and stored in a volumetric flask and refrigerated at 4 • C until it was analyzed for total phosphorus (TP) concentration and conductivity.
The yield of the produced hydrochar is calculated as explained in Equation (3). The lower heating value (LHV) of the hydrochar was determined in a similar way to sewage sludge using a Parr 6400 calorimeter following the method described in EN 15170, 2010. TP in the process-water was analyzed spectrophotometrically after acid hydrolysis and oxidation following EN ISO 6878, 2004. The conductivity of the process-water was measured using a Hach HQ 40 d multifunction meter. By determining the conductivity, it was possible to understand the variations of salt content in the process-water produced at different process parameters. Triplicates of all analyzed results were obtained and the mean value was reported.
The Fraction Phosphorus Recovered on Hydrochar
The TP recovered on the hydrochar was mathematically calculated using the experimental data obtained on TP concentration in sewage sludge and process-water, and the total yield of the hydrochar after HTC. TP recovered from the hydrochar (TP (h) ) can be mathematically defined as follows: where TP (pw) and TP ( f s) is the TP content in process-water and the initial feedstock slurry, respectively, and Y (pw) is the total yield of the process-water after filtration, which is calculated as shown in Equation (2): where m is the total weight of the feedstock, m o is the dry weight of the feedstock, DM (h) is the dry matter percentage of hydrochar after filtration, and Y (h) is the yield (%) of produced hydrochar and was calculated as the applied formula.
where m h is the total dry weight of produced hydrochar.
Characteristic of Sewage Sludge
The results of the proximate and ultimate analysis of sewage sludge are presented in Table 1. The moisture content of sewage sludge was determined to be 76.53%, leaving behind the total solids content of 23.48%. The analysis also demonstrates noticeably lower ash content of 32.83% DM and higher volatile solids (VS) of 61.46% DM, which was inconsistent with the previous investigation ranges [6,23]. The ultimate analysis of the sewage sludge specified the typical C-H-N-S-O content for sewage sludge in Germany [24] with C: 32.5; H: 5.0; N: 4.98; S: 1.50; and O: 21.4 on a dry basis. The dry sewage sludge is known to contain a higher concentration of phosphorus and a relatively higher heating value. The TP content in the feedstock was determined to be 36.1 g/kg, accounting for 3.6% of total dry sludge, and the heating value was observed to be relatively higher with 13.56 MJ/kg (LHV) in comparison with previous studies [6,23,24]. One possible explanation for increased LHV can be the presence of higher volatile solids and lower ash content. Nevertheless, the overall characteristics of the feedstock have the typical composition of sewage sludge in Germany. Figure 2 compares the total yield (%) of hydrochar produced at different temperatures using various additives. An increase in the process temperature from 200 • C to 240 • C has decreased the hydrochar yield on average by about 10%, which agrees with the earlier investigation results demonstrating a decrease in hydrochar yield with an increase in reaction temperature [25,26]. The maximum hydrochar yield was observed with the carbonization Processes 2021, 9, 618 6 of 14 method using inorganic acid as additives in comparison with the carbonization method using organic acid, alkali, and deionized water as additives. The maximum hydrochar yield of 69.09% was achieved using a 0.25 M H 2 SO 4 additive in feedstock (pH 3.78), with the carbonization temperature of 200 • C and 2 h retention time. In contrast, the same reaction temperature and retention time, using 0.25 M CH 3 COOH (pH 5.44), HCOOH (pH 5.38), NaOH (pH 10.68) and deionized water (pH 7.8) additives has resulted in the hydrochar yield of 62.74%, 63.79%, 55.47%, and 59.66%, respectively. Nevertheless, it is interesting to see that at the lower additive concentration (0.1 M), despite having comparatively similar initial pH range (5.8-6.3) of sewage sludge slurry prepared using CH 3 COOH (pH 6.3), HCOOH (pH 6.2), and H 2 SO 4 (pH 5.8), hydrochar yield was significantly higher with using H 2 SO 4 as an additive in comparison with other organic acids. Figure 2 compares the total yield (%) of hydrochar produced at different temperatures using various additives. An increase in the process temperature from 200 °C to 240 °C has decreased the hydrochar yield on average by about 10%, which agrees with the earlier investigation results demonstrating a decrease in hydrochar yield with an increase in reaction temperature [25,26]. The maximum hydrochar yield was observed with the carbonization method using inorganic acid as additives in comparison with the carbonization method using organic acid, alkali, and deionized water as additives. The maximum hydrochar yield of 69.09% was achieved using a 0.25 M H2SO4 additive in feedstock (pH 3.78), with the carbonization temperature of 200 °C and 2 h retention time. In contrast, the same reaction temperature and retention time, using 0.25 M CH3COOH (pH 5.44), HCOOH (pH 5.38), NaOH (pH 10.68) and deionized water (pH 7.8) additives has resulted in the hydrochar yield of 62.74%, 63.79%, 55.47%, and 59.66%, respectively. Nevertheless, it is interesting to see that at the lower additive concentration (0.1 M), despite having comparatively similar initial pH range (5.8-6.3) of sewage sludge slurry prepared using CH3COOH (pH 6.3), HCOOH (pH 6.2), and H2SO4 (pH 5.8), hydrochar yield was significantly higher with using H2SO4 as an additive in comparison with other organic acids. The increases in reaction temperature will directly influence eliminating the moisture content in the biomass structure as the effect of hydrolysis reaction and simultaneously foster biomass degradation; this, in turn, decreases hydrochar yield [26,27]. Further, the investigation conducted by Jaruwat et al. (2018) has shown that a longer retention time will increase the yield of the hydrochar as the result of repolymerisation of decomposed biopolymers.
Effect of Additives and Reaction Temperature on Yield of Hydrochar
Similar to reaction temperature and retention time, the addition of additives also influences the yield of the hydrochar. Temperature undoubtedly has a greater influence on the mass yield of hydrochar. Nevertheless, similar to temperature, despite having similar pH, retention time, and reaction temperature, using inorganic acid has increased the hydrochar yield in comparison to using organic acids or alkali as additives. An increase in the hydrochar yield can be co-related with the higher molecular mass of H2SO4 and changing pH due to strong acid additive utilization in comparison with the utilization of CH3COOH, HCOOH, NaOH, and deionized water as an additive.
Effect of Additives and HTC Process Conditions on Solid-Liquid Separation
The dry matter concentration of the various hydrochar residue after filtering the process-water using a vacuum filter at the constant process conditions (top-feeding procedure with a Büchner funnel) is depicted in Figure 3. The HTC treatment was advantageous The increases in reaction temperature will directly influence eliminating the moisture content in the biomass structure as the effect of hydrolysis reaction and simultaneously foster biomass degradation; this, in turn, decreases hydrochar yield [26,27]. Further, the investigation conducted by Jaruwat et al. (2018) has shown that a longer retention time will increase the yield of the hydrochar as the result of repolymerisation of decomposed biopolymers.
Similar to reaction temperature and retention time, the addition of additives also influences the yield of the hydrochar. Temperature undoubtedly has a greater influence on the mass yield of hydrochar. Nevertheless, similar to temperature, despite having similar pH, retention time, and reaction temperature, using inorganic acid has increased the hydrochar yield in comparison to using organic acids or alkali as additives. An increase in the hydrochar yield can be co-related with the higher molecular mass of H 2 SO 4 and changing pH due to strong acid additive utilization in comparison with the utilization of CH 3 COOH, HCOOH, NaOH, and deionized water as an additive.
Effect of Additives and HTC Process Conditions on Solid-Liquid Separation
The dry matter concentration of the various hydrochar residue after filtering the process-water using a vacuum filter at the constant process conditions (top-feeding procedure with a Büchner funnel) is depicted in Figure 3. The HTC treatment was advantageous to sludge dewatering. The dry matter concentration of hydrochar residue after solid-liquid separation increased significantly after the HTC reaction and the use of H 2 SO 4 as an additive, significantly favored dewatering. When 0.25 M H 2 SO 4 solution was used as an additive, the dry matter of hydrochar residue was 27.68-31.75%, which was significantly higher in comparison with using deionized water as an additive (20.70-24.83%). The influence of H 2 SO 4 in enhancing the dewaterability of sewage sludge has also been explained previously [3]. The use of organic acids as an additive did not show any greater Processes 2021, 9, 618 7 of 14 difference in the dry matter of hydrochar residue (20.68-26.38%) in comparison with using deionized water as an additive. In contrast, the use of NaOH as an additive had considerably decreased the dry matter of hydrochar residue (1. .51%) at the lower reaction temperature (200 • C); however, at the higher reaction temperatures (220 • C and 220 • C), dry matter of hydrochar residue was higher (27.27-28.82%).
The extracellular polymeric materials in the sewage sludge contain viscous protein material that is extremely hydrophilic [28]. The effective way to enhance the sludge dewatering performance is by breaking the cell wall and destroying the sludge flocs to release and hydrolyze the organic matter present in sewage sludge. This phenomenon can be effectively achieved alongside the higher temperature and pressure that occur in the HTC process. The reduction in the binding force of the sludge particles achieved during the HTC process improves the dewatering performance after HTC and is significantly enhanced using H2SO4 in the reaction medium. In contrast, the use of NaOH additive at lower reaction temperature (200 °C) was not effective in hydrolyzing the organic matter. This could have influence in retaining of the viscous protein material in sewage sludge, making the HTC-slurry hard to dewater. Nevertheless, using NaOH additive at higher reaction temperatures (220 °C and 240 °C) was effective in hydrolyzing the organic matter present in sewage sludge.
Effect of Additives on Hydrochar Properties: Proximate Analysis and Heating Value
Following HTC, sewage sludge was carbonized into a brownish-grey solid hydrochar with a nutlike smell. The physical appearance of produced hydrochar implied that hydrochar had a uniform composition and could be readily molded into dense pellets. The proximate analysis and LHV were determined to understand the fuel characteristics of the produced hydrochar. Table 2 represents the results comprising volatile matter, ash content, fixed carbon, and LHV of various hydrochar produced at different process conditions. The hydrochar produced using various additives in this investigation had LHV The extracellular polymeric materials in the sewage sludge contain viscous protein material that is extremely hydrophilic [28]. The effective way to enhance the sludge dewatering performance is by breaking the cell wall and destroying the sludge flocs to release and hydrolyze the organic matter present in sewage sludge. This phenomenon can be effectively achieved alongside the higher temperature and pressure that occur in the HTC process. The reduction in the binding force of the sludge particles achieved during the HTC process improves the dewatering performance after HTC and is significantly enhanced using H 2 SO 4 in the reaction medium. In contrast, the use of NaOH additive at lower reaction temperature (200 • C) was not effective in hydrolyzing the organic matter. This could have influence in retaining of the viscous protein material in sewage sludge, making the HTC-slurry hard to dewater. Nevertheless, using NaOH additive at higher reaction temperatures (220 • C and 240 • C) was effective in hydrolyzing the organic matter present in sewage sludge.
Effect of Additives on Hydrochar Properties: Proximate Analysis and Heating Value
Following HTC, sewage sludge was carbonized into a brownish-grey solid hydrochar with a nutlike smell. The physical appearance of produced hydrochar implied that hydrochar had a uniform composition and could be readily molded into dense pellets. The proximate analysis and LHV were determined to understand the fuel characteristics of the produced hydrochar. Table 2 represents the results comprising volatile matter, ash content, fixed carbon, and LHV of various hydrochar produced at different process conditions. The hydrochar produced using various additives in this investigation had LHV in the range of 14.24-15.63 MJ/kg, which is similar to the results of earlier studies demonstrating fuel characteristics of hydrochar produced using sewage sludge [1].
The breaking down of biomass at higher temperatures to influence aromatization, polymerization, and condensation to produce hydrochar can be a reason for the increase in fixed carbon content (FC) with increasing reaction temperature [29]. Fixed carbon can be defined as combustible residue present in the char after the volatile matter burned. In general, biomass before carbonization contains high VS content and low FC, but high Processes 2021, 9, 618 8 of 14 moisture content [30]. Previously, several studies showed a strong correlation between FC content and calorific value; an increase in the FC content in char can directly increase the heating value of the char [30,31]. The use of H 2 SO 4 and NaOH as an additive has negatively influenced the LHV of the produced hydrochar in comparison with the hydrochar produced using organic acids and deionized water as an additive. Table 2. Proximate analysis and heating value of hydrochar produced using various additives and reaction conditions. (AC-Additive concentration; RT-Reaction temperature; Initial and final pH represents the pH of feedstock slurry before and after HTC. All hydrochars were produced at 2 h retention time.) The hydrochar produced using organic acids and deionized water as an additive increased the FC content (7.72-8.94%) in comparison to the FC content of the initial feedstock (5.71%). Here, the increase in the FC content in hydrochar might have influenced increasing the LHV (14.99-15.75 MJ/kg). However, the use of H 2 SO 4 as an additive at higher concentrations (0.25 M) negatively influenced the FC content of produced hydrochar (4.41-5.27%). Similarly, the use of NaOH as an additive at lower reaction temperatures (200 • C and 220 • C) had no noticeable influence on FC% (5.81-6.87%) in comparison with initial feedstock. On other hand, it was also observed that there was a significant decrease in the VS content and increase in the ash content after HTC of sewage sludge. The decrease in VS content can be attributed to the reaction severity and dissolution of organic material into the liquid phase, and an increase in the ash content can be correlated with the decrease in the mass percentage of VS composition of the hydrochar. However, it is interesting to perceive that the ash content in the hydrochar produced using H 2 SO 4 (0.25 M) as an additive is offset more by decreasing FC content than VS content; similar phenomena can also be seen with the hydrochar produced using NaOH as an additive at 200 • C and 220 • C. The decrease in the FC content with the use of H 2 SO 4 at higher concentrations and NaOH at lower reaction temperatures (200 • C and 220 • C) as an additive can explain the lower LHV in the respectively produced hydrochars. Figure 4 is the graphical representation of the conductivity of the process-water produced using different acids and alkali as additives. The conductivity measurement, in general, provides a reliable means to understand the ion concentration of a solution. The maximum conductivity of 24.10 µS/cm was observed in the process-water produced using alkali additive. Among acid-based additives, the utilization of inorganic acid (H 2 SO 4 ) as an additive had process-water with higher conductivity (17.4-20.12 µS/cm) in comparison with organic acid additives (CH 3 COOH and HCOOH). Figure 4 is the graphical representation of the conductivity of the process-water produced using different acids and alkali as additives. The conductivity measurement, in general, provides a reliable means to understand the ion concentration of a solution. The maximum conductivity of 24.10 µS/cm was observed in the process-water produced using alkali additive. Among acid-based additives, the utilization of inorganic acid (H2SO4) as an additive had process-water with higher conductivity (17.4-20.12 µS/cm) in comparison with organic acid additives (CH3COOH and HCOOH).
Effect of HTC Organic Acids, Inorganic Acids, and Alkali Additive on P-Transformation
3.6.1. The pH of Feedstock Slurry, before and after HTC Table 2 depicts the pH of the feedstock slurry before and after HTC at various temperatures, additives, and additive concentrations. The HTC process comprises hydrolysis, dehydration, decarboxylation, aromatization, and condensation polymerization [32]. During HTC, the pH of the feedstock slurry decreases as the result of the degradation of mac- 3.6. Effect of HTC Organic Acids, Inorganic Acids, and Alkali Additive on P-Transformation 3.6.1. The pH of Feedstock Slurry, before and after HTC Table 2 depicts the pH of the feedstock slurry before and after HTC at various temperatures, additives, and additive concentrations. The HTC process comprises hydrolysis, dehydration, decarboxylation, aromatization, and condensation polymerization [32]. During HTC, the pH of the feedstock slurry decreases as the result of the degradation of macromolecular organic matter into an acidic substance (viz., volatile fatty acids) and subsequent dissolution into the liquid phase. Further, the reaction time and temperature also influence the pH of the sludge hydrolysate. The use of organic acids and inorganic acid as additives resulted in the feedstock initial pH between 3.4 and 6.4. In contrast, the use of NaOH additive resulted in the feedstock initial pH ranging from 9.9-11.0. In the baseline condition, the deionized water additive has an initial pH of 7.8. The final pH represents the pH of the feedstock slurry after HTC. The experimental observation demonstrates that, regardless of variations in the initial pH, the final pH value after HTC always tends to move towards neutral. The obtained results were consistent with an idea that the acids formed during hydrolysis were subsequently decomposed or repolymerized at a higher temperature, which influences the pH of feedstock slurry after HTC [28]. Further, it is also possible that the buffering function of the sewage sludge might have a significant effect on the final pH. The obtained results of the shift in final pH towards neutral agree with several earlier HTC studies carried out on sewage sludge [6], swine manure [33], and wheat straw [15]. Figure 5 shows the concentration of TP in the process-water produced after the HTC of feedstock slurry at various temperatures. For each experiment, TP in the process-water was analyzed spectrophotometrically after acid hydrolysis and oxidation of the process-water sample. Further, the TP in the hydrochar was calculated mathematically using Equation (1). Figure 6 depicts the influence of additives, additive concentration, and pH of the feedstock slurry on the recovering TP from the raw feedstock into hydrochar after HTC at various temperatures. In brief, the results show that even at a similar pH, higher leaching of TP into process-water is achieved by the utilization of inorganic acid (H 2 SO 4 ) as an additive in comparison with organic acids. condition, the deionized water additive has an initial pH of 7.8. The final pH represents the pH of the feedstock slurry after HTC. The experimental observation demonstrates that, regardless of variations in the initial pH, the final pH value after HTC always tends to move towards neutral. The obtained results were consistent with an idea that the acids formed during hydrolysis were subsequently decomposed or repolymerized at a higher temperature, which influences the pH of feedstock slurry after HTC [28]. Further, it is also possible that the buffering function of the sewage sludge might have a significant effect on the final pH. The obtained results of the shift in final pH towards neutral agree with several earlier HTC studies carried out on sewage sludge [6], swine manure [33], and wheat straw [15]. Figure 5 shows the concentration of TP in the process-water produced after the HTC of feedstock slurry at various temperatures. For each experiment, TP in the process-water was analyzed spectrophotometrically after acid hydrolysis and oxidation of the processwater sample. Further, the TP in the hydrochar was calculated mathematically using Equation (1). Figure 6 depicts the influence of additives, additive concentration, and pH of the feedstock slurry on the recovering TP from the raw feedstock into hydrochar after HTC at various temperatures. In brief, the results show that even at a similar pH, higher leaching of TP into process-water is achieved by the utilization of inorganic acid (H2SO4) as an additive in comparison with organic acids. Following the HTC of sewage sludge, the highest TP leaching into the process-water (326 mg/L) was observed by using H2SO4 as an additive (pH 3.78), which represents about 93% of TP being recovered from raw feedstock into consequently produced hydrochar. Irrespective of process temperature, using deionised waste as an additive did not have any significant influence on the TP leaching. TP leaching into process-water using deionized water as an additive was observed to be 63-101 mg/L, which represents about 97.7-98.7% of TP being recovered from raw feedstock into consequently produced hydrochar. Following the HTC of sewage sludge, the highest TP leaching into the process-water (326 mg/L) was observed by using H 2 SO 4 as an additive (pH 3.78), which represents about 93% of TP being recovered from raw feedstock into consequently produced hydrochar. Irrespective of process temperature, using deionised waste as an additive did not have any significant influence on the TP leaching. TP leaching into process-water using deionized water as an additive was observed to be 63-101 mg/L, which represents about 97.7-98.7% of TP being recovered from raw feedstock into consequently produced hydrochar. The TP concentration in the process-water following the treatment at various temperatures and organic acid additives-formic and acetic acid-was observed to be in the range of 58.3-126 mg/L and 59.3-86.6 mg/L, respectively. Likewise, using NaOH as an additive also had comparatively similar TP leaching (66.2-154 mg/L) into the process-water after HTC at various temperatures. The obtained results suggested that organic acids and alkali had a very limited impact on extracting TP from raw feedstock into the process-water, which agrees with the similar results demonstrated by earlier studies [6,33].
Effect of Additives on Phosphorus Transformation
During HTC, the extraction of phosphorus into the process-water was generally lower with the utilization of organic acids as additives in comparison with an inorganic acid, regardless of temperature. An increase in H 2 SO 4 additive concentration from 0.1 M to 0.25 M increased the TP leaching into process-water by about 3-fold from 101-127 mg/L to 264-326 mg/L, respectively. However, increasing the concentration of organic acid additives from 0.1 M to 0.25 M obviously decreased the pH of the resulting feedstock slurry, but it did not greatly influence TP leaching into process-water.
The TP concentration in the process-water following the treatment at various temperatures and organic acid additives-formic and acetic acid-was observed to be in the range of 58.3-126 mg/L and 59.3-86.6 mg/L, respectively. Likewise, using NaOH as an additive also had comparatively similar TP leaching (66.2-154 mg/L) into the process-water after HTC at various temperatures. The obtained results suggested that organic acids and alkali had a very limited impact on extracting TP from raw feedstock into the process-water, which agrees with the similar results demonstrated by earlier studies [6,33]. The initial pH of feedstock slurry produced using CH 3 COOH and HCOOH additive was~6.3 and~5.6, and~6.2 and~5.3, respectively, at 0.1 and 0.25 M concentration. The TP in the process-water was observed to be in the range of 65.5-126 mg/L and 66-105 mg/L when produced using CH 3 COOH additive at 0.1 and 0.25 M concentration. Similarly, the TP in process-water was in a similar range with 59.3-66.9 mg/L and 62-86.3 mg/L when produced using HCOOH additive at 0.1 and 0.25 M concentration.
Factors influencing the TP immobilization during the HTC process include treatment conditions (temperature, reaction time, and additive properties), and the feedstock itself [28]. The formation of phosphorus salts (calcium phosphate, magnesium ammonium phosphate, and magnesium phosphate) are known to immobilize phosphorus into the hydrochar and this immobilization is influenced by the presence of higher inorganic content of the feedstock (such as the level of Ca, Mg, and others), pH, temperature and additives during HTC.
The element composition of the feedstock, particularly the presence of phosphate precipitating metals (viz., Fe, Al, and Ca) has a higher potential in deciding the phosphate retention in the hydrochar product [34]. During HTC of sewage sludge, the presence of a higher concentration of multivalent metal ions such as Al 3+ , Ca 2+ , Fe 3+ , and Mg 2+ are responsible for forming phosphate with low solubility and in turn enabling the phosphate to be retained in subsequently produced hydrochar. However, the previous studies indicated that the treatment using H 2 SO 4 as an additive tends to reduce the level of Ca, Fe, and Mg in hydrochar [33]. Analyzing the conductivity aids in understanding the metal ion concentration in the process-water, and the experimental analysis indicated higher conductivity in the process-water following the use of H 2 SO 4 additives in comparison with other organic acids as additives (see Figure 4). The presence of increasing metal ion concentration can explain the higher level of P immobilization into the process-water, particularly with H 2 SO 4 additives. Nevertheless, despite having relatively higher conductivity following the use of NaOH as an additive, TP concentration in the process-water was comparatively less. One explanation for increased conductivity following the use of alkali additive can be simultaneously induced ionic salts with NaOH additive utilization. The investigated results suggest that HTC of sewage sludge significantly immobilizes phosphorus into hydrochar in all but mineral acid additives. Results are consistent with another study carried out by Ekpo et al. (2016) demonstrating lower TP leaching into the process-water during HTC of swine manure in the presence of CH 3 COOH, HCOOH, and NaOH as additives.
Conclusions
The influence of organic acids, an inorganic acid and alkali as additives on phosphorus mobilization, energy value, yield, and dewaterability by hydrothermally carbonizing sewage sludge was analyzed. Phosphorus extraction into the process-water is pHdependent and particularly significant in the presence of inorganic acid (H 2 SO 4 ). The use of H 2 SO 4 and NaOH as additives has decreased the FC content of produced hydrochar, which negatively influences the heating value of the consequently produced hydrochar. A relatively higher reduction in the binding force of the sludge particles was observed during HTC using H 2 SO 4 in the reaction medium; this, in turn, improved the hydrochar dewatering performance in comparison with other additives. In conclusion, if the HTC of sewage sludge is designated to leach the phosphorus into the process water, the use of inorganic acid at a higher concentration is favorable; however, compromises will be made in the fuel characteristic of the hydrochar. Funding: This research received no external funding.
Data Availability Statement:
The authors confirm that the data supporting the findings of this study are available within the article. | 8,710.8 | 2021-03-31T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
Classical Electromagnetic Field Theory in the Presence of Magnetic Sources
Using two new well-defined four-dimensional potential vectors, we formulate the classical Maxwell field theory in a form which has manifest Lorentz covariance and SO(2) duality symmetry in the presence of magnetic sources. We set up a consistent Lagrangian for the theory. Then from the action principle we obtain both Maxwell's equation and the equation of motion of a dyon moving in the electromagnetic field.
Introduction
Recently there has been an increasing interest in the study of electromagnetic (EM) duality symmetry, because it plays a fundamental role in superstring and brane theory [1] [2]. From Maxwell's equations we know that general EM duality implies the existence of magnetic source ( magnetic charge (monopole) and currents). However, when considering the quantum dynamics of particles carrying both electric and magnetic charges (dyons) one faces the lack of a naturally defined classical field theory despite of the fact that a consistent quantum field theory does exist [3]. This issue was analyzed in recent contributions of many authors [5] [6] [7][8] [9]. In our recent paper [14], we gave an alternative formulation of electric-magnetic field theory in the presence of magnetic source. The advantages of our formulation are the following. First, we introduce two new potential vectors that have no singularities and we do not need to use the concept of Dirac String; secondly, from the present paper we can set up a consistent Lagrangian theory from which we can get all the information about classical electromagnetic field theory which returns to the usual Maxwell's field theory when only electric source is considered; thirdly it has manifest Lorentz covariant and SO(2) duality symmetry. Finally it seems that our formulation can be quantized directly, an issue that will be reported in a forthcoming article.
The aim of this paper is to present the details of the construction of a Lagrangian for the EM field theory in the formulation of [14]. From the action principle we expect to get the Maxwell's equation and also the equation of motion of a dyon moving in the electro-magnetic field. We also explain also why our formalism has manifestly SO(2) duality symmetry. The paper is organized as follows. In the next section we give a brief review of our formulation of classical electromagnetic field in the presence of magnetic source, where two well defined 4-dimensional potential vectors are introduced and the Maxwell's equations are written in a Lorentz covariant way. In the third section we show how manifest SO(2) duality symmetry arises in the present approach. In section 4, we give the Lagrangian form of our formulation. In section 5, from the action principle of the system of a dyon, we get both Maxwell's equation and the equation of motion of the dyon. Some conclusion remarks are given in the last section.
Two potential vectors formulation
Let us first give a brief review of the two 4-vector potentials formulation of the electromagnetic field in the presence of magnetic source [14]. Besides the usual definition of 4-dimensional potential which we called A 1 µ , i.e.
where φ 1 and A 1 are usual electric scalar potential and magnetic vector potential in electrodynamics, while the newly introduced potential φ 2 is the scalar potential associated to the magnetic field and A 2 is a vector potential associated to the electric field. It should be stressed that these two 4-potentials have no singularities around the magnetic charges (monopoles). Using these potentials, the electric field strength E and the magnetic induction B are then expressed as: In the magnetic source free case, φ 2 and A 2 are expected to be zero, so the above equation returns to the usual magnetic source free case. Now we introduce two field tensors as Then, choosing Lorentz gauge ∂ µ A I µ = 0, the Maxwell's equation in the case of existing both electric and magnetic sources, can be recast as and In this formulation the currents are manifestly conserved: In the next section, we will see that the index I is the SO(2) index, so our formulation above has manifestly SO(2) duality symmetry which is related to the general gauge transformation A I µ → A I µ + ∂ µ χ I . The fields E, B, the field tensors in (2.5) and Maxwell's equations (2.8) are all invariant under the transformations.
Let us stress also that in the expressions above neither F 1 µν nor F 2 µν have the same matrix form as the usual electromagnetic tensor. From (2.5) and the definition (2.3) (2.4), we find that, and So it is convenient to define a new field tensor as where F µν is exactly the Hodge star dual of F µν . As we shall see, using these new field tensors we can easily express the duality symmetry in a compact fashion. It is easy to see that F µν is the analog to the usual electromagnetic tensor defined in classical electrodynamics, because they have exactly the same matrix form in terms of the field strengths. Since the vector potentials in our formalism have no singularities one has ∂ µ * F I µν = 0, so Maxwell's equations can also be written as One can find that (refer to section 4), from F µν or F µν defined above, we can easily build a Lagrangian such that the Maxwell's equation (2.15) can be derived from the action principle.
SO(2) duality symmetry
The SO(2) duality symmetry of Electromagnetic field theory has been discussed in many papers [6] [8] [7] [10]. In our previous paper [14], we explained in detail why the general duality symmetry is the SO(2) symmetry, but there still exists something not very clear. For example, under the general dual transformation for F µν , F µν , i.e.
why should the same transformation hold simultaneously for J µ1 , J µ2 ? One can make the same question concerning the dual transformations of (E, B) and (q, g), (J e , J m ), etc. Why all these dual transformations must be the same? In our formulation, we can shed light on this issue. So in this section we would first like to give answers to these questions and then explain in detail why our formulation has manifestly SO(2) duality symmetry, i.e. we will see that the index I of potentials A I µ is the SO (2) index. Let us first solve the Maxwell's equation in our formalism. It is easy to check that the potential functions defined in the section above satisfy the differential equations [14]: In the static case, i.e. when the sources do not depend on time t, we can write where I = 1, 2 represent I = e, m respectively. Then exactly as it is done in the standard classical electrodynamics (magnetic source free case) [11], the solution of equation (3.2) is given by where r = |x − x ′ |, then from equations (2.3) and (2.4) we find that the field strengths have the following representation and Now we can give the answer to the question in the beginning of this section. Because E i = F 0i , B i = F 0i , then we know that if F µν , F µν have a transformation given by equation (3.1), which led to the field strengths E, B having the same transformation, and because the field strengths are related to the sources by equation (3.6) and (3.7), so the sources ρ e , ρ m and J e , J m must change in the same way. The same transformation must be satisfied by the 4-dimensional currents J µ1 , J µ2 . That is why once one chooses the transformation for F µν , F µν given by (3.1), then the corresponding field strengths and the the sources must obey the same transformation. If we impose that the Maxwell's equations (2.6) and (2.7) are invariant under these transformations of field strengths and the sources, we obtain a = d and b = −c. Moreover, if we also impose that the energy density 1 2 (E 2 + B 2 ) and the Poynting vector E × B are invariant under this transformation we get a 2 + b 2 = 1. It is then natural to introduce an angle α such that a = cos α and b = sin α. Hence the general duality transformation matrix coincides with the general rotation matrix in two dimensions. Thus it becomes apparent that the general electromagnetic duality symmetry is the SO(2) symmetry. Under the special case, α = π/2, the transformation (3.1) coincides with the replacement F µν → F µν , F µν → −F µν and the same replacements must be taken simultaneously, i.e. E → B, B → −E, ρ e → ρ m , ρ m → −ρ e and J e → J m , J m → −J e , etc. This corresponds to the usual special electro-magnetic duality symmetry. Now we would like to point out that the index I of the potentials A I µ is the SO(2) index. Under the general dual transformation, i.e. SO(2) transformation, from the discussion above we know that the sources ρ I and J I change as where R(α) is the SO(2) rotation matrix. Then from equations (3.4) and (3.5), we know that the potentials A I µ = (φ I , −A I ) should have the same SO(2) transformation, that is to say that the index I of potential A I µ is the SO(2) index. So our formulation in section (2) has manifestly SO(2) duality symmetry.
Lagrangian formulation of the field
In this section we will give a lagrangian for the electromagnetic field which gives the right Maxwell's equation in the presence of magnetic source. We will also see that from this lagrangian one can deduce the right Lorentz force formula (refer to the next section).
The Lagrangian of the field is given by where the * A I µ is defined through From a simple calculation we find Notice that * A I µ is related to the derivative of A I α , and also take into account the conservation conditions of the currents (2.10). Then the Euler-Lagrange equation of the Lagrangian defined in (4.1) gives the Maxwell's equation (2.15).
Action principle of the field
In this section, we would like to give a consistent action principle formulation for the classical electro-magnetic field theory in the presence of the magnetic sources. We can consider a system of dyons which interact with the electro-magnetic field. From the action of the system we expect to get both field equation (2.15) and also the equations of motion of the dyons.
For simplicity, we consider here one dyon with electric charge q and magnetic charge g, which moves in the electro-magnetic field (the extension to the many dyons system can be easily done). The action of this system consists of three parts, i.e, where, is the free action of the dyon, and is the term of interaction between the dyon and the electro-magnetic field around it. The J µI in the above equation are the currents for one particle dyon which have the form The last term of the action is nothing but the action of the electro-magnetic field. Let us now vary the potentials as We can check that Noticing that ∂ * A I µ ∂ǫ I ′ = 0 and ∂Sp ∂ǫ I = 0, we then have Because B 1 µ and B 2 µ are arbitrary, then from the equations (5.10) and (5.11) above we reobtain the Maxwell's equation (2.15).
Further if we change the coordinate of the dyon in the form we find Since y α is arbitrary, then from ∂S ∂ǫ 0 | ǫ 0 =0 = 0 we obtain This is just the equation of motion of a dyon moving in the electro-magnetic field. From this equation we can find that the Lorentz force the dyon acquired in the magnetic field can be represented in terms of field strengths as We would like to stress that the general Lorentz force has also the SO(2) electromagnetic duality symmetry.
Conclusion remarks
The main results of this paper are as follows. First we use the formulation of reference [14] to explain why the classical electro-magnetic field theory in the presence of a magnetic source has exactly the SO(2) duality symmetry. Then we find a proper Lagrangian formulation for the theory, and at last using the action principle of the system of dyons we derived both Maxwell's equation and the equation of motion for the dyon. From this equation of motion we got the general Lorentz force for a dyon moving in the electromagnetic field.
As a consistency check of our formulation we see that for g = 0 and J µ2 = 0 (no magnetic sources), from equation (3.2) we can set A 2 µ = 0, and so F µν = F 1 µν + * F 2 µν ⇒ F 1 µν . This means that our formulation contains standard electrodynamics as a particular case. For q = 0 and J µ1 = 0 (no electric sources), one has A 1 µ = 0, and then F µν = F 1 µν + * F 2 µν ⇒ * F 2 µν , and the lagrangian becomes: Thus in this case the formulation is completely parallel to the magnetic source free case. | 3,155.6 | 2001-09-18T00:00:00.000 | [
"Physics"
] |
In vitro study of the potential protection of sound enamel against demineralization
Background The objective of this study was to study the potential protection effect of different treatments against sound enamel demineralization around orthodontic brackets. Methods This is an in vitro randomized controlled study; artificial enamel demineralization of human premolars was created and compared with reference to control. The three materials used for enamel treatment were resin infiltrate (ICON), fluoridated varnish (Clinpro), and the self-etch primer system (Transbond Plus Self-Etch Primer). Fifty premolars divided equally into five groups were included in the study for quantitative surface micro-hardness assessment using a micro-hardness tester (MHT). Qualitative assessment of the enamel demineralization with a polarized light microscope (PLM) was also used. Enamel was demineralized by subjecting the specimens to cycling between artificial saliva solution and a demineralizing solution for 21 days. Results The mean Vickers hardness in kgf/mm2 was as follows: intact enamel = 352.5 ± 13.8, demineralized enamel = 301.6 ± 34.0, enamel treated with Clinpro = 333.6 ± 18.0, enamel treated with SEP = 370.7 ± 38.8, and enamel treated with ICON = 380.5 ± 53.8. Conclusions ICON, Clinpro, and Transbond Plus Self-Etch Primer (TPSEP) increased enamel resistance to demineralization. Attempting to protect the enamel around the orthodontic brackets could be done by applying a preventive material before bonding, if not compromising the bond strength, the orthodontic brackets.
Background
Demineralization and appearance of white spot lesions around brackets and bands have become a major concern during orthodontic treatment especially with the potential of these lesions to develop into caries when oral hygiene maintenance is compromised. During orthodontic treatment, an acidic environment develops at the periphery of the orthodontic brackets and bands due to accumulation of bacterial plaque. Enamel demineralization has been repeatedly reported [1][2][3], and efforts continue to develop ways to prevent development of white spot lesions that are not only unaesthetic but also remineralize poorly increasing the risk of developing carious lesions [4,5]. Increasing the resistance of the enamel in these areas would control development of white spot lesions.
Multiple preventive agents had been tested over years to evaluate their effectiveness in prevention and treatment of white spot lesions associated with orthodontic treatment [6][7][8][9][10][11]. It was assumed that the most efficient method of delivering preventive agents during orthodontic treatment would be one independent of patient compliance and specific to those areas most susceptible to demineralization [12]. These included adhesives and cements containing fluoride (F), casein phosphopeptideamorphous calcium phosphate (CPP-ACP), or amorphous calcium phosphate (ACP); fluoride applications; and sealants. Fluoride has been proven to be effective in fighting demineralization [13,14] that Rølla et al. [13] considered it the only factor that explains the caries reduction in recent years with a synergistic effect from the improved oral hygiene. Fluoride has been found to be effective in reducing the development of white spot lesions associated with fixed orthodontic treatment [15,16]. Therefore, incorporating preventive agents in orthodontic bonding composite was considered a potential method to reduce white spot lesions during orthodontic treatment even with the general agreement that it is not simple to predict what might occur when the bonding adhesive is used in the demanding environment of the oral cavity [17][18][19]. Clinpro, a fluoridated varnish, has been introduced to the market and supposed to be most beneficial in a neutral pH environment. Sealants were suggested as protective enamel agents that do not require patient cooperation, but sealants on areas adjacent to brackets are subject to physical challenges as tooth brushing and acid attacks that limit their effect [20,21]. Recently, resin infiltrants were found to decrease the dissolution of enamel and so limit the appearance of white spot lesions. In an in vitro study to compare a conventional adhesive, a caries infiltrant (ICON), and a combination of both in resisting demineralization, it was found that in both sound enamel and artificial caries lesions, the application of the caries infiltrant was effective in protecting the enamel against dissolution [22].
Systematic reviews of previous studies found a lack of reliable evidence on a protocol or a method to protect the enamel against development of white spot lesions during orthodontic treatment or to remineralize postorthodontic white spot lesions [23,24]. The objective of this study was to study the potential protection of different treatments against sound enamel demineralization around orthodontic brackets.
Methods
This is an in vitro randomized controlled study; artificial enamel demineralization of human premolars was created and compared with reference to control. Quantitative surface micro-hardness assessment of the specimens was done with a digital display Vickers micro-hardness tester (MHT) (Model HVS-50, Laizhou Huayin Testing Instrument Co., Ltd., China). The study also used a qualitative polarized light microscope (PLM) (Olympus dual stage polarized light microscope, Model BH-2, Dualmont Corporation, Minneapolis, MN).
The materials used in this study together with the study design are given in Table 1. The three materials used for enamel treatment were Clinpro (3M Unitek, Monrovia, CA, USA), a fluoridated varnish containing 5 % sodium fluoride; ICON (DMG, Hamburg, Germany), a resin infiltrant; and Transbond Plus Self-Etch Primer (TPSEP) (3M Unitek, Monrovia, CA, USA). All materials were used according to the manufacturers' instructions.
The software EpiCalc 2000 version 1.02 (Brixton Books, Brixton, UK) indicated 10 specimens for each group to be a reliable sample size at power 80 % and confidence interval 95 %. For micro-hardness testing, 50 specimens were prepared and then randomly assigned to 5 groups of 10. A similar sample was prepared for the PLM part of the study. Specimens' preparation included separation of the crown from the root, removing any calculus or debris, and polishing with non-fluoride prophylaxis. The crown was then sectioned mesiodistally with a diamond separating disc leaving only a thin layer of the underlying dentin. For PLM examination, 140-to 160-μm-thick sections were prepared from each tooth segment.
To create artificial carious lesions of the enamel, artificial saliva solution [19] was prepared consisting of 20 mmol/L NaHCO 3 , 3 mmol/L NaH 2 PO 4 , and 1 mmol/L CaCl 2 at neutral pH. Alongside, a demineralizing solution consisting of 2.2 mmol/L Ca 2+ , 2.2 mmol/L PO4 3− , and 50 mmol/L acetic acid at pH 4.4 was prepared. Deionized water was used in the preparation of the two solutions. Solutions were measured using a pH/mV meter (Accumet Portable, Fisher Scientific, Pittsburgh, PA) and calcium electrode (Thermo Electron Co., Beverly, MA). The specimens were placed in the prepared artificial saliva solution for 12 h before subjecting them to the demineralizing solution. The specimens were subjected to cycling between the artificial saliva and the demineralizing solutions for 21 days [25].
Surface micro-hardness of the specimens was determined using MHT with a Vickers diamond indenter and a ×20 objective lens. A load of 200 g was applied to the surface of the specimens for 10 s. Three indentations equally placed over a circle and each no closer than 0.5 mm to the adjacent indentations were made on the surface of each specimen. The diagonals' length of the indentations was measured by a built-in scaled microscope, and Vickers values were converted into microhardness values. Micro-hardness was obtained using the following equation: HV = 1.854 P/d 2 , where HV is Vickers hardness in kgf/mm 2 , P is the load in kgf, and d is the length of the diagonals in mm. For the qualitative part of the current study, the prepared specimens were analyzed with the PLM. Each specimen was wetted with deionized water and then was oriented longitudinally on a glass cover slide and mounted, and the stage rotated to allow maximum illumination. The area of demineralization was centered in the field of view and photographed under maximum illumination at ×10 magnification. Statistical analysis of the results of micro-hardness testing included one-way ANOVA followed by least significant difference (LSD) post-hoc analysis for comparisons between groups.
Results
Descriptive statistics of the enamel hardness of each group are presented in Table 2 and Fig. 1. The demineralized untreated enamel group followed by the Clinpro group showed the lowest hardness values of the enamel surface. The ICON group, the SEP group, and the intact undemineralized enamel group showed the highest hardness values of the enamel surface in descending order.
The one-way ANOVA (Table 2) indicated a significant difference in the enamel hardness between groups (P = 0.009). There was no significant difference between the intact enamel and the demineralized enamel groups (P = 0.09), the Clinpro group (P = 0.52), the SEP group (P = 0.19), and the ICON group (P = 0.10). However, there was a significant difference between the demineralized enamel group and both the SEP group (P = 0.004) and the ICON group (P = 0.001).
On the other hand, there was no significant difference between the two treated groups: Clinpro group and SEP group (P = 0.05), and Clinpro group and ICON group (P = 0.24). The SEP group and ICON group were not significantly different (P = 0.71).
A representative photomicrograph for each of the five groups from polarized light microscopy is shown in Fig. 2. The photomicrographs showed that all the three groups of ICON, Clinpro, and SEP were less affected by the demineralization, with smaller lesions and a bright color.
Discussion
Micro-hardness is a linear function of the local calcium content [26] that can be used not only as a comparative measure of hardness but also as a direct measure of mineral gain or loss as a consequence of demineralization and remineralization [27,28]. The method was proved as valid and useful for assessment of the changes in enamel surface demineralization [29,30].
In the current study, the mean enamel hardness of the control group was 352.5 ± 13.8 kgf/mm 2 , while that of the demineralized enamel was 301 ± 34.0 Kgf/mm 2 . Applying the fluoridated varnish Clinpro to the enamel surface before demineralization helped to preserve the enamel hardness to some extent as the enamel hardness in this group remained less than that in the control group; the mean enamel hardness in this group was 333.6 ± 18.0 kgf/mm 2 . Clinpro™ is a white varnish with tri-calcium phosphate (TCP) ingredient to deliver fluoride, phosphate, and calcium. According to the manufacturer, a protective barrier is created around this ingredient during manufacturing, and as the varnish flows on the teeth, it comes into contact with saliva, breaking down the protective barrier and so makes calcium, phosphate, and fluoride ions available to the teeth to decrease the demineralization. Although enamel treated with Clinpro™ was not as hard as the intact untreated enamel, the change might be clinically significant taking into consideration the long duration of orthodontic treatment that on average ranges from 2 to 3 years. Multiple applications of Clinpro over the period of orthodontic treatment may be recommended for further enhancing its protective effect.
The increased surface hardness of the enamel in the ICON group (380.5 ± 53.8 kgf/mm 2 ) could be interpreted in light of the mode of action of this material. The low-viscosity light-cured resin material infiltrates the etched enamel surface creating a barrier on the enamel surface, and it is this superficial layer that increases the enamel surface hardness and subsequently increases the resistance to surface demineralization and the development of white spot lesions. Earlier studies as the study of Yetkiner et al. [31] found that the use of low-viscosity caries infiltrant ICON increased sound enamel resistance to demineralization. The mean enamel hardness in group 5 in which the specimens were treated with TPSEP and then subjected to the same protocol of demineralization used in the other groups was 370.7 ± 38.8 kgf/mm 2 . This group, therefore, presented the second highest mean enamel hardness in the current study. TPSEP is a F-releasing self-etch primer, and it acts as other self-etch primers through three mechanisms to stop the etching process and complete the priming: (1) the acid groups attached to the etching monomers are neutralized by forming a complex with the calcium from the hydroxyapatite; (2) the viscosity increases with the air drying, slowing the transport of acid groups to the enamel interface; and (3) the primer polymerizes and the transport of acid groups to the interface stops [32]. The results of the current study suggest that it is not only the effect of the F that increased the enamel hardness but the polymerized surface layer also contributed to this effect. Both group 3 (ICON) and group 5 (SEP) showed increased enamel hardness compared to group 2 (demineralized enamel) and even to group 1 (untreated enamel); this could be contributed by the polymerized surface layer. On the other hand, the polarized light microscope, with its unique ability to deliver information about the submicroscopic structure of the material under examination utilizing polarized light to form highly magnified images, could qualitatively show the areas of mineral loss and mineral gain represented by areas with different porosities and birefringence [33,34]. The results of the PLM photomicrographs, as shown in Fig. 2, supported the micro-hardness results; the three groups of ICON, Clinpro, and SEP showed better resistance to enamel demineralization. The photomicrographs showed smaller lesions in the three groups where the enamel was treated with ICON, Clinpro, and SEP compared to the demineralized untreated enamel.
Attempting to protect the enamel around orthodontic brackets could be done through different techniques that each could be effective. This could be by applying a preventive material before bonding the orthodontic brackets or around the brackets after bonding. Regarding the three materials used in the current study, the use of self-etch primers for orthodontic bracket bonding is well documented and has proven not to compromise bond strength [35][36][37]. A previous study which tested the effect of using ICON and Clinpro before bonding orthodontic brackets with self-etch primer and conventional adhesive systems on the shear bond strength found no significant effect [38].
However, although the observed increase in the enamel hardness would logically suggest more resistance to demineralization, it is not possible to simply expect a solution to the problem of developing white spot lesions during orthodontic treatment with the use of these materials. A systematic review of in vivo studies on the caries-inhibiting effect of preventive measures during orthodontic treatment with fixed appliances considered the effect significant if only it was over 50 % [24].
Conclusions
ICON, Clinpro, and TPSEP increased enamel resistance to demineralization. Attempting to protect the enamel around the orthodontic brackets could be done by applying a preventive material before bonding, if not compromising the bond strength, the orthodontic brackets. | 3,365.6 | 2015-05-22T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Note on quantum entanglement and quantum geometry
In this note we present preliminary study on the relation between the quantum entanglement of boundary states and the quantum geometry in the bulk in the framework of spin networks. We conjecture that the emergence of space with non-zero volume reflects the non-perfectness of the $SU(2)$-invariant tensors. Specifically, we consider four-valent vertex with identical spins in spin networks. It turns out that when $j = 1/2$ and $j = 1$, the maximally entangled $SU(2)$-invariant tensors on the boundary correspond to the eigenstates of the volume square operator in the bulk, which indicates that the quantum geometry of tetrahedron has a definite orientation.
I. INTRODUCTION
Recently more and more evidences have been accumulated to support the conjecture that the geometric connection of spacetime is just the emergent phenomenon of the quantum entanglement style of matter, which has been becoming an exciting arena for the interaction of quantum information, quantum gravity and condensed matter physics [1][2][3][4][5][6]. In particular, in AdS/CFT approach, the relation between the minimal surface in the bulk and the entanglement entropy for boundary states has been quantitatively described by the Ryu-Takayanagi formula, which is recently understood from the quantum error correction (QEC) scenario as well [7]. In this approach the perfect tensor network plays a key role in mimicking the function of QEC for hyperbolic space [8]. Here the notion of perfectness means that the entanglement entropy could saturate the maximal value which is given by the local degrees of freedom on the boundary, for any bipartition of particles in which the smaller part contains particles no more than half of the total particles. Among all the kinds of tensor networks, perfect tensor network exhibits the strongest ability of QEC, in the sense that information can always be recovered by pushing it from the bulk towards the boundary in all directions. Unfortunately, the tensor network built with perfect tensors always exhibits a flat entanglement spectrum, which is not consistent with the holographic nature of AdS space, which is characterized by the non-flat entanglement spectrum. Recent work in [9,10] indicates that in order to have a non-flat entanglement spectrum one has to sacrifice the ability of tensors for QEC, which implies that the tensors in network should not be perfect if a non-flat entanglement spectrum is expected to achieve.
Based on the above progress, it is quite intriguing to investigate the relation between quantum entanglement of boundary states and the geometric structure of the bulk in a non-perturbative way, since the holographic nature of gravity has widely been accepted as the fundamental principle for the theory of quantum gravity. Preliminary explorations on the entanglement entropy of boundary states in the framework of spin networks have appeared in literature [11][12][13][14][15][16][17][18][19]. In this framework, gauge invariant quantum states play a key role in describing the quantum geometry of polyhedrons. In particular, intertwiners as SU (2)-invariant tensors are basic ingredients for the construction of spin network states, which is proposed to describe the quantum geometry of space time as well as the quantum states of gravitational field in four dimensions. To investigate the QEC in AdS space which is supposed to be described by quantum geometry at the microscopic level, it is quite interesting to discuss the perfectness of the boundary states in the framework of spin networks. Recently, it is shown in [13] that bivalent and trivalent tensors can be both SU (2)-invariant and perfect which are uniquely given by the singlet state or 3j symbols. However, for n-valent tensors when n is four or more than four, it is not possible to construct a SU (2)-invariant tensor that is perfect at the same time (unless the spin j is infinitely large, which is called asymptotically perfect tensors in [13,14]). This is a very interesting result because it is well known in spin network literature that the volume operator has non-zero eigenvalues only when acting on vertices with four or more edges [20][21][22]. That is to say, when a SU (2)-invariant tensor is perfect, the corresponding volume of space must be vanishing. Based on this fact, we conjecture that the emergence of the space with non-zero volume is the reflection of the non-perfectness of SU (2)-invariant tensors.
In this note we intend to find more features of SU (2)-invariant tensors and then disclose the relation between the quantum entanglement of boundary states and the quantum geometry in the bulk. In particular, we propose a quantity to measure the non-perfectness of a single SU (2)-invariant tensor. For a boundary state, we define the sum of the entanglement entropy over all the possible bipartition as S tot . Then the non-perfectness of any tensor can be evaluated by the difference between S tot and that of a perfect tensor S p , which is uniquely determined by the number of degrees of freedom on the boundary. We may denote it as ∆S. The corresponding state with the maximal value of S tot is called as the maximally entangled state. If δ = ∆S/S p is tiny, then this maximally entangled state may be called as nearly perfect tensor 1 . In this note we intend to find these maximally entangled states on the boundary and consider their relations with the quantum states in the bulk for the simple spin network which only contains a single vertex with four dangling edges, describing a quantum tetrahedron geometrically. Correspondingly, the boundary state is a 4-valent tensor state.
Our main result is that when j = 1/2 and j = 1, the maximally entangled SU (2)invariant tensors on the boundary correspond to the eigenstates of the square of the volume operator in the bulk, which indicates that the geometry of quantum tetrahedron has a definite orientation. This paper is organized as follows. In next section we present the setup for four-valent SU (2)-invariant tensors and give the boundary states with the maximal entanglement entropy for j = 1/2 and j = 1, while the detailed derivation of these states 1 A similar notion for random invariant tensors rather than a single tensor is introduced in [14].
is presented in Appendix. Then the relation between these states and the quantum states of the tetrahedron in the bulk is given in section III. Our numerical results on the relations between the entanglement entropy and the expectation of the volume for general states is given in section IV. Section V is the conclusions and outlooks.
TROPY
The setup is given as follows. We consider a 4-valent tensor associated with a single vertex, which can be diagrammatically sketched as To be perfect or almost perfect for any bipartition, we only consider the case that all the external legs are identically labelled by spin j, namely j i = j(i = 1, 2, 3, 4), then a 4-valent tensor can be written as where m i = −j, −j + 1, ..., j − 1, j. To be SU (2)-invariant, we know |ψ 4 must be a singlet satisfying i j i |ψ 4 = 0. As a result, we find the tensor states must have the following form where C Next we consider the entanglement entropy with bipartition. Since the entanglement entropy for the (1,3) bipartition is trivial which is identically ln(2j + 1), we only need to consider the bipartition with equal legs in each part. If two external legs of 4-valent tensor are combined and labelled by a single index, then tensors can be treated as matrices. For instance, if j 1 and j 2 are paired, then the reduced density matrix is given by Since the tensor is a pure state, one has ρ 12 = ρ 34 . For four-valent tensor, there are three ways to pair the external legs. Thus, the corresponding entanglement entropy for bipartition can be calculated as In [13] it is proved that 4-valent SU (2)-invariant tensors can not be perfect, in the sense that it is not possible to construct a state |ψ 4 such that the entanglement entropy saturates the bound S 12 = S 13 = S 14 = 2 ln(2j + 1). In another word, if the entanglement entropy S 12 = S 13 = 2 ln(2j + 1), then the entanglement entropy S 14 must be less than 2 ln(2j + 1).
Based on this fact, then it is quite natural to ask what kind of SU (2)-invariant tensors could be nearly perfect, in the sense that it is maximally entangled among all the SU (2)invariant tensors. Next we intend to provide an answer to this issue by figuring out the SU (2)-invariant tensor with the maximal entanglement entropy for some specific spin j. For 4-valent tensors, Such a nearly perfect tensor is defined as the state with the maximal value for the sum of the entanglement entropy, namely S tot = S 12 + S 13 + S 14 .
Firstly, we consider the simplest case with j = 1/2. Our goal is to find α(0) and α (1) such that S tot takes the maximal value. In appendix, we analytically show that when α(0) = ±iα(1), the entanglement entropy takes the maximal value, which is given by For a perfect tensor, this value is expected to be S p = 3 ln 4. Thus we find the "deficit" of the entanglement entropy is ∆S = S p − S m tot = 3 ln(2/ We notice that the entanglement spectrum is not flat for these maximally entangled states indeed, unlike the perfect tensors. Ignoring the global phase factor, the corresponding states Next we consider the case of J = 1. In parallel, we find two states having the maximal entanglement entropy. The corresponding intertwiner J α(J) |J is given by The corresponding entanglement entropy is Thus the deficit of the entanglement entropy is ∆S = S p − S m tot = 6 ln 3 − ( 5 3 ln 2 + 9 2 ln 3) 0.49, and δ 0.075.
III. THE EIGENSTATES OF THE VOLUME OPERATOR ON SPIN NETWORKS
In this section we focus on the geometric interpretation of invariant tensors with the maximal entanglement entropy. A classical polyhedron in R 3 can be parameterized by the oriented face area vectors subject to the closure condition. Quantum mechanically, loop quantum gravity provides a well-known strategy to quantize the polyhedrons based on spin network states, which are SU (2)-invariant. The quantum volume operator can be defined by quantizing the classical expression of the volume for a three-dimensional region R, which is expressed in terms of Ashtekar variables as where a, b, c are spatial indices, while i, j, k are internal indices. In literature there exists two different strategies to construct the volume operator and discuss its action on spin networks.
Traditionally, one is called the internal algorithm proposed by Rovelli and Smolin[20,21,23], and the other one is the external algorithm proposed by Ashtekar and Lewandowski [24,25]. In this paper only 4-valent vertex is taken into account and these two versions are equivalent [26][27][28].
Before discussing the volume spectrum of 4-valent vertex, we firstly elaborate our conjecture, arguing that the emergence of the space with non-zero volume is the reflection of the non-perfectness of SU (2)-invariant tensors. It is well known that when the volume operator acts on any tri-valent vertex in spin networks, the eigenvalue has uniformly to be zero. That is to say, if a network only contains tri-valent vertices, the total volume of the space corresponding to this state must be zero as well. In this situation, perfect SU (2)-invariant tensors can in principle be constructed based on this spin network. Specifically, as investigated in [13], for a tri-valent vertex associated with three edges labelled by spins j 1 , j 2 , j 3 , then the SU (2)-invariant perfect tensor state is uniquely given by Wigner's 3j symbols Then the total SU (2)-invariant perfect tensor associated with a network can be constructed by considering the products of these individual perfect tensors associated with each vertex. However, if one intends to construct a space with non-zero volume with spin networks, four-valent or more valent vertex must be included. Following the results in [13], then the SU (2)-invariant tensor associated with this vertex has to be non-perfect. If some components or even a single vertex of the network become non-perfect, we know that the total SU (2)-invariant tensor based on the whole network can not be perfect, either. Therefore, the emergence of non-zero volume must accompany the non-perfectness of SU (2)-invariant tensors in this scenario. Obviously, all the cases considered in the remainder of this paper are subject to the conjecture that we have proposed, because for all states with non-zero volume the corresponding tensors are not perfect, indeed. More importantly, next we will push this qualitative conjecture forward by quantitatively demonstrating the relation between the value of the volume and the value of maximal entanglement entropy for 4-valent vertex.
For a 4-valent vertex, the action of the volume operator can be described asV = l 6 p |Ŵ |, where l p = √ 8πG is the planck length and for convenience we set it as unit in the remaining part of this note. Here we also remark that there is an overall coefficient which is undetermined in the volume operator, but one can choose appropriate coefficient such that the action of the volume operator has a semiclassical limit correctly, as discussed in [26,27].
Nevertheless, this overall coefficient does not affect our analysis in present paper on the relation between entanglement and geometry. The operatorŴ iŝ k +Ĵ (1) iĴ The eigenvalues ofŴ are ± √ 3/8, corresponding to the eigenstates |± = 1 2 , but the orientation of the tetrahedron is usually mixed. Only the eigenstates of the operatorŴ have a definite orientation. Therefore, in this simplest case with j = 1/2, we find that the boundary states with the maximal entanglement entropy correspond to the quantum states of the tetrahedron with definite orientation.
Moreover, it is interesting to understand the emergence of non-zero eigenvalue of the volume from the viewpoint of quantum information. In [13], it is shown that the tri-valent SU (2)-invariant tensors can be perfect, implying that the quantum information could be recovered by QEC with full fidelity. On the other hand, it is known that the action of the volume operator on any tri-valent vertex gives rise to the zero eigenvalue of the volume.
Once the volume of the polyhedron is non-zero, like the operator acting on four-valent vertex, then the SU (2)-invariant tensor can not be perfect any more, implying that the quantum information must sacrifice or lose its fidelity when teleporting through the vertex for some certain partitions. Or conversely, one can say that in order to guarantee the polyhedron, as the basic bricks of space, has non-zero volume, then as the channel of QEC, the SU (2)invariant tensor can not be perfect. In a word, the space with non-zero volume emerges as the deficit of the entanglement entropy, or the loss of the fidelity of QEC. This is the key observation in this note.
Next we consider the case of j = 1, then the intertwiner space is spanned with |J = 0 , |J = 1 , and |J = 2 . The matrix J |Ŵ |J reads aŝ It turns out thatŴ has two non-zero eigenvalues ± √ 3/2, corresponding to eigenstates as well as an eigenvalue 0, corresponding to the eigenstate Remarkably, we find that the boundary states with the maximal entanglement entropy obtained in previous section correspond to the quantum state of tetrahedron with definite orientation in the bulk. It is worthwhile to point out that for general intertwiner parameters, the states are not the eigenstates of the volume operator any more, but the expectation value can be evaluated. In next section we intend to investigate the relation between the entanglement entropy of boundary states and the volume or orientation of the tetrahedron by numerical analysis.
IV. NUMERICAL RESULTS
In this section we present the relation between the entanglement entropy of boundary states and the volume and orientation of tetrahedron in the bulk by randomly selecting the parameters in intertwiner space.
For j = 1/2, a general state |ψ 4 in intertwiner space can be expanded as where α, β are two complex numbers.
We know that the volume operatorV = |Ŵ |, andŴ |+ = In Fig.2, we show the relation between S tot and V by randomly selecting complex numbers α and β. From this figure, we justify that S tot does have the maximal value 3 ln(2 √ 3) when Ŵ = ± √ 3/8, which correspond to (α = 0, β = 1) and (α = 1, β = 0), respectively. We also notice that S tot takes the minimal value at the position with Ŵ = 0, which implies that the geometry is the coherence of two oriented tetrahedron states with equal probability, namely |α| = |β|. In is also interesting to notice that among all random states, a large proportion of states distributes in the vicinity of Ŵ = 0 with lower entanglement entropy.
With the increase of | Ŵ |, the proportion of states becomes small but the entanglement entropy becomes larger. The maximal value of entanglement entropy measures the ability of the vertex as the channel of QEC, which is not perfect and consistent with our conjecture.
Furthermore, as the channel of QEC, it sounds reasonable that the quantum tetrahedron with a definite orientation has the maximal entanglement entropy, because its deficit of entanglement entropy is the smallest such that it possesses the best fidelity for quantum information teleportation.
Similarly, we consider this relation for j = 1. The general state |ψ 4 reads as where α ± , α 0 are complex numbers. Then we have In Fig.3, we show the relation between S tot and Ŵ , V with random numbers in intertwiner space. Again, our statement that S tot takes the maximal value for the eigenstates of Ŵ is justified. In this case the relation between S tot and V becomes complicated. But it is true that the maximal value of S tot appears when the expectation value of the volume takes the largest value. In addition, when the expectation value of the volume is zero, S tot takes the minimum. In Fig.3, all the corresponding SU (2)-invariant tensors are not prefect, even for the states with zero volume. This result further indicates that the quantum information has to lose its fidelity when teleporting through a four-valent SU (2)-invariant tensor.
In the end of this section we point out that the minimal deficit of the entanglement entropy ∆S 0.43 for j = 1/2, with the eigenvalue of V = ( √ 3/8) 1 2 , is smaller than ∆S 0.49 for j = 1, with the eigenvalue of V = 2( √ 3/8) 1 2 . This indicates that to build a quantum space with larger volume, the minimal deficit of entanglement entropy has to become larger as well. Intuitively, it implies that more information has to be stored in the space to form a space with larger volume. This observation could extend our previous conjecture to a more quantitative version: The space with non-zero volume must be built with non-perfect tensors. Furthermore, the larger the volume is, the more deficit of entanglement entropy for non-perfect tensors one has to pay.
V. CONCLUSIONS AND OUTLOOKS
In this note we have investigated the relations between the entanglement entropy of the boundary states and the geometric property of the quantum tetrahedron in the bulk for a single 4-valent vertex in the framework of spin networks. Qualitatively, we have conjectured that the emergence of the space with non-zero volume is the reflection of the non-perfectness of SU (2)-invariant tensors. Based on this conjecture, we might ascribe the increase or decrease of the space volume to the change of the entanglement among particles on the boundary. Inspired by this conjecture, it is quite interesting to explore the dynamics of space from the side of the evolution of entanglement at the Planck scale, for instance, at the beginning of the universe or the cosmological inflation scenario where the quantum effect of geometry becomes severe. Quantitatively, we have found the relation between the maximally entangled states and the eigenstates of the volume square operator. Interestingly enough, we have found that for j = 1/2 and j = 1, the boundary SU (2)-invariant states with the maximal entanglement entropy correspond to the eigenstates of the operatorŴ , which implies that the quantum tetrahedron has a definite orientation. It is intriguing to ask whether this correspondence also holds for other spins j. Our preliminary attempt indicates that for j ≥ 3/2 there does not exist such simple relations between the states with the maximal entanglement entropy and the eigenstates of the operatorŴ . Their complicated relations deserve for further investigation.
Although j = 1/2 and j = 1 are just the specific cases for a four-valent states, this simple but elegant correspondence has significant implications for understanding the deep relations between the entanglement and the microscopic structure of the spacetime, particularly in a non-perturbative manner. As a microscopic scenario of quantum spacetime, representations of j = 1/2 and j = 1 are just like the ground state and the first excited state of the system, which should be dominantly occupied among all the possible distributions. This conjecture plays a key role in the original work on the microscopic interpretation on the entropy of black holes in terms of spin network states [29].
The most desirable work next is to investigate the relations between quantum entanglement and quantum geometry in the framework of spin networks with more general setup.
We expect to compute the entanglement entropy of a general boundary state, and explore its dependence on the orientation of the quantum polyhedrons in the bulk geometry. In this case, the main difficulty one faces is the involvement of holonomy along edges. Since the volume operator non-trivially acts only on intertwiner space at vertices, it is quite straightforward to discuss the geometric property of polyhedrons, but in general the entanglement of boundary states depends on the holonomy along edges, as previously studied in [19]. Our investigation on this topic is under progress.
A spin network state with boundary can be represented by |Γ, j e , j l , I v , n l , with e for inner edges, l for dangling edges where the magnetic quantum number n l is specified. The relationship between spin network representation and the connection representation is Once Γ and j e are specified, a spin network state with boundary can also be written as Thus {h e }, {h l }| |I v can be mapped to the right vector, |ψ({h e }, {h l }, {I v }) , which is given as {je,j l } {me,ne,m l } l δ m l n l , (27) with I being the identity matrix. Usually, due to the presence of the boundary, the gauge invariance is broken. In this paper, we consider a simple network which only contains a single vertex associated with four dangling edges, so there is no h e and only one I v involved.
The state is = n 1 n 2 n 3 n 4 ψ n 1 n 2 n 3 n 4 (I, I, I, I, where j i = j l i , n i = n l i (i = 1, ..., 4) and ψ n 1 n 2 n 3 n 4 (I, I, I, I, I v ) is a singlet and SU (2) invariant.
Now we consider the entanglement entropy for such a 4-valent state with spin j. Without loss of generality, we consider the reduced density matrix by tracing the first and second index, leading to (ρ 34 ) n 3 n 4 n 3 n 4 (h 1 , h 2 , h 3 , h 4 , I v ) = n 1 n 2 n 1 n 2 n 3 n 4 ψ n 1 n 2 n 3 n 4 (I, I, I, I, I v )R j 1 n 1 n 1 (h 1 )R j 2 n 2 n 2 (h 2 )R j 3 n 3 n 3 (h 3 )R j 4 n 4 n 4 (h 4 ) × n 1 n 2 n 3 n 4 ψ * n 1 n 2 n 3 n 4 (I, I, I, I, We remark that for a single vertex, the entanglement entropy does not depend on Due to the unitarity of CG coefficients, one can show that Then we derive the entanglement entropy as Similarly, |ψ 4 can also be expanded based on other basis in intertwiner space as which is very convenient for us to calculate the entanglement entropy for other bipartitions.
Specifically, we have Next we will determine the values of parameters α(J), β(J), γ(J) such that the sum of the entanglement entropy will take the maximal value among all the possible states. First of all, since α(J), β(J), γ(J) are parameters in different representations of the same state, they must be related to one another, we intend to derive their relations at first. One can easily find that ψ 4 |ψ 4 = 2j J=0 |α(J)| 2 = 2j J=0 |β(J)| 2 = 2j J=0 |γ(J)| 2 =: U , obviously U > 0. Therefore, the sum of entanglement entropy reads as From this equality, one can derive the following equation where N (j, J , J) satisfies Similarly, one can determine the intertwiner parameters for j = 1 and analytically derive SU (2)-invariant states with the maximal entanglement entropy. | 6,020 | 2019-07-02T00:00:00.000 | [
"Physics"
] |
GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information
While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.
Introduction
Large language models (LLMs) such as PaLM (Chowdhery et al., 2022) and GPT-4 (OpenAI, 2023) have shown great success on a wide range of general-domain Natural Language Processing (NLP) tasks.They also achieve state-of-the-art (SOTA) performance on domain-specific tasks like biomedical question answering (Singhal et al., 2022;Liévin et al., 2022;Nori et al., 2023).However, since there is no intrinsic mechanism for autoregressive LLMs to "consult" with any source of truth, they can generate plausible-sounding but incorrect content (Ji et al., 2023).To tackle the hallucination issue, various studies have been proposed to augment LLMs (Mialon et al., 2023) by either conditioning them on retrieved relevant content (Guu et al., 2020;Lewis et al., 2020;Borgeaud et al., 2022) or allowing them to use other external tools such as program APIs (Gao et al., 2022;Parisi et al., 2022;Schick et al., 2023;Qin et al., 2023).
In this work, we propose to teach LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI).NCBI provides API access to its entire biomedical databases and tools, including Entrez Programming Utilities (E-utils) and Basic Local Alignment Search Tool (BLAST) URL API (Altschul et al., 1990;Schuler et al., 1996;Sayers et al., 2019).Enabling LLMs to use NCBI Web APIs can provide easier and more precise access to biomedical information, especially for users who are inexperienced with the database systems.More importantly, Web APIs can relieve users from locally implementing functionalities, maintaining large databases, and heavy computation burdens because the only requirement for using Web APIs is an internet connection.
We introduce GeneGPT, a novel method that prompts Codex (Chen et al., 2021) to use NCBI Web APIs by in-context learning (Brown et al., 2020).GeneGPT consists of two main modules: (a) a specifically designed prompt that consists of documentations and demonstrations of API usage, and (b) an inference algorithm that integrates API calls in the Codex decoding process.We evaluate GeneGPT on GeneTuring (Hou and Ji, 2023), a question answering (QA) benchmark for genomics, and compare GeneGPT to a variety of other LLMs such as the new Bing1 , ChatGPT2 , and BioGPT (Luo et al., 2022).GeneGPT achieves the best performance on eight GeneTuring tasks with an
✅
In summary, our contributions are three-fold: 1. We introduce GeneGPT, a novel method that uses NCBI Web APIs to answer biomedical questions.To the best of our knowledge, this is the first study on augmenting LLMs with domain-specific Web API tools.
3. We conduct experiments to further characterize GeneGPT, including ablation, probing, and error analyses.We also contribute a novel GeneHop dataset, and use it to show that GeneGPT can perform chain-of-thought API calls to answer multi-hop genomics questions.
GeneGPT
In this section, we first introduce the general functions and syntax of NCBI Web APIs ( §2.1).We then describe two key components of GeneGPT: its prompt design for in-context learning ( §2.2) and the inference algorithm ( §2.3).
NCBI Web APIs
We utilize NCBI Web APIs of E-utils3 that provide access to biomedical databases and the BLAST tool4 for DNA sequence alignment.Web API calls are implemented by the urllib library in Python.
E-utils.It is the API for accessing the Entrez portal (Schuler et al., 1996), which is a system that covers 38 NCBI databases of biomedical data such as genes and proteins (Sayers et al., 2019).guage descriptions of the API functionality, general syntax, and argument choices.We include one for the E-utils API (Dc.1) and one for the BLAST tool (Dc.2).
3. Demonstrations (Dm.) are concrete examples of using NCBI Web APIs to solve questions.Based on questions in the GeneTuring tasks, we manually write four demonstrations that cover four functions (esearch, efetch, esummary, blastn) and four databases (gene, snp, omim, nt) of E-utils and BLAST.The API URLs and the call results are marked up by "[ ]", with a special "->" symbol inserted in between that serves as an indicator for API calls.
Test question:
The specific test question is then appended to the end of the prompt.
While the initial GeneGPT uses all documentations and demonstrations (denoted as GeneGPTfull in Table 2), we find through analyses in §4.1 that GeneGPT can work well with only two demonstrations (denoted as GeneGPT-slim) on all tasks.
Inference algorithm
The GeneGPT inference algorithm is briefly shown in Algorithm 1. Specifically, we first append the given question to the prompt (described in §2.2) and feed the concatenated text to Codex (code-davinci-002, Chen et al. ( 2021)) with a temperature of 0. We choose to use Codex for two reasons: (1) it is pre-trained with code data and shows better code understanding abilities, which Algorithm 1 GeneGPT inference algorithm is crucial in generating the URLs and interpreting the raw API results; (2) its API has the longest (8k tokens) context length among all available models so that we can fit the demonstrations in.
We discontinue the text generation process when the special "->" symbol is detected, which is the indication for an API call request.Then we extract the last URL and call the NCBI Web API with it.The raw execution results will be appended to the generated text, and it will be fed to Codex to continue the generation.When "\n\n", an answer indicator used in the demonstrations, is generated, we will stop the inference and extract the answer after the generated "Answer: ".
GeneTuring
The GeneTuring benchmark (Hou and Ji, 2023) contains 12 tasks, and each task has 50 questionanswer pairs.We use 9 GeneTuring tasks that are related to NCBI resources to evaluate the proposed GeneGPT model, and the QA samples are shown in Appendix B. The chosen tasks are classified into four modules and briefly described in this section.
Nomenclature: This is about gene names.We use the gene alias task and the gene name conversion task, where the objective is to find the official gene symbols for their non-official synonyms.
Genomics location:
The tasks are about the locations of genes, single-nucleotide polymorphism (SNP), and their relations.We include the gene location, SNP location, and gene SNP association tasks.The first two tasks ask for the chromosome locations (e.g., "chr2") of a gene or an SNP, and the last one asks for related genes for a given SNP.
Functional analysis: It asks for gene functions.We use the gene disease association task where the goal is to return related genes for a given disease, and the protein-coding genes task which asks whether a gene is a protein-coding gene or not.
Sequence alignment: The tasks query specific DNA sequences.We use the DNA sequence alignment to human genome task and the DNA sequence alignment to multiple species task.The former maps an DNA sequence to a specific human chromosome, while the latter maps an DNA sequence to a specific species (e.g."zebrafish").
Compared methods
We evaluate two settings of GeneGPT, a full setting (GeneGPT-full) where all prompt components are used, as well as a slim setting (GeneGPT-slim) inspired by our ablation and probing analyses ( §4.1) where only Dm.1 and Dm.4 are used.
Evaluation
For the performance of the compared methods, we directly use the results reported in the original benchmark that are manually evaluated.
To evaluate our proposed GeneGPT method, we follow the general criteria but perform automatic evaluations.Specifically, we only consider exact matches between model predictions and the ground truth as correct predictions for all nomenclature and genomics location tasks.For the gene disease association task, we measure the recall as in the original dataset but based on exact individual gene matches.For the protein-coding genes task and the DNA sequence alignment to multiple species task, we also consider exact matches as correct after applying a simple vocabulary mapping that converts model-predicted "yes"/"no" to "TRUE"/"NA" and Latin species names to their informal names (e.g., "Saccharomyces cerevisiae" to "yeast"), respectively.For the DNA sequence alignment to human genome task, we give correct chromosome mapping but incorrect position mapping a score of 0.5 (e.g., chr8:7081648-7081782 v.s.chr8:1207812-1207946), since the original task does not specify a reference genome.Overall, our evaluation of GeneGPT is more strict than the original evaluation of other LLMs in Hou and Ji (2023), which performs manual evaluation and might consider non-exact matches as correct.
Main results
Table 2 shows the performance of GeneGPT on the GeneTuring tasks in comparison with other LLMs.For GeneGPT, tasks with "*" in Table 2 are one-shot where one instance is used as API demonstration, and the other tasks are zero-shot.
For the compared LLMs, all tasks are zero-shot.
Nomenclature: GeneGPT achieves state-of-theart (SOTA) performance on both the one-shot gene alias task with an accuracy of 0.84 and the zero-shot gene name conversion task with an accuracy of 1.00.On average, GeneGPT outperforms New Bing by a large margin (0.92 v.s.0.76).All other GPT models have accuracy scores of less than 0.10 on the nomenclature tasks.
Genomic location: GeneGPT also achieves SOTA performance on all genomic location tasks, including the gene SNP association task (1.00) gene location task (0.66) and the SNP location task (1.00).While the New Bing is comparable to GeneGPT on gene location (0.61 v.s.0.66), its performance on the two SNP-related tasks is close to 0. Again, most other LLMs score less than 0.10.Notably, while all genomics location tasks are zeroshot for GeneGPT-slim, it performs comparably to GeneGPT-full which uses one gene SNP association demonstration.This indicates that API demonstrations have strong cross-task generalizability.
Functional analysis: The new Bing performs better functional analysis tasks than the proposed GeneGPT (average score: 0.91 v.s.0.84), which is probably because many web pages related to gene functions can be retrieved by the Bing search engine.We also note that other LLMs, especially GPT-3 and ChatGPT, perform moderately well and much better than they perform on other tasks.This might also be due to the fact that many gene-function-related texts are included in their pre-training corpora.Sequence alignment: GeneGPT performs much better with an average score of 0.66 than all other models including the new Bing (0.00), which essentially fails on the sequence alignment tasks.This is not very surprising since sequence alignment is easy with the BLAST tool, but almost impossible for an auto-regressive LLM even with retrieval augmentation as the input sequences are too specific to be indexed by a search engine.
Although evaluated under a more strict setting ( §3.3), GeneGPT achieves a macro-average performance of 0.83 which is much higher than other compared LLMs including New Bing (0.44).Overall, GeneGPT achieves new SOTA performance on all 2 one-shot tasks and 6 out of 7 zero-shot tasks and is outperformed by New Bing only on the gene disease association task.
Discussions
We have shown that GeneGPT largely surpasses various LLMs on the GeneTuring benchmark.In this section, we further characterize GeneGPT by studying three research questions (RQ): RQ1: What is the importance of each prompt component in GeneGPT?
RQ2: Can GeneGPT answer multi-hop questions by chain-of-thought API calls?
RQ3: What types of errors does GeneGPT make on each studied task?
For ablation tests, we remove each component from GeneGPT-full and then evaluate the prompt.The results are shown in Figure 2 (left).Notably, the performance on the DNA to genome and species alignment tasks is only significantly decreased without the BLAST demonstration (Dm.4), but not affected by the ablation of the BLAST documentation (Dc.2).While the ablations of other components decrease the performance, most only affect one relevant task (e.g., Dm.1 and gene name conversion), which indicates a high level of redundancy of the prompt components.
For the probing experiments, we evaluate GeneGPT with only one prompt component to study the individual capability.The results are shown in Figure 2 (right).Overall, GeneGPT with only one documentation (Dc.1 or Dc.2) fails on all tasks.Surprisingly, with only one demonstration of the gene alias task (Dm.1) in the prompt, GeneGPT is able to perform comparably to GeneGPT-full on all tasks except the alignment ones.On the other hand, GeneGPT with only the BLAST demonstration (Dm.4) performs well on the two alignment tasks, which is somehow expected.These results suggest that GeneGPT with only two demonstrations (Dm.1 and Dm.4) in the prompt can general-ize to all tasks in the GeneTuring benchmark.We denote this as GeneGPT-slim, and results in Table 2 show that with only two demonstrations, it outperforms the GeneGPT-full and achieves stateof-the-art overall results on GeneTuring.
RQ2: Multi-hop QA on GeneHop
Questions in the GeneTuring benchmark are singlehop and just require one step of reasoning, e.g., "Which gene is SNP rs983419152 associated with?".However, many real-world biomedical questions are multi-hop that need more steps to answer (Jin et al., 2022).For example, to answer "What is the function of the gene associated with SNP rs983419152?", the model should first get the associated gene name and then find its functions.To test GeneGPT's capability of answering multi-hop questions, we present GeneHop, a novel dataset that contains three new multi-hop QA tasks based on the GeneTuring benchmark: (a) SNP gene function, which asks for the function of the gene associated with a given SNP.(b) Disease gene location, where the task is to list the chromosome locations of the genes associated with a given disease.(c) Sequence gene alias, which asks for the aliases of the gene that contains a specific DNA sequence.Each task in GeneHop contains 50 questions, and the collection pipeline is detailed in Appendix C. For all tasks, we append the chain-ofthought instruction "Let's decompose the question to sub-questions and solve them step by step."after the test question (Wei et al., 2022b).Figure 3 shows an example of GeneGPT to answer Task (a).In this case, GeneGPT successfully decomposes the multi-hop question into two sub-questions, and the sub-question 2 is based on the answer of the sub-question 1.Interestingly, GeneGPT uses a shortcut to answer sub-question 2: instead of first calling esearch and then calling esummary, GeneGPT finds the gene id in the API call results of sub-question 1 and directly calls esummary.This capability is not shown in the prompt but elicited by chain-of-thought API calls.retmax=20&retmode=json&id=64877,1377, 7758276,100818277,100166185,10948718, 10849362,9700326,3180310,3180309, 1192676,1371,50945,10682,12892,1037, 4336951,100216262,380161,108700304] ->[ API call results ] Answer: Xq21.1 (correct) Figure 4 shows another example of GeneGPT answering Task (b), where GeneGPT successfully decomposes the multi-hop question and correctly calls the required APIs.Notably, the answering chain involves 3 sub-questions and 4 API calls, which are longer than all in-context demonstrations (1 single-hop question and 2 API calls at most).This ability to generalize to longer chains of thought is an important aspect of GeneGPT's flexi-bility and usefulness for real-world applications.We manually evaluate the results predicted by GeneGPT and compare it to the new Bing, which is the only baseline LLM that performs well on the single-hop GeneTuring benchmark due to its retrieval augmentation feature.The evaluation criteria are described in Appendix D. As shown in Table 3, while the new Bing outperforms GeneGPT on the disease gene location task, it is mostly using webpages that contain both the disease and location information without multi-hop reasoning.The new Bing fails to perform the other 2 tasks since the input information (SNP or sequence) is not indexed by Bing and can only be found in specialized databases.GeneGPT, on the other hand, performs moderately well on all 3 tasks, and achieves a much higher average score (0.50 v.s.0.24).
RQ3: Error analysis
We manually study all errors made by GeneGPT and classify them into five types.Table 4 shows the count of each error type on the evaluate tasks: E1: using the wrong API or not using APIs, e.g., using the gene instead of the omin database for diseases; E2: using the right API but wrong arguments, e.g., passing terms to id; E3: not extracting the answer in the API result, most commonly seen in gene function extraction; E4: right API call but results do not contain the answer, where the question is not answerable with NCBI databases; and O includes other unclassified errors.Specific error examples are shown in Appendix E.
Our results suggest that different tasks have specific and enriched error types: simple tasks (alias and location) fail mostly because of E4; E1 only happens in disease-related tasks; alignment tasks face more issues with BLAST interfaces and reference genomes (O); multi-hop tasks in GeneHop tend to have E2 and E3 in the reasoning chains.
Related work
Large language models: Recent studies have shown that scaling pre-trained LMs leads to performance improvement and potentially emergent abilities on various NLP tasks (Brown et al., 2020;Kaplan et al., 2020;Wei et al., 2022a;Chowdhery et al., 2022;OpenAI, 2023).However, such auto-regressive LLMs are still susceptible to hallucinations and generate erroneous content (Ji et al., 2023).Augmenting LLMs with external tools is a possible solution to this issue (Mialon et al., 2023).
Biomedical question answering:
It is an essential step in clinical decision support (Ely et al., 2005) and biomedical knowledge acquisition (Jin et al., 2022).LLMs have been successfully applied to various biomedical QA tasks that are knowledgeor reasoning-intensive (Singhal et al., 2022;Liévin et al., 2022;Nori et al., 2023).However, autoregressive LLMs fail to perform data-intensive tasks which require the model to precisely store and recite database entries, such as the GeneTuring benchmark (Hou and Ji, 2023).Retrieval augmentation also falls short since specialized databases are usually not indexed by commercial search engines.GeneGPT solves this task by tool augmentation.
Conclusions
We present GeneGPT, a novel method that teaches LLMs to use NCBI Web APIs.It achieves SOTA performance on 8 GeneTuring tasks and can perform chain-of-thought API calls.Our results indicate that database utility tools might be superior to relevant web pages for augmenting LLMs to faithfully serve various biomedical information needs.
C GeneHop collection
The GeneHop dataset contains three multi-hop tasks: SNP gene function, disease gene location, and sequence gene alias.We describe the collection of these tasks in this section.Table 6 shows several question-answer samples from the GeneHop dataset.
SNP gene function:
The question template for this task is "What is the function of the gene associated with SNP {snp}?Let's decompose the question to sub-questions and solve them step by step.".We re-use the 50 {snp} from the gene SNP association task in the original GeneTuring benchmark.The ground-truth answer of the gene function is manually annotated: For each SNP, we first get its corresponding gene from the annotations of the gene SNP association task.We then check the gene information page8 and select its functional summary as the ground-truth answer.
Figure 1 :
Figure 1: Left: GeneGPT uses NCBI Web API documentations and demonstrations in the prompt for in-context learning.Right: Examples of GeneGPT answering GeneTuring and GeneHop questions with NCBI Web APIs.
Figure 2 :
Figure 2: Performance changes of the ablation (left) and probing (right) experiments as compared to GeneGPTfull.
Figure 3 :
Figure 3: GeneGPT uses chain-of-thought API calls to answer a multi-hop question in GeneHop.
Figure 4 :
Figure 4: GeneGPT uses chain-of-thought API calls to answer a multi-hop question in GeneHop.
Figure 5 :
Figure 5: Documentation 1 (Dc.1) of the GeneGPT prompt.Dc.1 describes the functionality, general syntax, and argument choices of the NCBI E-utils API.
Figure 8 :
Figure 8: Demonstration 2 (Dm.2) of the GeneGPT prompt.The instance is chosen from the gene SNP association task in the GeneTuring benchmark.Links are actually called Web API URLs.Readers can directly click the link and get the API call result, which is inserted in the prompt.
Table 2 :
Performance of GeneGPT compared to other LLMs on the GeneTuring benchmark.*One-shot learning for GeneGPT.Bolded and underlined numbers denote the highest and second-highest performance, respectively.
GeneHop question (Disease gene location):List chromosome locations of the genes related to Cleft palate with ankyloglossia.Let's decompose the question to sub-questions and solve them step by step.
Table 3 :
Performance of multi-hop QA on GeneHop.We only compare GeneGPT with New Bing since other LLMs cannot even answer single-hop questions well.
Table 4 :
Counts of GeneGPT errors on different tasks.
Documentation 1 (Dc. 1) You can call Eutils by: "[https://eutils.ncbi.nlm.nih.gov/entrez/eutils/{esearch|efetch|esummary}.fcgi?db={gene|snp|omim}&retmax={}&{term|id}={term|id}]".esearch: input is a search term and output is database id(s).efectch/esummary: input is database id(s) and output is full records or summaries that contain name, chromosome location, and other information.Normally, you need to first call esearch to get the database id(s) of the search term, and then call efectch/esummary to get the information with the database id(s).Database: gene is for genes, snp is for SNPs, and omim is for genetic diseases.
BLAST maps a specific DNA sequence to its chromosome location among different species.You need to first PUT the BLAST request and then GET the results using the RID returned by PUT. Figure 6: Documentation 2 (Dc.2) of the GeneGPT prompt.Dc.2 describes the functionality, general syntax, and argument choices of the BLAST API.
Question: Which gene is SNP rs1217074595 associated with?[https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=snp&retmax=10& Figure 7: Demonstration 1 (Dm.1) of the GeneGPT prompt.The instance is chosen from the gene alias task in the GeneTuring benchmark.Links are actually called Web API URLs.Readers can directly click the link and get the API call result, which is inserted in the prompt. | 5,086.2 | 2023-04-19T00:00:00.000 | [
"Computer Science"
] |
INFLUENCE OF DIFFERENT PARAMETERS ON CRITICAL STRESSES IN CONCRETE PAVEMENT
Concrete pavements are largely constructed and promoted by government in recent years in India. Concrete pavements are preferred over bituminous pavements due to its low life cycle cost, durability and low maintenance. Design of plain jointed concrete pavement depends on the stress ratio (i.e. ratio of flexural stress to flexural strength). Flexural stress calculation is influenced by many parameters like stiffness of soil subgrade, vehicle axle loads, environmental loads, location of load placement, load arrangement, tyre pressure, material properties of concrete, design period, repetition of loads and slab size. It is very important to estimate errorless response of pavement for given practical conditions. A study has been carried out on important parameters that affects the design of plain jointed concrete pavement. Study shows that finite element analysis should be done to obtain the critical flexural stresses as many widely used guidelines are based on many assumptions.
Introduction
Rigid pavements are mainly subjected to vehicle load and temperature load. Pavement response vary primarily with the effective modulus of subgrade reaction, location, panel size, material properties, slab thickness and axle load configurations. [1] Research Scholar, Department of Applied Mechanics, Visvesvaraya National Institute of Technology, Nagpur, India E-mail<EMAIL_ADDRESS>[2] Professor, Department of Applied Mechanics, Visvesvaraya National Institute of Technology, Nagpur, India. E-mail<EMAIL_ADDRESS>Other parameters have less influence on critical stresses as studied below.
Effect of Loading and Geometric Configurations of Axle Loads
Tyre pressure does not vary a lot, for most commercial highway vehicles tyre inflation pressures ranges from about 0.7 MPa to 1.0 MPa [1]. For the concrete slabs with the thickness higher than 200 mm are not affected significantly by the variation of tyre pressure [1].
Effect of Stiffness of Soil Subgrade
Pavements are supported on foundation soil/subbase and hence it is necessary to maintain uniform and good subgrade. In India concrete slab are mostly supported on dry lean concrete (DLC) layer.
It was observed by Roesler et al. [5] that for thin slab pavements, lower subgrade stiffness lead to large deformations. However, the effect of subgrade stiffness has minor influence on the slabs with thicker sections. Generally most of the concrete pavements are provided with thickness below 350 mm, and response of these pavements greatly affected by the K-value of the soil foundation. Finite Element (FE) analysis of concrete slab without shoulder and with shoulder shows that for vehicle axle loads, critical edge stresses decreases with the increase in the effective modulus of subgrade reaction.
Regions where temperature loads are not critical and not considered in the design, subgrade with good stiffness function better. FE model considered for the analysis is shown in Fig. 1. Analysis of plain concrete pavement is done using 2D area element. Four noded shell element is used to model concrete slab that combines separate membrane and plate-bending behavior. Each node has 6 degrees of freedom at connected joint that is, translation and rotation in x, y and z direction. To achieve best results, aspect ratio of element is maintained by refining mesh size near to unity. Effective subgrade stiffness is assigned at each node as per the mesh area using Winkler model theory.
Results obtained from SAP 2000 (FE software) are validated with simplified approach [2] as well as PCA document [4]. Validation of the FE model with simplified approach [2] for single, tandem and tridem axle loads are shown in Table 2.
Most of the documents consider uniform soil subgrade stiffness for the calculation of critical edge stress in concrete pavement [1,3,4,6]. Few test using light weight deflectometer are conducted and found that stiffness of soil subgrade varies along width as well as length of the pavement. Variation in soil stiffness might be due to many reasons like compaction difference, variable soil density and moisture content. It is found that when temperature difference is applied along with the axle load on non uniform soil subgrade, it develops higher critical stresses than the stresses develop for uniform soil with lowest or largest subgrade stiffness.
Effect of Material Properties
Modulus of elasticity of concrete changes with grade of concrete. Concrete with higher characteristic compressive strength also shows higher modulus of elasticity [7,8]
Effect of Axle Load Placement
Edge location in the slab is considered critical in the design of concrete slab. Axle loads may be placed at the extreme edge location of slab, it will give higher edge stresses due to stress concentration below load. However, PCA [4] suggest axle loads are placed at 4 inches (10.2 cm) away from the edge. IRC 58 [1] does not suggest any offset for placement of wheel on edge. It can be seen from Fig. 2 and Fig. 3 that load is placed with an offset as specified by PCA [4] and without any offset. Analysis is carried out for few cases and it was found that there is large variation in critical edge stress can be seen. Table 3 shows comparison of critical stresses for single axle placed with and without offset. Results are also obtained by Westergaurd's modified stress equation for edge loading. As Westergaurd's and IRC 58 [1,3] does not mention about an offset for loading, results are considered without any offset. From Table 3. It is clear that Westergard's results are on highest among all methods considered. PCA [4] results are close to the results with 100 mm offset.
Effect of Slab Size
Plain jointed concrete pavement always constructed continuous and then saw cut joints are made by using saw cutting machine to form aggregate interlock joint. These joints are cut to 1/3rd to 1/4th depth from top of the slab. After application of repeated loads, this develops a gradually increasing crack. Slab is devided into panels of rectangular shape and devided by saw cut joints.
Widely used guidelines [1,3,4] use fixed size of 3.5 m x 4.5 m. Practically it is found that slab sizes are selected by the designers. When panels with different size are analysed, it is found that critical edge stress shows large variation in critical edge stresses.
Bradbury's temperature stress coefficients depend on length and width of the slab. Guidelines consider size effect while determining critical edge stress for temperature difference but not when subjected to axle load. Slabs with size more than 3.5 m x 4.5 m provides stresses on higher side and hence can be considered for the design of pavement for most of the cases. In few L/W ratios with specific radius of relative stiffness show higher stresses in shorter slabs than standard (3.5 m x 4.5 m).
Load transfer mechanism is also an important factor which will affect the design of concrete pavement. Critical stresses developed for jointed plain concrete pavement with tied concrete shoulder are less than that develop for slab without shoulder. Load transfer at the joint is better when slab is provided with shoulder. However, this parameter is not included in this study.
It is important to account all these design parameters while designing the concrete pavement when the parameters considered does not meet the guidelines assumption.
Conclusions
Following are the broad conclusions from the study of different design parameters and its effect on critical stresses in plain jointed plain concrete pavement.
l For the regions where temperature loads are considered along with axle loads on non-uniform soil subgrade, it produces higher critical stresses than the stresses develop for uniform soil with lowest or largest subgrade stiffness.
l Axle load placement at edge without any offset creates large amount of stresses than the loads placed with some offset.
l Slab size affects critical stresses even if temperature stresses are neglected. It is general assumption that shorter slabs have smaller critical stress value than that of standard size. However, from the study it is observed that for some cases shorter slabs may also develop large amount of critical stress. | 1,824.2 | 2019-01-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Lung Cancer Chemopreventive Activity of Patulin Isolated from Penicillium vulpinum
Lung cancer is the most lethal form of cancer in the world. Its development often involves an overactivation of the nuclear factor kappa B (NF-κB) pathway, leading to increased cell proliferation, survival, mobility, and a decrease in apoptosis. Therefore, NF-κB inhibitors are actively sought after for both cancer chemoprevention and therapy, and fungi represent an interesting unexplored reservoir for such molecules. The aim of the present work was to find naturally occurring lung cancer chemopreventive compounds by investigating the metabolites of Penicillium vulpinum, a fungus that grows naturally on dung. Penicillium vulpinum was cultivated in Potato Dextrose Broth and extracted with ethyl acetate. Bioassay-guided fractionation of this extract was performed by measuring NF-κB activity using a HEK293 cell line transfected with an NF-κB-driven luciferase reporter gene. The mycotoxin patulin was identified as a nanomolar inhibitor of TNF-α-induced NF-κB activity. Immunocytochemistry and Western blot analyses revealed that its mechanism of action involved an inhibition of p65 nuclear translocation and was independent from the NF-κB inhibitor α (IκBα) degradation process. Enhancing its interest in lung cancer chemoprevention, patulin also exhibited antiproliferative, proapoptotic, and antimigration effects on human lung adenocarcinoma cells through inhibition of the Wnt pathway.
Introduction
Lung cancer is the leading cause of cancer death worldwide, with 1.69 million deaths in 2015. The high mortality rates are mostly due to late diagnosis, mainly occurring at metastatic stages. Tobacco smoke represents by far the main risk factor for lung cancer and directly accounted for 1.175 million deaths by respiratory tract cancers in 2015 [1]. Among the numerous other environmental risk factors for this cancer, the most important are exposure to ambient particulate matter air pollution (283,000 deaths), asbestos (155,000 deaths), and household air pollution from solid fuels (149,000 deaths). These data enlighten the preventability of this disease, the first lung cancer preventive measure being tobacco control. Despite important efforts in this domain, notably illustrated by the WHO Framework Convention on Tobacco Control and their impact on global smoking prevalence [2], the total number of deaths continues to rise because of increasing population numbers and ageing [1]. Therefore, developing both early diagnosis and lung cancer chemoprevention strategies would complement tobacco control efforts and help reduce lung cancer mortality.
Inflammation is a physiological response of the innate immune system to infection and tissue injury. It aims to deliver and activate leucocytes to the site of interest where they will be in charge
Patulin Isolation through Bioassay-Guided Fractionation
The screening of several extracts against NF-κB inhibition identified the crude EtOAc extract of P. vulpinum, which inhibited TNF-α-induced NF-κB activity by 99% at 20 µg/mL, and was therefore selected for bioguided fractionation. Among the many fractions that substantially inhibited NF-κB, F9 was selected based on activity, available amounts, and simplicity of the chromatographic profile. The major compound of this fraction was further purified and identified as patulin by comparison of its spectral data (High-Resolution Mass Spectrometry (HRMS) and NMR) to those reported in the literature [10]. The IC 50 value for NF-κB inhibition of patulin was 0.25 µM (Figure 1 and Figure S1), making worthy of investigation its mechanism of action and its effect on other important aspects of lung carcinogenesis, such as cell viability and migration. These further investigations were performed using a human lung adenocarcinoma model (A549 cell line), which is the most frequent form of lung cancers where induction of NF-κB is thought to contribute to tumor aggressiveness [11]. Fractions were tested for their ability to inhibit TNF-α-induced NF-κB activity in HEK293 cells at 20 μg/mL. Fractions able to inhibit more than 50% of NF-κB activity were considered active. F9 was the most interesting in terms of activity and available amount. Patulin was isolated from this fraction and inhibited NF-κB with an IC50 value of 0.25 ± 0.05 μM.
Patulin Inhibited NF-κB p65 Nuclear Translocation
To better understand how patulin affected the NF-κB pathway, its effect on TNF-α-induced p65 nuclear translocation was investigated in A549 cells using immunocytochemistry, with a concentration of patulin (1.5 μM) that inhibited roughly 90% of NF-κB activity in HEK cells , which was selected to allow a clear visualisation of the inhibitory effect. Patulin suppressed TNF-α-induced p65 nuclear translocation (Figure 2a), indicating that its target was a cytoplasmic step of the NF-κB activation pathway. Having ruled out a nuclear mechanism of action, investigations were focused on IKK, the most commonly described cytoplasmic target for NF-κB inhibitors. The effect of patulin treatment on TNF-α-induced IκBα phosphorylation was therefore evaluated using Western blots. Patulin had no impact on IκBα phosphorylation (Figure 2b), indicating that its action was independent from IKK activity. The target of patulin in the NF-κB activation cascade should therefore happen between IκBα phosphorylation and p65 nuclear translocation. The last important steps that needed to be investigated were IκBα ubiquitination and proteasomal degradation. Patulin did not impact TNF-α-induced IκBα degradation (Figure 2c), pointing towards nuclear translocation as the most probable target of patulin.
Patulin Triggered Apoptotic Cell Death in A549 Cells
Considering the implication of the NF-κB pathway in cancer cell proliferation and survival, the cytotoxicity of patulin was measured in A549 cells using the well-established sulforhodamine B (SRB) assay. Patulin exhibited dose-dependent cytotoxicity in those cells with an IC50 of 8.8 μM ( Figure S2). A significant dose-dependent increase in the percentage of annexin V-marked cells after patulin treatment pointed towards apoptosis as a possible mechanism involved in the observed cell death ( Figure 3). Fractions were tested for their ability to inhibit TNF-α-induced NF-κB activity in HEK293 cells at 20 µg/mL. Fractions able to inhibit more than 50% of NF-κB activity were considered active. F9 was the most interesting in terms of activity and available amount. Patulin was isolated from this fraction and inhibited NF-κB with an IC 50 value of 0.25 ± 0.05 µM.
Patulin Inhibited NF-κB p65 Nuclear Translocation
To better understand how patulin affected the NF-κB pathway, its effect on TNF-α-induced p65 nuclear translocation was investigated in A549 cells using immunocytochemistry, with a concentration of patulin (1.5 µM) that inhibited roughly 90% of NF-κB activity in HEK cells , which was selected to allow a clear visualisation of the inhibitory effect. Patulin suppressed TNF-α-induced p65 nuclear translocation (Figure 2a), indicating that its target was a cytoplasmic step of the NF-κB activation pathway. Having ruled out a nuclear mechanism of action, investigations were focused on IKK, the most commonly described cytoplasmic target for NF-κB inhibitors. The effect of patulin treatment on TNF-α-induced IκBα phosphorylation was therefore evaluated using Western blots. Patulin had no impact on IκBα phosphorylation (Figure 2b), indicating that its action was independent from IKK activity. The target of patulin in the NF-κB activation cascade should therefore happen between IκBα phosphorylation and p65 nuclear translocation. The last important steps that needed to be investigated were IκBα ubiquitination and proteasomal degradation. Patulin did not impact TNF-α-induced IκBα degradation (Figure 2c), pointing towards nuclear translocation as the most probable target of patulin.
Patulin Triggered Apoptotic Cell Death in A549 Cells
Considering the implication of the NF-κB pathway in cancer cell proliferation and survival, the cytotoxicity of patulin was measured in A549 cells using the well-established sulforhodamine B (SRB) assay. Patulin exhibited dose-dependent cytotoxicity in those cells with an IC 50 of 8.8 µM ( Figure S2). A significant dose-dependent increase in the percentage of annexin V-marked cells after patulin treatment pointed towards apoptosis as a possible mechanism involved in the observed cell death ( Figure 3).
Patulin Inhibited A549 Cell Migration
Cell migration represents another important aspect of carcinogenesis regulated by the NF-κB pathway. The effect of patulin on cell migration was assessed through the wound healing (scratch) assay. Patulin dose-dependently inhibited cell migration, with an estimated IC 50 below 5.5 µM ( Figure 4). Time-lapse videos were recorded for 24 h to make sure that the free space recovery was due to migration inhibition and not an anti-proliferative effect (Video S1 and S2).
Patulin Inhibited A549 Cell Migration
Cell migration represents another important aspect of carcinogenesis regulated by the NF-κB pathway. The effect of patulin on cell migration was assessed through the wound healing (scratch) assay. Patulin dose-dependently inhibited cell migration, with an estimated IC50 below 5.5 μM (Figure 4). Time-lapse videos were recorded for 24 h to make sure that the free space recovery was due to migration inhibition and not an anti-proliferative effect (Video S1 and S2).
Patulin Inhibited the Wnt Pathway
To better understand which other pathways, beside NF-κB, could be involved in the proapoptotic and antimigration activity of patulin, the expressions of several genes involved in both mechanisms were measured using quantitative real time PCR in A549 cells 24 h after treatment with increasing doses of patulin. The expression of Wnt inhibitory factor-1 (WIF-1) and Dickkopf-related protein 3 (Dkk-3) ( Figure 5), two endogenous inhibitors of the Wnt pathway, which are frequently downregulated in lung cancer, were upregulated. This correlated with the downregulation of Cyclin D1 expression, a target gene regulated by the Wnt pathway.
Patulin Inhibited the Wnt Pathway
To better understand which other pathways, beside NF-κB, could be involved in the proapoptotic and antimigration activity of patulin, the expressions of several genes involved in both mechanisms were measured using quantitative real time PCR in A549 cells 24 h after treatment with increasing doses of patulin. The expression of Wnt inhibitory factor-1 (WIF-1) and Dickkopf-related protein 3 (Dkk-3) ( Figure 5), two endogenous inhibitors of the Wnt pathway, which are frequently downregulated in lung cancer, were upregulated. This correlated with the downregulation of Cyclin D1 expression, a target gene regulated by the Wnt pathway.
Discussion
Patulin is a well-known mycotoxin produced by several fungi of the Aspergillus and Penicillium genera [12]. It is considered as a contaminant in apples and apple-derived products. Beside the numerous articles regarding the food safety concern it represents, recent work reported that patulin also possessed potent anticancer activity through apoptosis induction in cancer cell lines and in one in vivo model of melanoma cells-bearing mice [13,14]. The present study shed light on its lung cancer chemoprevention properties by studying its inhibitory activity on TNF-α-induced NF-κB activity, its proapoptotic and antimigration activity on A549 cells, as well as its ability to inhibit the Wnt pathway.
The NF-κB pathway is a complex cascade involving several steps that can each be the target of inhibitors. The majority of natural inhibitors act through the inhibition of IKK, NF-κB DNA binding, or IκBα proteasomal degradation [9]. In the present study, observations of p65 translocation inhibition by patulin led the list of potential targets being restricted to the cytoplasmic steps of the pathway. Absence of changes in both TNF-α-induced IκBα phosphorylation and degradation indicated that patulin's probable mechanism of action was a direct inhibition of p65 translocation. To our knowledge, this work presents the first report of the inhibitory activity of patulin against TNFα-induced NF-κB activity. Interestingly, Tsai et al. recently described NF-κB inhibitory properties of patulin that seemed specific to LPS induction [15]. Consistent with the present work, they observed no effect on TNF-α-induced IκBα degradation after patulin treatment. This was, however, interpreted as an inactivity of patulin on TNF-α-induced NF-κB activity, a hypothesis that should be reformulated in regard to the present observations. In the search for NF-κB inhibitors, IKK represented the main target of the pharmaceutical industry, owing to the expected specificity of its inhibitors as compared with proteasome or ubiquitination inhibitors [6]. However, recent findings such as IκBα-independent NF-κB activation through NFKBIA (the gene coding for IκBα) deletion in most glioblastomas [16] or resistance to proteasome inhibitors in in vivo models [17] highlight the potential interest of inhibitors that act downstream of the IkB phosphorylation/degradation process. Indeed, IKK and proteasome inhibitors would become ineffective in the case of a defective IκB production. Therefore, inhibitors of p65 nuclear translocation, such as patulin, could present a particular interest by being both specific and less prone to chemoresistance.
In addition to its NF-κB inhibitory properties, patulin showed antiproliferative, proapoptotic and antimigration activities. Each of these activities could at least be partly explained by patulin's ability to inhibit the Wnt pathway. Overactivation of the Wnt pathway through downregulation of its inhibitors is indeed common in lung cancer and has been associated with poor prognosis, chemoresistance and metastasis [18,19]. Consistently, inhibition of this pathway, for example through restoration of endogenous inhibitors such as WIF-1 and Dkk-3 resulted in tumor regression, increased apoptosis and reduced cell motility both in vitro and in vivo [18,20,21]. Interestingly, cigarette smoke extract was shown to induce the Wnt pathway in normal human bronchial epithelial cells, contributing
Discussion
Patulin is a well-known mycotoxin produced by several fungi of the Aspergillus and Penicillium genera [12]. It is considered as a contaminant in apples and apple-derived products. Beside the numerous articles regarding the food safety concern it represents, recent work reported that patulin also possessed potent anticancer activity through apoptosis induction in cancer cell lines and in one in vivo model of melanoma cells-bearing mice [13,14]. The present study shed light on its lung cancer chemoprevention properties by studying its inhibitory activity on TNF-α-induced NF-κB activity, its proapoptotic and antimigration activity on A549 cells, as well as its ability to inhibit the Wnt pathway.
The NF-κB pathway is a complex cascade involving several steps that can each be the target of inhibitors. The majority of natural inhibitors act through the inhibition of IKK, NF-κB DNA binding, or IκBα proteasomal degradation [9]. In the present study, observations of p65 translocation inhibition by patulin led the list of potential targets being restricted to the cytoplasmic steps of the pathway. Absence of changes in both TNF-α-induced IκBα phosphorylation and degradation indicated that patulin's probable mechanism of action was a direct inhibition of p65 translocation. To our knowledge, this work presents the first report of the inhibitory activity of patulin against TNF-α-induced NF-κB activity. Interestingly, Tsai et al. recently described NF-κB inhibitory properties of patulin that seemed specific to LPS induction [15]. Consistent with the present work, they observed no effect on TNF-α-induced IκBα degradation after patulin treatment. This was, however, interpreted as an inactivity of patulin on TNF-α-induced NF-κB activity, a hypothesis that should be reformulated in regard to the present observations. In the search for NF-κB inhibitors, IKK represented the main target of the pharmaceutical industry, owing to the expected specificity of its inhibitors as compared with proteasome or ubiquitination inhibitors [6]. However, recent findings such as IκBα-independent NF-κB activation through NFKBIA (the gene coding for IκBα) deletion in most glioblastomas [16] or resistance to proteasome inhibitors in in vivo models [17] highlight the potential interest of inhibitors that act downstream of the IkB phosphorylation/degradation process. Indeed, IKK and proteasome inhibitors would become ineffective in the case of a defective IκB production. Therefore, inhibitors of p65 nuclear translocation, such as patulin, could present a particular interest by being both specific and less prone to chemoresistance.
In addition to its NF-κB inhibitory properties, patulin showed antiproliferative, proapoptotic and antimigration activities. Each of these activities could at least be partly explained by patulin's ability to inhibit the Wnt pathway. Overactivation of the Wnt pathway through downregulation of its inhibitors is indeed common in lung cancer and has been associated with poor prognosis, chemoresistance and metastasis [18,19]. Consistently, inhibition of this pathway, for example through restoration of endogenous inhibitors such as WIF-1 and Dkk-3 resulted in tumor regression, increased apoptosis and reduced cell motility both in vitro and in vivo [18,20,21]. Interestingly, cigarette smoke extract was shown to induce the Wnt pathway in normal human bronchial epithelial cells, contributing to carcinogenesis through the upregulation of target genes such as Cyclin D1 [22]. Therefore, inhibitors of the Wnt pathway are particularly interesting for lung cancer chemoprevention and therapy, and clinical trials are being conducted for this purpose [23,24].
Altogether, these data support the interest for patulin in lung cancer chemoprevention and therapy, and should encourage the realization of further in vivo experiments, using, for example, chemically-induced lung carcinogenesis models closely mimicking tobacco-induced lung cancer in humans. Although previous in vivo experiments have not reported side effects upon treatment with active doses of patulin [13], one should remain aware of its toxicity [25] and adapt the doses consequently. Further investigations on patulin derivatives (e.g., precursors) could lead to the discovery of interesting active compounds with less toxicity.
Fungal Culture
The strain of Penicillium vulpinum (Cooke and Massee) Seifert and Samson was isolated in May 1999 in Lausanne (VD, Switzerland) on mature tomatoes stored at 4 • C and authenticated by molecular sequencing of the ITS region. The strain is maintained and stored (n • 932) in the Mycoscope dynamic mycotheca of Agroscope (www.mycoscope.bcis.ch) in Potato Dextrose Broth (PDB). P. vulpinum was cultivated in 74 glass bottles each containing 100 mL of PDB media supplemented with 100 µM suberoyl anilide hydroxamic acid (SAHA, Sigma-Aldrich, Saint-Louis, MO, USA). P. vulpinum was allowed to grow in static mode, for 15 days with artificial day/night alternation (12 h each) at 22 • C.
Extraction and Isolation of Patulin
The mycelial mat was filtered on Büchner and the culture media was extracted by liquid/liquid partition with equal volume of EtOAc. The extraction process was repeated three times. Reduced pressure evaporation of the EtOAc fraction afforded 6.9 g of crude extract (brown liquid gum). Vacuum Liquid Chromatography (VLC) filtration on 70 g of C18 silica and elution with methanol (MeOH):water (H 2 O) (90:10) afforded 6.5 g of extract. This extract was fractionated by flash chromatography on a Puriflash system (Interchim, Montluçon, France). Two 120 g columns of C18 were connected and the crude extract was dry-loaded using celite. A linear gradient of H 2 O + formic acid (FA) 0.1% and acetonitrile (ACN) + FA 0.1% from 98:2 to 2:98 was applied at 14 mL/min for 200 min and 264 fractions were collected. An aliquot (200 µL) of each fraction was sampled and plated into 96-well plates for analysis. The rest of the fractions were dried using a centrifugal evaporator (Genevac HT-4, SP Scientific, Gardiner, NY, USA). After UHPLC-MS/UV/ELSD analysis, fractions were further pooled into 41 fractions (F1-F41) and tested against NF-κB. F9 was selected based on bioactivity profile. F9 (130 mg) was loaded on an Armen LC system using a preparative Kinetex Axia Core-Shell C18 column (5 µm, 250 × 21.2 mm; Phenomenex, Torrance, CA, USA) with elution in isocratic mode (H 2 O + 0.1% FA:MeOH + 0.1% FA) at 20 mL/min and afforded 56.2 mg of a compound, which was identified as patulin (purity > 95%) based on comparison of its experimental NMR and HRMS spectra with the literature [10].
NMR and HRMS Measurements
The NMR spectroscopic data were recorded on a Bruker Avance III HD 600 MHz NMR spectrometer equipped with a QCI 5 mm Cryoprobe and a SampleJet automated sample changer (Bruker BioSpin, Rheinstetten, Germany). The NMR spectra were recorded in DMSO-d6. High-resolution mass spectrometric date were recorded on a Q-Exactive Plus mass spectrometer (Thermo Fisher scientific, Waltham, MA, USA) interfaced to a Thermo Dionex Ultimate 3000 UHPLC system, using a heated electrospray ionization (HESI-II) source. Full scans were acquired at a resolution of 35,000 FWHM (at m/z 200) and MS/MS scans at 17,500 FWHM both with a maximum injection time of 50 ms.
Measure of NF-κB Activity
NF-κB inhibitory activity was assessed using a HEK293/NF-κB-luc cell line as previously described [26]. Briefly, cells were incubated 1 h in FBS-free medium with 2.5 µM of Cell Tracker Green CMFDA (Thermo Fisher Scientific), a fluorescent dye used to measure cell viability, and seeded in 96-well plates (10 4 cells/well). After an overnight incubation, cells were treated with patulin or vehicle only (0.5% DMSO in culture medium) and stimulated with TNF-α (20 ng/mL) (Sigma-Aldrich) for 5 h. Then, the cells were lysed with reporter lysis buffer (Promega, Madison, WI, USA) and both fluorescence of the Cell Tracker Green CMFDA and luminescence of the firefly luciferase were read on a Cytation 3 imaging multimode reader (Biotek, Winooski, VT, USA). The luminescence signal was normalized by the fluorescence signal for each well, and relative NF-κB activity was quantified by comparing the normalized luminescence signal of sample-treated cells with the vehicle-treated cells. Nonlinear regression (with sigmoidal dose response) was used to calculate the IC 50 values using GraphPad Prism 6.05. Each compound was tested in duplicate and three independent experiments were performed. Parthenolide (Tocris Bioscience, Bristol, UK) was used as a positive control. Extracts and fractions were screened at 20 µg/mL. Patulin dose-response curve started at 10 µM.
Immunocytochemistry
A549 cells were seeded in clear-bottom black 96-well plates (Corning, New York, NY, USA) (10 4 cells/well). After overnight incubation, cells were treated with either a vehicle control (0.5% DMSO in culture medium), TNF-α only (20 ng/mL), or 1.5 µM patulin + TNF-α (20 ng/mL) for 20 min. Cells were then rinsed with DPBS (Thermo Fisher scientific), fixed with 4% paraformaldehyde for 10 min, and permeabilized 5 min with a 0.1% Triton X-100 DPBS solution (DPBST). Blocking was then performed for 30 min with 1% BSA in DPBST, and cells were incubated overnight at 4 • C with the rabbit anti-p65 antibody (Cell Signaling Technology, Danvers, MA, USA). After rinsing three times with DPBS, cells were incubated for 1 h at room temperature in the dark with an anti-rabbit antibody (Cell Signaling Technology), and counterstained with 0.1 µg/mL DAPI for 1 min. After rinsing three times in DPBS, fluorescent pictures were taken on the Cytation 3 imaging multimode reader (Biotek). NF-κB p65 nuclear translocation was quantified by measuring the fluorescence intensity of the secondary antibody in the nuclear zones defined by DAPI staining, using the Gen5 software 3.0 (Biotek).
Western Blot Analysis
A549 cells were seeded in 10 mm Petri dishes (2.2 × 10 6 cells/dish) and grown for 48 h until 80-90% confluency. Cells were then treated with either vehicle control (0.5% DMSO in culture medium), TNF-α only (20 ng/mL), or 1.5 µM patulin + TNF-α (20 ng/mL), during 5 min for observation of IκBα phosphorylation and 15 min for IκBα degradation. Cells were then rinsed, harvested with TryplE Express (Thermo Fisher Scientific), and cytoplasmic extraction was performed using the NE-PER extraction kit (Thermo Fisher Scientific). Protein concentration of each cytoplasmic extract was determined on a Qubit 3.0 fluorometer (Thermo Fisher Scientific). After protein denaturation, 20 µg of proteins were loaded on 12.5% bis-acrylamide gels, and electrophoresis was performed at 100 V. Proteins were then transferred to a PVDF membrane for 40 min at 10 V. After the transfer, membranes were blocked in 5% non-fat milk for 60 min under agitation, and incubated overnight with the primary antibodies (1:1000 in 1% non-fat milk), either mouse anti-IκBα or rabbit anti-phospho-IκBα (Cell Signaling Technology) at 4 • C. After rinsing three times with wash buffer, membranes were incubated with the corresponding anti-mouse or anti-rabbit horseradish peroxidase-conjugated secondary antibody (Cell Signaling Technology) for 1 h at room temperature, followed by detection using SuperSignal™ West Pico Chemiluminescent Substrate (Thermo Fisher Scientific) on a myECL imager (Thermo Fisher Scientific).
Cell Viability Assay
A549 cells were seeded in clear 96-well plates (Corning) (10 4 cells/well). After overnight incubation, cells were treated with 0.5% DMSO or increasing doses of patulin for 72 h. Cells were then fixed with cold trichloroacetic acid for 30 min, rinsed with tap water and dried overnight at room temperature. Proteins were stained with a 0.4% SRB solution for 30 min and cells were rinsed four times with a 1% acetic acid solution. The bound SRB was then solubilized in a 10 mM Tris base solution and absorbance was read at 510 nm using a Cytation 3 imaging multimode reader. Nonlinear regression (with sigmoidal dose response) was used to calculate the IC 50 values using GraphPad Prism 6.05.
Apoptosis Assay
Apoptosis was measured using the annexin V-fluorescein isothiocyanate (V-FITC)/propidium iodide (PI) assay according to the manufacturer's protocol (Thermo Fisher Scientific). A549 cells were seeded in a 12 wells plate (10 5 cells/well). After overnight incubation at 37 • C in a 5% CO 2 atmosphere, cells were treated with 0.5% DMSO or increasing doses of patulin for 48 h. Detached cells were collected with the medium, while attached cells were trypsinized. All recovered cells were rinsed with PBS and stained with 5 µL annexin V-FITC and 1 µL PI (100 µg/mL) for 15 min in the dark at room temperature. Percentage of all apoptotic cells was determined by counting the percentage of annexin V-FITC positive cells using an attune NxT flow cytometer (Thermo Fisher Scientific).
Cell Migration Assay
The scratch assay was used to determine cell migration as previously described [27]. Briefly, A549 cells were seeded in a 96-well plate (1.5 × 10 4 cells/well) and grown until confluence. A scratch was then drawn in each well using a 200 µL pipette tip and debris were removed by rinsing cells with DPBS. Cells were then treated with increasing concentrations of patulin or with DMSO alone (0.5% max). Pictures were taken quickly after treatment and after 24 h of incubation at 37 • C in a 5% CO 2 atmosphere. Widths of the wounds were measured at both time points using the Gen5 software 3.0 (Biotek) and percentage of space recovery after 24 h was obtained using the formula: % recovery = 100 − (width 24 h/width 0 h × 100). Inhibition of cell migration was quantified by comparing the percentage of space recovery between the cells treated with patulin and the cells treated with DMSO alone.
Quantitative Real Time PCR Analysis of mRNA Expression
A549 cells were seeded in 6-well plates (3.5 × 10 5 cells/well) and allowed to adhere overnight before treatment with increasing doses of patulin. After a 24 h treatment, cells were lysed, and total RNA was isolated using the Aurum total RNA Mini Kit (Bio-Rad, Cressier, Switzerland). The "high capacity total RNA to cDNA" Reverse Transcriptase (Thermo Fisher scientific) and a PCR standard thermal cycler (Thermo Fisher scientific) were used to perform the reverse transcription of 0.25 µg RNA. One microliter cDNA was amplified by quantitative PCR using a SYBR Green PCR Kit (Thermo Fisher scientific) and a Step One Plus Real-Time PCR Thermal Cycler (Thermo Fisher scientific). Custom primers for GAPDH, WIF1, DKK3, and CCND1 were designed on http://bioinfo.ut.ee/primer3-0.4.0/ and obtained from Thermo Fisher scientific [28,29]. Relative gene expression was calculated by normalizing to the housekeeping GAPDH using the ∆∆C T method.
Statistical Analysis
Results are presented as means ± standard errors. Relative p65 nuclear translocation was compared using one-way ANOVA followed by Tukey's multiple comparison test. Percentages of apoptotic cells as well as relative gene expression were compared to the DMSO control using a one-way ANOVA followed by Dunnett's multiple comparison test. A p value < 0.05 was considered significant.
Supplementary Materials: Supplementary materials are available online. Video S1: A549 cell migration for 24 h without patulin. Video S2: A549 cell migration for 24 h with 16.67 µM patulin. Dose-response curves for NF-κB inhibition and A549 cell viability are presented in Figures S1 and S2, respectively. Results of the NMR experiments are presented in Figures S3-S7. | 6,298.6 | 2018-03-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
KinMutRF: a random forest classifier of sequence variants in the human protein kinase superfamily
Background The association between aberrant signal processing by protein kinases and human diseases such as cancer was established long time ago. However, understanding the link between sequence variants in the protein kinase superfamily and the mechanistic complex traits at the molecular level remains challenging: cells tolerate most genomic alterations and only a minor fraction disrupt molecular function sufficiently and drive disease. Results KinMutRF is a novel random-forest method to automatically identify pathogenic variants in human kinases. Twenty six decision trees implemented as a random forest ponder a battery of features that characterize the variants: a) at the gene level, including membership to a Kinbase group and Gene Ontology terms; b) at the PFAM domain level; and c) at the residue level, the types of amino acids involved, changes in biochemical properties, functional annotations from UniProt, Phospho.ELM and FireDB. KinMutRF identifies disease-associated variants satisfactorily (Acc: 0.88, Prec:0.82, Rec:0.75, F-score:0.78, MCC:0.68) when trained and cross-validated with the 3689 human kinase variants from UniProt that have been annotated as neutral or pathogenic. All unclassified variants were excluded from the training set. Furthermore, KinMutRF is discussed with respect to two independent kinase-specific sets of mutations no included in the training and testing, Kin-Driver (643 variants) and Pon-BTK (1495 variants). Moreover, we provide predictions for the 848 protein kinase variants in UniProt that remained unclassified. A public implementation of KinMutRF, including documentation and examples, is available online (http://kinmut2.bioinfo.cnio.es). The source code for local installation is released under a GPL version 3 license, and can be downloaded from https://github.com/Rbbt-Workflows/KinMut2. Conclusions KinMutRF is capable of classifying kinase variation with good performance. Predictions by KinMutRF compare favorably in a benchmark with other state-of-the-art methods (i.e. SIFT, Polyphen-2, MutationAssesor, MutationTaster, LRT, CADD, FATHMM, and VEST). Kinase-specific features rank as the most elucidatory in terms of information gain and are likely the improvement in prediction performance. This advocates for the development of family-specific classifiers able to exploit the discriminatory power of features unique to individual protein families. Electronic supplementary material The online version of this article (doi:10.1186/s12864-016-2723-1) contains supplementary material, which is available to authorized users.
Background
Only a minor fraction of the large number of variants discovered with current high-throughput next generation sequencing (NGS) methodologies are causally implicated in disease onset [1][2][3][4][5][6]. The correct identification of the causative variants remains a challenging effort [7]. For a few examples there is sufficient experimental information associating variants and human maladies, and for an even smaller number of cases the underlying biochemical mechanism is known. However, for the vast majority of the sequence variants identified,~100,000 disease-associated variants, the functional information is missing [8]. The experimental characterization and functional annotation of those novel variants would require humongous resources. Nevertheless, this problem is very amenable to computational approaches [6]. Different methods to predict the probability of a variant being causaly implicated in a disease have been proposed during the last decade. A brief description of the most popular methods, along with relevant URLs and references, are listed in Additional file 1: Table S1. A first group of methods applied deterministic rules to a reduced number of protein features to identify damaging mutations. For example, the widely cited methods SIFT [9] and MutationAssessor [10], MutPred [11], FATHMM [12], Panther [13] and PROVEAN [14] rely on different interpretations of signatures of evolutionary constraint to assess the pathogenicity of variants. A second group of methods (e.g. PMUT [15], SNAP [16], PolyPhen-2 [17], NetDiseaseSNP [18], LS-SNP [19], PhD-SNP [20], MutationTaster [21], VEST [22], SNPs&GO [23], SNPs3D [24], MuD [25], Can-Predict [26], CADD [27], PON-P2 [28] and nsSNPAnalyzer [29]) rely on advanced automatic machine learning approaches that integrate prior knowledge in the form of both sequence-based and structure-based features, under the assumption that pathogenic variants will disrupt normal protein function and structural stability. After a training process where the system is presented a set of previously characterized damaging and neutral variants, new variants can be classified based on the knowledge acquired. Each method implements a different machine learning approach: neural networks [15,16,18], Bayesian methods [17,21], support vector machines [19,20,23,24,27] or random forests [22,25,26,28,29]. Recently, some meta-predictor have been published, for instance, Meta-SNP [30] combines four of the most widely employed computational methods for prioritising missense single nucleotide variations, both Condel [31] and PON-P [32] integrate five classifiers, and PredictSNP [33] incorporates eight. Moreover, the SPRING [34] method is based on six functional effect scores calculated by existing methods (SIFT, Polyphen2, LRT, MutationTaster, GERP and PhyloP) and five association scores derived from a variety of genomic data sources (Gene Ontology, protein protein interactions, protein sequences, protein domain annotations and gene pathway annotations). Concomitantly, each predictor implements a distinctive set of features with a different scope and applicability. Some predictors are generally applicable to any protein, while a recent group of methods include properties that focus on a characteristic subset of variants (eg. Cancer variants predicted by CanPredict [26], Can-DrA [35] and CHASM [36]) or a protein family of interest under the assumption that family-specific features bring discriminative information that justifies the development of specialized methods. An interesting example of the latter are protein kinases [5,[37][38][39][40]. The protein kinase superfamily is very amenable to this approach. Protein kinases play a central role in the cell and consequently they have been studied in detail. As a consequence, a broad number of variants in members of the protein kinase superfamily have been reported in the literature in relation to disease [41], including some types of cancer [42]. In previous publications, we demonstrated the preferential distribution of both germline and somatic variants [43,44] around regions of functional and structural relevance and how this information can be used to develop a computational method [37] to predict the impact of variants on the function of protein kinases. The combination of the predictions from the classifier with annotations extracted from the literature and other sources, facilitates the mechanistical interpretation of the consequences of the variants [45].
Here, we introduce KinMutRF as a random forest-based classifier to predict the pathogenicity of novel variants. Although the core functionality builds up on our previous work [37], in this new implementation we redefine the sequence-derived features, using optimized ways to extract the signals encoded at the protein, domain and residue levels. To demonstrate the improved prediction capabilities of the KinMutRF, approach we benchmark our random forest classifier with other state-of-the-art prediction methods and we discuss the benefits and pitfalls of the development of a family-specific predictor in the light of our findings.
Training datasets
Variants affecting members of the protein kinase superfamily were downloaded from the UniProt/Swiss-Prot variant pages (release 2014_08 of 03-Sept-2014) [46], which compile variants in UniProtKB. The training datasets used in this work have been included with the Supplementary Materials.
Statistics to evaluate prediction performance
Accordig to best practices in the field [46][47][48], perfomances was assesed in terms of Accuracy, Precision, Recall, F-score and Mathew's correlation coefficient (MCC).
Where: TP: True positives, correctly predicted pathogenic variants; FP: False positives, neutral variants predicted as disease prone; TN: True negatives, correctly predicted neutral variants; and FN: False negatives, pathogenic variants predicted as neutral.
Description of the classification features
Variants were characterized with a battery of 25 features at the protein, domain and residue level (see details below). The distribution of variants in the training sets respect the classification features can be found in Fig. 1 (panels from c to l). Classification features were computed as follows:
Membership to kinase groups
We used the taxonomy proposed by Manning [49] implemented in UniProt to classify the protein kinases superfamily. This taxonomy considers three levels of abstraction: subfamilies, families and groups. The level of protein kinase groups are stablished according to sequence similarity, the presence of accessory domains, and by considering the different modes of regulation. For a detailed description of protein kinase groups in KinBase and the abbreviations used in this work, see reference [50] and the supplementary materials. A total of 15 protein kinase groups were considered in this analysis (Fig. 1, panels c and d) and the log odds ratio of their contribution to disease was calculated according to the following formula: kinase group ¼ log 2 disease var: in kinase group þ ξ ð Þ =disease var: neutral var: in kinase group þ ξ ð Þ =neutral var: Where "disease var." and "neutral var." refer to the total number of variants in UniProt classified as disase or neutral, respectively. The terms "disease var. in kinase group" and "neutral var. in kinase group" are the number of variants in a specific kinase group for each category. Note that a pseudo count of ξ = 10 -20 is considered to resolve kinase groups with no neutral variants.
Gene ontology terms (sumGOlor)
Gene Ontology (GO) annotations were used as a proxy for the functional relevance of protein kinases. Starting from the terms that annotate each kinase in UniProt the three subontologies (i. e. molecular function, biological process and cellular compartment) were followed to their roots to consider all parent nodes. The probabilities of observing each of these GO terms together with neutral and disease variants were compared with log-odds ratio (Fig. 1, panel l). Protein kinase are characterised by the sum of the individual contributions of their GO terms. Where "disease var." and "neutral var." refer to the total number of variants in UniProt classified as disase or neutral, respectively. The terms "disease var. annotated with GOi" and "neutral var. annotated with GOi" are the number of variants annoatated with a particular gene ontology term for each category, disease-associated or neutral. Note that a pseudo count of ξ = 10 -20 is considered to resolve cases where no neutral variants where annotated with GO i .
PFAM domains
For each of the 80 different domains defined by UniProt as found in the protein kinase superfamily, a log-odds ratio (details in Fig. 1, panels e and f ) of the frequency with which they harbour disease and neutral variants has been computed according to the following formula: Where "disease var." and "neutral var." refer to the total number of variants in UniProt classified as disase or neutral, respectively. The terms "disease var. in PFAMi" and "neutral var. in PFAMi" are the number of variants in a specific kinase PFAM domain for each category. Note that a pseudo count of ξ = 10 -20 is considered to resolve cases where no neutral variants where annotated with PFAMi.
Amino acid and their biochemical properties
The physic-chemical properties of the amino acids involved in variation often determine the propensity to disease. Our prediction features consider the native amino acid, the newly observed one, and the derived changes in some crucial biochemical properties. These include changes volume, Kyte-Doolittle hydrophobicity, C beta branching and formal charge represented as differences in the nominal values ( Fig. 1, panels g, j and k).
Residue conservation: SIFT
Variants are described with the precomputed SIFT [51] scores downloaded from dbNSFP [52] as a proxy for amino Conservation within a set of related sequences has traditionally been the strongest and most widely implemented features for the classification of variants.
Functional annotations in UniProt, FireDB and Phospho.ELM
The activity of protein kinases is affected by the alteration of functionally relevant residues involved, for example, in catalysis or phosphorilation. In the implementation of KinMutRF, residue annotations in UniProt [53] define functionally relevant amino acids. The residue annoations include the following categories: active sites (act_site), general (binding) or specialised binding (carbohyd, metal, np_bind), disulfid bonding, experimentally modified residues (mod_res), repeat regions (repeat), signal peptides (signal), transmembrane regions (transmem) and zinc fingers (zn_fing), among others broadly defined sites. An additional categories (any_uniprot) account for the residues being annotated with at least one of the previous categories. Similarly, phosphorilation sites from Phospho.ELM [54] and for the prediction of the catalytic and ligand-binding sites according to FireDB [55] are included (Fig. 1, panel h).
Construction of the training datasets
Variants affecting members of the protein kinase superfamily were extracted from the UniProt/Swiss-Prot variant pages [46], the compilation of variation available in UniProtKB. Every variant in this set is given a classification as neutral or pathogenic. In the few cases were the same variant was described by several instances, a single record was considered, selecting a pathogenic instance if ambiguous. Note that no additional reclassification attending to disease types or information from other sources was applied. After the filtering process, 1021 unique variants in 84 protein kinases form the disease dataset and 2668 variants in 450 proteins conform its neutral counterpart. In total, there were variants described and classified for 459 out of the 507 protein kinases described in UniProt, and 75 kinases span both categories of variants. The disease and neutral variant sets were used for training and evaluation of the machine learning classifier. The 848 variants affecting 299 kinases that are listed as unclassified in UniProt were left out from this analysis.
The training of the random forest-based classification kernel of KinMutRF followed a 10-fold cross-validation approach. As suggested by the best practices in the field [16,46], the 459 protein kinases for which classified variation data exists were distributed randomly in 10 different bins. All variants corresponding to an individual protein were assigned to the same bin. We incorporated this rule to avoid overestimating the performance of the classification; the contrary would constitute a circularity type 2 bias [47,56]. This bias might originate from similarities at the protein level (i.e. different variants from the same protein) between the training and evaluation sets. To ensure reproducibility of our results and to facilitate of other methods to be developed in the future, these training bins have been included with the Supplementary Materials (Additional file 2: Supplementary File S1). Then, each bin was iteratively used as evaluation set whereas the remaining nine were used as training instances. Results are accumulated until all bins had been used in the evaluation step. Following current standard practice in the field [47][48][49], we assessed the performance of the clasiffier with five different statistics: accuracy, precision, recall, f-score and Mathew's correlation coefficient (MCC) according to the formulas described in Methods.
Optimization of the prediction method
A machine learning classifier was trained to predict the pathogenicity of variants affecting the human kinome. In particular, a Random Forest kernel was selected after exploration of the many methods implemented in the Weka (v.3.6.11) package. To optimise the parametrization of the random forest classifier, we explored an increasing number of decision trees, ranging from 4 to 30 elements. Our results (Fig. 1, panels a and b) show that all performance statistics reach a steady plateau after an expected initial overhead and suggest that prediciton performance is not afffected by moderate alterations in the size of the forest. Subsequent analyses implement a configuration with 26 trees given the slightly better f-score in average in our preliminary analyses. i Distribution of SIFT scores; j Changes in volume caused by disease-associated and neutral variants; k Changes in hydrophobicity caused by disease-associated and neutral variants; l Accumulated Gene Ontology (GO) log odds-ratio. Note that, where relevant, disease-associated variants were represented in dark red whereas ochre was used for their neutral counterparts
Evaluation of classification performance in the training set
In a previous section we described the construction of the training datasets and how these were used in 10-fold crossvalidation experiment to assess the prediction capabilities of the KinMutRF classifier according to five common statistics. Accuracy accounts for the fraction of variants correctly predicted in function of the total number of variants. Due to the innate inbalance in the constitution of the datasets, with 1021 neutral variants and 2668 disease-associated variants respectively, a naïve classifier predicting every variant as the majority class would achieve a basal 72.32 % accuracy. Consequently, the evaluation of the classification should refer to the prediction of the positive class. In the case of a predictor of pathogenicity, this corresponds to the pathogenic mutations. Precision accounts for the proportion of correctly predicted disease-associated variants with respect to all the variants predicted as positive by the classifier. Recall, often referred as sensitivity, accounts for the proportion of correctly predicted disease-associated variants respect to all positive variants present in the dataset. These two statistics combine into a single one, the f-score, which is convenient for evaluation purposes. Finally, we considered the Mathew's correlation coefficient (MCC) accounts for the performance of both the disease and the neutral prediction. Despite accuracy, this statistic is robust even in cases with dispair class sizes. KinMutRF yields accurate results when both classes are considered (accuracy: 88.45 %, MCC: 0.68). Performance is also satisfactory when only the pathogenic set is considered. KinMutRF achieves a precision of 81.62 % and a recall of 75.22 %, that combined produce an f-score of 78.29 %. The implementation of KinMutRF overcomes our previous KinMut results implementing a support vector machine (SVM) kernel and a different set of prediction features [37,51] (Acc: 83.29 %, Prec: 60.03 %, Recall: 75.17 %, f-score: 66.7 % and MCC: 0.6). The improvement is particularly significant in terms of precision, the ability to predict correctly in the pathogenic variants, while a similar recall is maintained.
Most relevant features for classification
The contribution of individual features for the classification of the classes was assesed using the InfoGainAttri-buteEval module in Weka (v.3.6.11). Features are ranked according to the information gain resulting from the inclusion of individual features. The ranking of the classification features of KinMutRF is summarised in Table 1. One would expect that a family-specific predictor would benefit from the use of the information encoded by features that pertain only to the family of interest. Our ranking of features follows this intuition as the highest information gain (0.491) corresponds to the implementation of Gene Ontology terms that describe the function of each protein kinase and the fequency with which it has been reported in relation with disease and neutral variants (sumGOlor). This observation is coherent with Fig. 1 (panel l), where a clear separation between the accummulated GO log odds ratio of the two classes of variations (disease-associated and neutral). The evolutionary conservation of the residues, measured with SIFT, follows in the ranking. with an information gain of 0.179. In spite of not being a kinase-specific feature, this observation is coherent with the widespread use of SIFT as part of a full body of other classifiers and with the observations in Fig. 1 (panel i). Third and fourth position in this ranking are also occupied by kinase-specific features, namely the membership to a kinase group and the relevance of the kinase domains, produce information gains of 0.120 and 0.112 respectively. It is clear from the observaton of Fig. 1 (panels c, d, e and f) that there is a preferential distribution of disease-associated mutations respect to certain protein kinases and domains. One could argue that the inclusion of features that rely on existing knowledge (e.g. protein and domain specific features) might inherently bias the classification of variants. Albeit partially true from a benchmark perspective, the ability to derive correct predictions from related proteins is the ultimate goal of family-specific methods as the one under consideration here. A different reasoning is that genetic aberrations affecting uncharted regions of the variation-spacei.e. less characterised protein kinasesmight result difficult to characterise as predictions would be hindered by lack of data, or on a worst case scenario by the strong contribution of the few exisiting examples. We expect that the wealth of data coming from current sequencing efforts would quickly bridge this knowledge gap and that all elements of the human kinome would present a comparable amount of information. This is also true for the development of family-specific methods outside the protein kinase superfamily, currently limited by lack of sufficient variation information. The ranking of features is continued by other commonly used features. However, their contribution to the information gain is an order of magnitude smaller. These include recurrently implemented by methods that focus on alteration of protein stability (Additional file 1: Supplementary Table S1) such as the nature of the wildtype (0.044) and mutant (0.037) amino acids or the associated change in hydrophobicity (0.037). Last in the ranking appear features that assess the relevance of the residue in terms of catalysis and phosphorylation propensity. Their position in the ranking might be determined by their limited abundance. Nevertheless, these observations are coherent with previous observations that determined that disease-associated variants, independently of their somatic or germline character, did not allocated necessarily on catalytic sites but on the close proximity of these, under the hypothesis that the structural neighbourhood of the functional sites is also determinant for correct protein function [43,44,57].
Benchmark of the classifier respect to other methods
The capability of KinMutRF to correctly identify pathogenic variants was benchmarked to that of another eight state-ofthe-art approaches ( Table 2). Evaluation was studied according to the five performance measures described in Methods, KinMutRF yields very satisfactory predictions when the other methods are interrogated about the pathogenicity of the 3689 kinase variants for which UniProt provides a characterization. In fact, our methodology achieves the best accuracy (0.88) and precision (0.82) among the evaluated methods, indicative that the prediction of both neutral and pathogenic mutations is sufficiently reliable. This observation is supported by a Matthew's correlation coefficient (MCC) of 0.68, comparable to that achieved by the the best in this category, VEST [22]. Our f-score (0.78) is also comparable with the one achieved by VEST, that compensated the lack precison with increased recall. The difference in prediction performance might be bigger in practical terms, as the results of KinMutRF competitors correspond to an optimistic interpretation that might be boosted by a circularity type 1 bias [56]; the set used in the benchmark might include variants already presented to the classifiers during their own training phase [52]. This effect was taxatively avoided in the evaluation of KinMutRF.
Comparison to Kin-Driver manually curated kinase variants
To understand the prediction performance of KinMutRF beyond the training datasets, we evaluated the agreement with an independent source, Kin-Driver [58]. The resource present two quantitative adjantages: First, it includes variants that have not been presented to KinMutRF during its training phase. Second, variants are manually classified according to their consequence on protein activity into activating and deactivating, which allows further understanding of the strengths and weakenesses of our model. KinMutRF correctly predicted 65 out of the 159 (40.88 %) pathogenic variants included in Kin-Driver that were not included in the set used for training our predictor. The drop in performance might be explained by the nature of the consequence of the variants. The random forest correctly identified 21 out of 34 (61.76 %) loss-of-function variants whereas only 44 out of the 125 (35.20 %) gain-of-function variants were classified correctly. This analysis is coherent with previous observations [54,57] that advocate for the further development of methods to predict the consequences of activating variants as most of the methodologies focus on the disruption of protein function. Predicting the pathogenicity of unclassfied variants, recorded in UniProtKB/Swiss-Prot In a previous section we discussed the preparation of a training set from the variation in UniProtKB/Swiss-Prot variant pages. In this process, we excluded 848 variants in 299 kinases for which a classification of "Disease" and/or "Polymorphism" was not available. We propose that KinMutRF can bridge this gap in knowledge and suggest whether these are most likely pathogenic or neutral. KinMutRF predicted 185 (21.81 %) of these variants as pathogenic (Fig. 2, panel b). The full list of predictions, as well as the prediction features that originated them, can be found with the Supplementary Materials (Additional file 4: Supplementary File S2). One could argue that the prediction features used in this analysis rely excessively on existing knowledge. Should this be the case, predictions for all the variants in a particular kinase group, protein kinase or PFAM domain would follow the same character, being all either neutral or pathogenic. Most of the 53 protein kinases that harbored variants predicted as disease-associated also presented neutral variation (Fig. 2, panel a). The same is also true for kinase groups and PFAM domains (Fig. 2, panels c, d and e). These results support our selection of features, most importantly, the highly informative accumulative log odds ratio of Gene Ontology terms as a proxy for protein function (Fig. 2, panel f). In spite of being distributed satisfactorily, the results from KinMutRF highlight the functional relevance of previously reported domains such as the protein kinase domain or the PI3K/PI4K and certain taxonomical kinase groups characterised by them, namely Tyr, atypical PI3/PI4 kinase, CAMK and TKL.
Conclusions
Here we presented a novel method for prioritization of pathogenic variants in the human protein kinase superfamily. KinMutRF implements a random forest classifier that outperforms our previous implementation (KinMut) and other state-of-the-art methods with a similar purpose. Our choice of features and datasets makes the method especially relevant in the context of kinase variantion and their intrinsic role in cancer biology. The family-specific character of the Kin-MutRF classifier allowed us to introduce features that are unique to the protein kinase family. An analysis of the individual information gain identified these kinasespecific features among the most relevant for a correct classification. Namely, the functional characterization of the kinase according to Gene Ontology terms, the membership to a particular kinase group or the occurrence of the variants at relevant catalytic protein kinase domain arise as important features that are unique to the protein kinase superfamily. This is in full agreement with previous observations and advocates for the urgent development of family-specific classifiers where the abundance of variation data permits.
Availability of supporting data
KinMutRF is publicly implemented as a component of our pipeline for the identification, annotation and interpretation of the consequences of kinase variants, wKinMut-2 [61]. This resource is freely available at http://kinmut2.bioinfo.cnio.es. The source code, documentation and examples for KinMutRF can be downloaded for local installation from https://github.com/ Rbbt-Workflows under a GPV version 3 licence. We are also grateful to the two anonymous reviewers that revised this manuscript for their very relevant comments.
Consent for publication
Not applicable.
Ethics approval and consent to participate
Not applicable.
Availability of data and materials
Training datasets used for 10-fold cross-validation experiment provided as Additional file 2: Supplementary File S1. Predictions for the unclassified variants in Uniprot and the Bruton agammaglobulinemia tyrosine kinase domain are available as Additional file 2: Supplementary Files S1 and Additional file 4: Supplementary Files S2 respectively. The source code of KinMutRF is released under a GPL version 3 license, and can be downloaded from https:// github.com/Rbbt-Workflows/KinMut2 whereas a web implementation of KinMutRF is freely available at http://kinmut2.bioinfo.cnio.es. f Distribution of the accummulated Gene Ontology log odds-ratios (sumGOlor) for neutral and disease-associated variants | 6,210.4 | 2016-06-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
A Biomimic Reconstituted High Density Lipoprotein Nanosystem for Enhanced VEGF Gene Therapy of Myocardial Ischemia
A biomimic reconstituted high density lipoprotein (rHDL) based system, rHDL/Stearic-PEI/VEGF complexes, was fabricated as an advanced nanovector for delivering VEGF plasmid. Here, Stearic-PEI was utilized to effectively condense VEGF plasmid and to incorporate the plasmid into rHDL. The rHDL/Stearic-PEI/VEGF complexes with diameter under 100 nm and neutral surface charge demonstrated enhanced stability under the presence of bovine serum albumin. Moreover, in vitro cytotoxicity and transfection assays on H9C2 cells further revealed their superiority, as they displayed lower cytotoxicity with much higher transfection efficiencywhen compared to PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes. In addition, in vivo investigation on ischemia/reperfusion rat model implied that rHDL/Stearic-PEI/VEGF complexes possessed high transgene capacity and strong therapeutic activity. These findings indicated that rHDL/Stearic-PEI/VEGF complexes could be an ideal gene delivery system for enhanced VEGF gene therapy of myocardial ischemia, which might be a new promising strategy for effective myocardial ischemia treatment.
Introduction
In spite of increasing efforts to improve its management, cardiovascular disease (CVD) continues to be the leading cause of death worldwide, accounting for almost 30% of all deaths [1].Within this group of disorders, myocardial ischemia remains to be one of the major causes, which occurs when blood supply to the myocardium is reduced by stenosis or occlusion of a coronary artery [2].To avoid severe consequences of ischemia in the heart, the idea of growing new blood vessel for increasing blood supply and insuring better heart function has been proposed [3].Evidences have proved that angiogenic factors could promote vessel growth and restore blood supply to the myocardium, thus preserving left ventricular (LV) function and preventing left ventricular remodeling [4,5].Therapies using vascular endothelial growth factor (VEGF) have demonstrated beneficial effects of relieving myocardial ischemia symptoms [6,7].Compared with gene therapy, VEGF administered as a natural recombinant human protein in vivo exhibited some disappointing results [8,9], which were attributed, at least partially, to the short lived effect and high instability of the protein when injected as a bolus [10].The most common technique for myocardial ischemia gene therapy has been the utilization of viral vectors to deliver VEGF plasmid into cardiomyocytes [11].While viral gene therapy offers high transfection efficiencies, its clinical utility is limited by many disadvantages, including host immune responses, oncogenic potential, limitations in viral loading, and difficulty in largescale manufacturing.Owing to these reasons, the development of safer, nonviral methods for gene delivery gains its popularity [12].
Among various nonviral vectors, polyethyleneimine (PEI) has been treated as the gold standard in cationic gene delivery polymers, for it shows high transfection efficiency.However, the inherent cytotoxicity and rapid clearance from the blood induced by its highly positive charges limit its further application in vivo [13].As a result, the fabrication of a PEI derived vector with high transfection efficiency and low cytotoxicity appears to be a challenging issue in successful application of VEGF in myocardial ischemia gene therapy.
High density lipoprotein (HDL) is one of the essential components of the lipid transport system, which has been well established to play a protective role against the development of CVD [14].The major protein component (∼70%) in HDL is apolipoprotein A-I (apoA-I), a highly helical polypeptide (28 kDa), which has been proved to have high affinity to the ABCA1 protein, ABCG-I, and scavenger receptor-BI (SR-BI) on the vascular and myocardial cells [15,16].Endogenous HDL particles are of complete biodegradation and nonimmunogenicity.Reconstituted HDL (rHDL) is the synthetic form of the endogenous human HDL, and it possesses similar physicochemical properties.In the past decades, rHDL has been successfully developed as a gene carrier [17,18], which displayed promising application potential in vivo.
In this study, an rHDL-based system was developed for effective VEGF delivery and gene therapy in myocardial ischemia model.Stearic-PEI was first synthesized and then employed to construct a lipophilic core of rHDL (Lipos/Stearic-PEI).The cationic Stearic-PEI was served to condense the VEGF plasmid to formulate Lipos/Stearic-PEI/VEGF complexes.Finally, functional protein apoA-I was introduced to eventually assemble the delivery system (rHDL/Stearic-PEI/VEGF complexes).As a biomimic delivery system, the rHDL/Stearic-PEI/VEGF complexes were expected to remain stable and reduce the cytotoxicity of PEI in vitro.Moreover, they should be able to exert high transfection efficiency and strong therapeutic effect in vivo due to the combination effect of PEI and rHDL.
Plasmid Preparation.
The VEGF plasmid used in this study was prepared as described previously [19].Briefly, plasmid carrying the VEGF-165 coding region under the control of the cytomegalovirus (CMV) immediate early promoter enhancer region and the chicken beta globulin intron (pCMV-VEGF) was created.The therapeutic gene was inserted into pCMV-lei based on the MluI and BamHI restriction sites.Plasmid preparation was performed by double CsCl gradient purification and purity was confirmed by spectrophotometer (Hitachi Japan) at A260/A280.2.3.Cell Culture.H9C2 cells were purchased from the Cell Bank of Shanghai Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences (Shanghai, China) and cultured in DMEM medium (Gibco, USA) supplemented with 10% FBS (HyClone, USA), 100 U/mL penicillin, and 100 g/mL streptomycin (Gibco, USA) in a humidified atmosphere of 95% air/5% CO 2 incubator at 37 ∘ C. All experiments were performed on cells in the logarithmic phase of growth.
Synthesis of Stearic-PEI.
Stearic-PEI was prepared by coupling the carboxylic groups of stearic acid with the secondary amine groups of PEI through amidation reaction.Stearic acid (17.09mg, 0.06 mmol), EDC (34.34 mg, 0.18 mmol), and NHS (20.61 mg, 0.18 mmol) were charged into a 5 mL tube, dissolved in 2 mL of dimethyl sulfoxide (DMSO), and kept at room temperature for 30 min to activate the carboxyl groups of stearic acid.PEI 10 KDa (PEI 10K, 100 mg, 0.01 mmol) was dissolved in 5 mL of DMSO in a 10 mL flask.The activated stearic acid solution was then dropwise added to the PEI solution with magnetic stirring.The reaction was allowed to continue with the protection of argon at room temperature for 24 h.The mixture was purified by repeated precipitation in diethyl ether.The raw product was further purified by dialysis in deionized water (MWCO 3500 Da, 2 L × 3) to remove the unreacted stearic acid, EDC, and NHS.The resulting solution was lyophilized to obtain Stearic-PEI.The chemical structure of Stearic-PEI was characterized by 1H NMR (Avance 300, Bruker, Germany).
Preparation of Lipos/Stearic-PEI.
Thin-film dispersion method was employed to construct Lipos/Stearic-PEI.Briefly, 120 mg of PC, 12 mg of cholesterol, and 24 mg of CE were dissolved in 2 mL of organic solvent (chloroform : methanol = 1 : 1, v/v), and the solvent of lipid solutions was evaporated with a rotary evaporator at 30 ∘ C until a thin film was formed.The trace solvent residue was finally removed with a stream of nitrogen gas.500 L of Stearic-PEI solution (20 mg/mL), 50 L of sodium cholate solution (30 mg/mL in PBS buffer), and Tris buffer (0.1 M KCl, 10 mM Tris, 1 mM EDTA, pH 8.0) were added to dissolve the thin film.The mixture was vortexed thoroughly for 5 min, followed by ultrasonication using an ultrahomogenizer (JY92II, Ningbo, China) until a clear suspension was obtained.The dispersion was then filtered through a 0.22 m filter and dialyzed to remove the free sodium cholate (MWCO 7500 Da, 2 L × 3).Finally, the prepared Lipos/Stearic-PEI complexes were collected and stored at 4 ∘ C until further use.
Preparation of Lipos/Stearic-PEI/VEGF Complexes.
VEGF plasmid was dissolved in PBS buffer to obtain a final concentration of 0.1 mg/mL and then dropwise added to the above prepared Lipos/Stearic-PEI complexes with vortex to formulate Lipos/Stearic-PEI/VEGF complexes.
2.8.Cytotoxicity.The cytotoxicity of the complexes was measured by MTT assay.The H9C2 cells were seeded in 96well plates at a density of 1 × 10 4 cells/well and incubated for 70-80% cell confluence.PEI 10K/VEGF, Lipos/Stearic-PEI/VEGF, or rHDL/Stearic-PEI/VEGF complexes containing various concentrations of PEI were cocultured with cells for 24 h.After that, MTT solution (20 L, 5 mg/mL) was added and cells were further incubated for 4 h at 37 ∘ C.After the medium was removed, DMSO (150 L) was added to each well.The absorption was measured at 570 nm using a Universal Microplate Reader (EL800, BIO-TEK Instruments Inc., USA).The cell viability was determined as a percentage relative to untreated control cells.
2.9.In Vitro Transfection of VEGF.H9C2 cells were seeded on 24-well plates at a density of 5 × 10 4 cells/well.After 24 h of incubation, the culture media were replaced with serum-free media containing PEI 10K/VEGF, Lipos/Stearic-PEI/VEGF, or rHDL/Stearic-PEI/VEGF complexes.After 4 h, the cells were washed with PBS three times to thoroughly remove the uninternalized complexes and cultured with complete medium for 48 h.The cell culture media were collected after transfection and the amount of VEGF production and secretion was quantified using a VEGF ELISA kit (R&D Systems; Minneapolis, MN) according to the manufacturer's protocol.
Rat Ischemia/Reperfusion
Model.The rat ischemia/reperfusion model was produced according to previous report [20].All animal experiments were conducted in compliance with our institutional and NIH guidelines for care and use of research animals.Male Sprague Dawley rats purchased from Shanghai Laboratory Animal Center (SLAC, China) were firstly anesthetized in an induction chamber delivering 4% isoflurane.Then animals underwent tracheal intubation for ventilation that was maintained during the procedure on an operating table equipped with a warm water pad (2% isoflurane at a tidal volume of 2.0 mL and a respiration rate of 70 breaths per minute).A small incision was made in the 5th intercostal space and the ribs were spread to expose the chest cavity.The left lung was gently collapsed and retracted to visualize the heart using a wet 2 × 2 gauze pad.After widely incising the pericardium, the left anterior descending artery (LAD) coronary artery was exposed and then blocked 2-3 mm distal from its origin with an 7/0 polypropylene suture.The suture was threaded through a 2 cm length of PE-50 tubing (Becton Dickinson; Franklin Lakes, NJ) and then removed 30 min after the ligation.Blanching of the myocardium and visible dyskinesia of the anterior wall of the left ventricle was observed to confirm successful ligation of the LAD.The rats were randomly assigned to one of four experimental groups: (1) ligation only (control, = 6); (2) injection of PEI 10K/VEGF complexes ( = 6); (3) injection of Lipos/Stearic-PEI/VEGF complexes ( = 6); and (4) injection of rHDL/Stearic-PEI/VEGF complexes ( = 6).After assignment, the suture was removed from the myocardium and 100 L PBS or complexes solution containing 25 g of plasmid DNA was injected in 4 sites of the myocardium (3 sites around the ischemic border zone and 1 site in the central infarct zone).The chest was closed in layers and the animal was allowed to recover under a warming lamp and given injections of pain alleviating medication (Buprenorphine) and antibiotics (Cefazolin).Only those rats with an EF equal to or below 55% (as determined by echocardiography) 2 days after surgery were included in the following study.
Echocardiographical Evaluation.
The echocardiographic analysis was performed using a Vevo 770 high-resolution ultrasound system (Visualsonics, Toronto, Canada).Ejection fraction (EF) measurement was performed to evaluate the left ventricular (LV) systolic function of the heart.The calculation followed Simpson's rule and was based on a parasternal long axis view and four parasternal short axis views at different levels of the LV.At the long axis view, the left ventricle length was measured from the aortic annulus to the endocardial border at the apex level in both diastole and systole.At the parasternal short axis view, the endocardium was traced at four different levels in both systole and diastole, to derive the areas required to obtain Simpson's value.All measurements were performed offline using dedicated Vevo 770 quantification software (Vevo 770 version 3.0.0)[21].
2.12.Histological Studies.Three weeks after surgery, animals were sacrificed and hearts were harvested for subsequent histological analysis.Harvested hearts were fixed and sliced in three 4 mm thick segments from apex to base.The hearts were dehydrated and embedded in paraffin.Sections (5 m) were cut from each slice.
In order to quantify the small caliber vessel density and area, anti-caveolin-1 antibody (diluted 1 : 50) was used as marker, and 2 peri-infarct and 2 intrainfarct images per section were analyzed.Secondary antibody was Alexa Fluor 488 goat conjugated anti-mouse IgG (diluted 1 : 100).Images were captured using the Axio Cam MR3 video camera at 20x connected to the Zeiss Axio Imager M1 microscope equipped with epifluorescence optics.Digital images were analyzed by MATLAB software platform (Mathworks Inc., Natick, MA, USA) [22].
The determination of apoptosis was performed by a commercially available ApopTag Fluorescein Apoptosis Detection Kit (Millipore; Billerica, MA) according to the manufacturer's instructions.Apoptosis in the border zone was imaged at 40x magnification using confocal laser scanning microscope (CLSM, Leica TCS SP5, Germany) and quantified by counting positively stained cells from five random high power fields for each animal.
Characterization of Stearic-PEI.
The conjugation of stearic acid with PEI was conducted via amidation reaction.The cationic amido groups of PEI 10K were employed to condense the VEGF plasmid; on the other hand, the highly hydrophobic stearic groups were introduced to incorporate the PEI/DNA complexes with the hydrophobic component of rHDL through hydrophobic interaction.Here, the Stearic-PEI served not only in packaging the VEGF plasmid, but also as a linker to integrate the DNA with rHDL.The chemical structure of Stearic-PEI was confirmed by 1 H NMR in D 2 O.As shown in Figure 1, compared with the spectrum of PEI, the proton peaks of −NHCH 2 CH 2 − from Stearic-PEI appeared at 2.4−3.4 ppm, whereas PEI only appeared at about 2.7 ppm.Moreover, the products had alkyl peaks of stearic acid at 0.82, 1.18, and 1.38 ppm.These results provided decisive evidences that stearic acid was successfully grafted to the PEI chain.
Particle Size, Zeta Potential Measurement, and BSA Challenging
Assay.An ideal gene delivery system requires meticulous design of its particle size and zeta potential, as multiple researches have demonstrated that the cellular uptake of particles is in great relation to their size and surface charge [23,24].Smaller size usually leads to preferable cellular uptake and superior therapeutic effect of particles, for they can be readily recognized and transported by corresponding receptor or channel [25].Herein, the particle size and zeta potential of Lipos/Stearic-PEI/VEGF and rHDL/Stearic-PEI/VEGF complexes were analyzed.As shown in Table 1, both Lipos/Stearic-PEI/VEGF and rHDL/Stearic-PEI/VEGF complexes showed nanoscale size under 100 nm.Comparing the particle size of these two complexes, a minor increase was observed in rHDL/Stearic-PEI/VEGF group, which indicated the successful coating of apoA-I protein.This conclusion was further confirmed by the significant change of zeta potential between them.It is well established that the positively charged particles tend to interact with negatively charged proteins in the blood and extracellular matrix, which could be an obstacle for the effective transfection of VEGF plasmid; however, negatively charged particles are less likely to be taken up by the cells, for the cell membrane is also negatively charged and, also, it is a common rule that negative charged carrier has less stable complex with DNA [26].To dissolve this dilemma, we employed layer-by-layer method to construct rHDL/Stearic-PEI/VEGF.The positively charged Lipos/Stearic-PEI was firstly introduced to condense VEGF plasmid; then the Lipos/Stearic-PEI/VEGF complexes were coated with apoA-I protein to shield the surface charge.
In order to verify the superiority of the apoA-I coating in inhibiting serum protein interaction, the protein adsorption behavior of Lipos/Stearic-PEI/VEGF and rHDL/Stearic-PEI/VEGF complexes was explored with the presence of negatively charged BSA.As revealed in Figure 2, positively charged PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes exhibited serious protein adsorption profile with the increase of BSA concentration as predicted.On the contrary, rHDL/Stearic-PEI/VEGF complexes with neutral surface charge showed negligible change in turbidity, suggesting their safe application and high transfection potential in vivo.
Cytotoxicity.
The safe issue should always be the primary concern of an ideal delivery system.As a result, the cytotoxicity of PEI 10K/VEGF, Lipos/Stearic-PEI/VEGF, and rHDL/Stearic-PEI/VEGF complexes was evaluated against H9C2 cells by MTT assay.Cells were treated with complexes containing various PEI concentrations ranging from 2 to 100 g/mL.As presented in Figure 3, significant inhibitory effects of PEI 10K/VEGF and Lipos/Stearic-PEI complexes were observed.PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes displayed 70.42% and 67.28% mortality on H9C2 cells at 100 g/mL, respectively, which might be related to their considerable positive charges.In contrast, compared with PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes in all groups, the cell viability of rHDL/Stearic-PEI/VEGF complexes was higher.At the PEI concentration of 100 g/mL, especially, the serious cytotoxicity of Lipos/Stearic-PEI/VEGF complexes was dramatically decreased after apoA-I protein shielding, indicating that rHDL as a biomimic delivery vector could indeed lower the * < 0.05 and * * < 0.01.
cytotoxicity of PEI, which was in line with our speculation in Section 3.2.
In Vitro Transfection of VEGF.
HDL is one of the essential components of the lipid transport system with the ability to specifically bind with the SR-BI.As rHDL has been proved to possess similar properties to HDL, which can directly deliver their payload from rHDL to cytoplasm through nonaqueous "channel" of SR-BI [27], we anticipated that the VEGF plasmid in rHDL/Stearic-PEI/VEGF complexes can be internalized more effectively than that in PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes via the receptor mediated transportation.In order to verify our speculation, the transfection efficiency of the VEGF plasmid in H9C2 cells using PEI 10K/VEGF, Lipos/Stearic-PEI/VEGF, and rHDL/Stearic-PEI/VEGF complexes was assessed by ELISA.The untreated cells were employed as a blank control with secretion of VEGF as 1-fold.As shown in Figure 4, the relative secretion of VEGF in PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes was 2.7-and 3.2-fold higher than that in control group, respectively, indicating that both PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes hold certain transfection efficiency.However, it was worth mentioning that rHDL/Stearic-PEI/VEGF complexes with apoA-I protein shielding displayed the most effective capacity among three complexes with a relative secretion level of 6.4fold, which was 2.4-and 2.0-fold higher than PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes, respectively.This finding provided decisive evidence that rHDL could indeed improve the transfection efficiency of PEI and the combination of them was proved to be more powerful than applying PEI alone.Based on the above results, effective transfection of rHDL/Stearic-PEI/VEGF complexes in vivo can be expected.
3.5.
In Vivo Therapeutic Effect.The in vivo therapeutic effect of different formulations was evaluated by measuring the EF rates and performing histological assays on ischemia/ reperfusion rat model.As depicted in Figure 5, all formulations improved the heart function to some extent, among which rHDL/Stearic-PEI/VEGF complexes significantly restored the heart function as indicated by most increased EF rates when compared to other groups.In detail, rHDL/Stearic-PEI/VEGF complexes improved 7.32% of the EF rates, which was 3.47-and 3.04-fold higher than that of PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF complexes, respectively.The same conclusion could also be reached by the histological assays.Animals receiving rHDL/Stearic-PEI/VEGF complexes showed a statistically highly significant increase in the number of capillaries in the infarct and periinfarct area (Figure 6(a)).In addition, average vessel diameter analysis (Figure 6(b)) implied that these vessels were mostly neonatal ones, whose diameter was merely half of that in control group.Similar conclusion could also be drawn from the TUNEL assay.TUNEL-positive cells indicated the presence of apoptotic cardiomyocytes in the border zone of the infarct area (Figure 6(c)).The average number of TUNELpositive cardiomyocytes in five random regions was ∼127 in the control group, ∼110 in the PEI 10K/VEGF group, ∼76 in the Lipos/Stearic-PEI/VEGF complexes treatment group, and ∼43 in the rHDL/Stearic-PEI/VEGF complexes treatment group.The rHDL/Stearic-PEI/VEGF complexes treated hearts demonstrated the most remarkable reduction in apoptotic cardiomyocytes compared to both PEI 10K/VEGF and Lipos/Stearic-PEI/VEGF groups.All the above data indicated that the rHDL/Stearic-PEI/VEGF complexes with the combination of PEI and rHDL were the most potent formulation and the myocardial ischemia was greatly improved with the treatment of rHDL/Stearic-PEI/VEGF complexes.
Conclusion
In this work, a biomimic rHLD based gene delivery system, rHDL/Stearic-PEI/VEGF, was successfully developed with the aim of enhancing the efficacy of VEGF gene therapy of myocardial ischemia.The rHDL/Stearic-PEI/VEGF complexes containing PEI 10K were able to condense VEGF plasmid into nanosized particles with a diameter under 100 nm.
On the other hand, the biomimic structure of rHDL with similar properties to natural HDL demonstrated biocompatible and safe profiles in vitro.Transfection in H9C2 cells and in vivo therapeutic assays on ischemia/reperfusion rat model provided convincing evidences to the high performance of rHDL/Stearic-PEI/VEGF complexes.Taking all these data into account, we can conclude that rHDL/Stearic-PEI/VEGF complexes with biocompatible and potent transgene properties could be selected as a potential nonviral VEGF delivery system and a new promising strategy for effective myocardial ischemia treatment.
2. 6 .
Particle Size and Zeta Potential Measurement.The particle size and zeta potential of Lipos/Stearic-PEI/VEGF and rHDL/Stearic-PEI/VEGF complexes were measured in triplicate by dynamic light scattering (DLS) using a Malvern Zetasizer (Nano ZS-90, Malvern instruments, UK) at 25 ∘ C with 90 ∘ scattering angle.
Table 1 :
Particle size and zeta potential of complexes. | 4,755.8 | 2015-01-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Incremental quadratic stability
The concept of incremental quadratic
stability ($\delta$QS) is very useful in treating systems with persistently acting inputs.
To illustrate, if a time-invariant $\delta$QS system
is subject to a constant input or $T$-periodic input then,
all its trajectories exponentially converge to a unique constant or $T$-periodic
trajectory, respectively.
By considering the relationship of $\delta$QS to the usual concept of
quadratic stability, we obtain a useful necessary and sufficient
condition for $\delta$QS.
A main contribution of the paper is to consider nonlinear/uncertain systems whose
state dependent nonlinear/uncertain terms satisfy an
incremental quadratic constraint which is characterized by a bunch of symmetric matrices
we call incremental multiplier matrices .
We obtain linear matrix inequalities whose feasibility guarantee $\delta$QS of these systems.
Frequency domain characterizations of $\delta$QS are then obtained from these conditions.
By characterizing incremental multiplier matrices for many common classes of nonlinearities, we
demonstrate the usefulness of our results.
1.
Introduction. Boundedness and convergence of solutions of nonlinear systems are important issues in system analysis and control design. The distinctive characteristic of the incremental stability approach to these issues lies in the fact that it considers stability in an incremental fashion, that is, it studies the evolution of the state trajectories of a system with respect to each other, rather than with respect to a given nominal trajectory or equilibrium state. Roughly speaking, we examine whether the state trajectories of a given system converge to one another, and if they do, depending on the nature of the system (for example, autonomous or periodic in time), we can conclude that all trajectories converge to a specific type of bounded trajectory (for example, constant or periodic.) Incremental stability is an intrinsic property of systems, and not a property of particular solutions or equilibrium points. Therefore, its application requires no previous knowledge or assumption on the existence and value of specific solutions. This is particularly important when we have systems with inputs, where the attractive trajectory actually depends on the input.
LUIS D'ALTO AND MARTIN CORLESS
Considerations of incremental stability are made in the theory of contraction analysis for nonlinear systems introduced by Lohmiller and Slotine [22,23]. This theory is derived from consideration of elementary tools from continuum mechanics and differential geometry, and leads to a metric analysis on a generalized jacobian of the system. Fromion, Scorletti and Ferreres [16] also make use of the notion of incremental stability in an input-output stability context, and they present a sufficient condition for incremental stability of a system that they named quadratic incremental stability, and which involves a Lyapunov condition on the jacobian of the system. The approach of analyzing the stability of a nonlinear system by replacing it by an equivalent linear time-varying system is called global linearization; see [20,21]. Angeli [4] considers several notions of incremental stability of autonomous systems in an input-to-state stability context, and he presents a result on the existence of globally attractive solutions for incrementally input-to-state systems subject to constant or periodic inputs. Pavlov et al. [26,27] have brought to our attention some early results in the Russian literature related to this paper: Demidovich [13,14,15] pioneered some incremental stability ideas and results using the concept of a convergent system. Some of the results cited in [26] rely on Yacubovich [30]; this work also contains results on the response of a specific class of systems subject to bounded inputs.
In our case, basic Lyapunov stability theory [18,19] serves as the starting point for our results. The usual basic Lyapunov stability theory is based on the construction of radially unbounded functions of the state with a global minimum at the nominal trajectory whose stability we want to analyze. Consideration of quadratic Lyapunov functions leads to the stronger concept of quadratic stability. This concept has turned out to be very useful in system analysis and control design for large classes of uncertain/nonlinear systems; see, for example, [17,10,6,7,28,9,8,2,29] and the references therein. The quadratic stability framework has the advantage that analysis and control problems can be reduced to convex optimization problems involving the solution of linear matrix inequalities. This approach has gained considerable attention in the last two decades [8]. The theoretical framework presented here is built upon consideration of quadratic Lyapunov functions of the increment between any two states of a system, and leads to a concept of stability that we call incremental quadratic stability. This idea is also used in [16,26,27].
In our approach, the state dependent nonlinearities of an uncertain/nonlinear system are described by a quadratic inequality that is characterized by a set of symmetric matrices we call incremental multiplier matrices. The concept of a multiplier matrix and the more general approach of integral quadratic constraints have been recently used in the analysis of nonlinear/time-varying and uncertain systems; see [5,25,1]. In our case, the use of incremental multiplier matrices provides a unifying description for various types of common nonlinearities that is amenable to the analysis of incremental quadratic stability of systems through linear matrix inequalities. In this way, analysis problems are systematically reduced to the solution of linear matrix inequalities in the Lyapunov matrix and the incremental multiplier matrix.
Recent results on necessary and sufficient multiplier conditions for quadratic stability of systems provide frequency domain characterizations for quadratic stability; see [2]. These conditions consist of frequency domain inequalities involving an associated linear system and a multiplier matrix. These results can be readily extended to obtain similar conditions guaranteeing δQS of a system, this time involving incremental multiplier matrices. This paper is organized as follows. In Section 2 we consider incremental quadratic stability of systems with inputs; there we see that if a time-invariant system is subject to a constant input or a periodic input of period T , then all trajectories of the system converge to a unique trajectory which is respectively constant or periodic with period T . Section 3 considers the relationship of incremental quadratic stability to quadratic stability. In particular, we show that, when a system has an associated derivative system then, quadratic stability of the derivative system is both necessary and sufficient for incremental quadratic stability of the original system. In Section 4 we introduce a description of state dependent nonlinearities by means of incremental multiplier matrices. Linear matrix inequalities (LMIs) in the Lyapunov matrix and an incremental multiplier matrix for the system are presented. A main result of the paper is that feasibility of one of these inequalities guarantees δQS. This section also looks at differentiable nonlinearities, and the relationship between incremental multiplier matrices and the derivatives of the nonlinearities is explored. In Section 5 we use the LMIs to obtain frequency domain characterizations of δQS. Section 6 provides simple characterizations of incremental multiplier matrices for many common nonlinear/uncertain terms. Section 7 deals with systems with multiple nonlinear terms and their corresponding incremental multiplier matrices. In Section 8 we present an alternative approach to the analysis of incremental quadratic stability of systems with nonlinearities whose derivatives lie in a convex set. Section 9 contains some conclusions. For many examples and simulations illustrating the results of this paper, the reader is referred to [11].
2. Incremental quadratic stability. In this section, we introduce the concept of incremental quadratic stability (δQS) for a system with inputs and present some important properties of systems which are δQS.
In many treatments of the stability of nonlinear systems, one considers stability about a fixed equilibrium state or solution. However, in many applications one has a system with an input and wishes that the system has stable behavior for all inputs in a specific class. The input may be of any nature, such as a control input or an unknown disturbance input. In particular, one may require that the system has the following behavior.
• If the input is bounded then, the system state is bounded.
• If the input is constant then, the system has a unique equilibrium state and all solutions converge to this equilibrium state. • If the input is periodic with period T , then the system has a unique periodic solution of the same period and all solutions converge to this solution. As we will shortly demonstrate, it is in this context that the concept of δQS proves very useful. So, consider a system with input described bẏ where t ≥ 0 is the time variable, x(t) ∈ R n is the state, and w(t) ∈ W is the input where the set W of input values is some subset of R m . Throughout this paper, we assume that F is continuous. (1) is incrementally quadratically stable (δQS) with decay rate α > 0 and Lyapunov matrix P = P T > 0 if for all t ≥ 0, x,x ∈ R n and w ∈ W .
Example 2.1. The specification of the set W of input values is important in the above definition. To see this, consider the scalar systeṁ If W = [−w,w] andw < 1 then, we have δQS with P = 1 and α = 1−w. Ifw ≥ 1, we do not have δQS.
Remark 1. Using standard Lyapunov-type arguments, one can readily show that if system (1) is δQS with rate α and Lyapunov matrix P , it has the following property. If x(·) andx(·) are any two solutions of system (1) then, for all t ≥ t 0 for which both x(t) andx(t) exist where κ(P ) = λ max (P )/λ min (P ) and λ max (P ) and λ min (P ) denote the largest and smallest eigenvalues of P . Consider now a system described bẏ Then it should be clear that, regardless of g, if the "unforced system" systeṁ x = f (t, x) is δQS, then the "forced system" (2) is δQS for any set W of input values. Considering a time-invariant δQS system described bẏ we can make the following conclusions using the results in [4,16,26,27,12].
• If w(·) is bounded, then all solutions of (3) are bounded.
• If w(·) is constant, that is, w(t) ≡ w e then, system (3) has a unique equilibrium state and is GUES about x e . In general, the equilibrium state depends on the input w e . • If w(·) asymptotically approaches a constant input w e , that is, lim t→∞ w(t) = w e then, every solution x(·) of the system converges to the equilibrium state of (3) corresponding to w(t) ≡ w e . • If w(·) is periodic with period T then, system (3) has a unique periodic solution; this solution has period T and all other solutions exponentially converge to this solution. In general, the periodic solution depends on the input. • Suppose w(·) asymptotically approaches a periodic solutionw(·), that is, lim t→∞ w(t)−w(t) = 0. Then, every solution x(·) of the system converges to the periodic solution of (3) corresponding to w(t) ≡w(t). Note that in order to apply δQS to demonstrate the above properties, one does not have to know the value of the equilibrium state or the periodic solution. These specific solutions depend on the input; δQS guarantees that these solutions exist. In many applications, one does not need to know the values of the specific solutions; just knowing that they exist is sufficient.
Remark 2 (Linear time-invariant systems with inputs). Consider a system with input w described byẋ = Ax + g(t, w) , where A is a constant matrix. One can readily show that this system is δQS with decay rate α and Lyapunov matrix P if and only if P A + A T P + 2αP ≤ 0 . Satisfaction of the above inequality for some α > 0 is equivalent to the requirement that P satisfy the Lyapunov equation for some matrix Q = Q T > 0. Using Lyapunov theory for linear systems, we now obtain that δQS is equivalent to the requirement that A is Hurwitz 1 . Moreover, a Lyapunov matrix P can be obtained by first choosing any matrix Q = Q T > 0 and letting P be the unique solution to the Lyapunov equation. In this case, α = λ min (P −1 Q)/2.
Application to observers. Consider a system (we will call it the plant) with state x, input w and measured output y described bẏ Suppose we wish to construct a state observer which, based on input w and output y, asymptotically produces an estimatex of the plant state x. A general description of an observer is given byẋ This can be regarded as a system with input (w, y). Requiring that every state trajectory of the plant is also a possible motion of the observer is equivalent to the requirement thatF for all t, w andx. If this condition is satisfied and the observer is δQS with decay rate α and Lyapunov matrix P then we obtain that, for all plant and observer initial conditions, for all t ≥ t 0 . Thus the state estimate of the observer always converges exponentially to the plant state. This is basically the approach taken in [3].
3. Incremental quadratic stability and quadratic stability of the derivative system. In this section, we show that if a system has a derivative system (defined below) then quadratic stability (QS) (defined below) of the derivative system is equivalent to δQS of the original system. This result is very useful. It permits one to apply the considerable body of quadratic stability results in the literature to the derivative system in order to guarantee incremental quadratic stability of the original system; see, for example, [6,7,9,8,2] and the references therein.
Definition 3.1. Consider a system described by (1) for which ∂F ∂x (t, x, w) exists for all t, x, w. The corresponding derivative system is given bẏ where ϕ and w are any continuous functions mapping [0, ∞) into R n and W , respectively.
Definition 3.2. The derivative system (7) is quadratically stable (QS) with decay rate α > 0 and Lyapunov matrix P = P T > 0 if for all t ≥ 0, w ∈ W and ϕ ∈ R n , for all η ∈ R n , that is, Remark 3. The results in [16,26] are based on condition (9). Reference [16] also requires a condition on ∂F ∂w . Lemma 3.3. Consider a system described by (1) with F differentiable w.r.t. its second argument. This system is incrementally quadratically stable with rate of convergence α and Lyapunov matrix P if and only if the corresponding derivative system (7) is quadratically stable with rate of convergence α and Lyapunov matrix P .
Proof. Consider a system described by (1) and suppose that its derivative system (7) is quadratically stable about the zero state with rate of convergence α and Lyapunov matrix P . Consider any time t ≥ 0, any w ∈ W and any two states x,x ∈ R n . For any vector z ∈ R n , it follows from the mean value theorem that there isφ ∈ R n such that Considering z = P (x−x) and using the fact that the derivative system is quadratically stable with rate of convergence α and Lyapunov matrix P , we obtain that Since the above holds for all t ≥ 0, w ∈ W and x,x ∈ R n , it follows that system (1) is incrementally quadratically stable with rate of convergence α and Lyapunov matrix P .
To prove the converse, suppose now that system (1) is incrementally quadratically stable with rate of convergence α and Lyapunov matrix P , that is, for all t ≥ 0, w ∈ W and x,x ∈ R n . Now pick anyx, η ∈ R n and let Then, the previous inequality yields Since the above holds for all t ≥ 0, w ∈ W andx, η ∈ R n , it follows that the corresponding derivative system (7) is quadratically stable about the zero state with rate of convergence α and Lyapunov matrix P . Scalar systems. Consider any scalar system of the forṁ which, for some α > 0, satisfies for all t ≥ 0, w ∈ W and x ∈ R. Considering P = 1, we readily obtain that the derivative system is quadratically stable with rate of convergence α. It now follows from the above lemma that system (10) is incrementally quadratically stable with rate of convergence α.
4.
Incremental multiplier matrices and an LMI characterization of δQS.
Systems under consideration.
In this section, we consider systems described by (1) whose state dependent nonlinearities can be characterized via symmetric matrices which we call incremental multiplier matrices. Using these matrices and any structural information on the manner in which the nonlinearities enter the system description, we obtain a sufficient condition for incremental quadratic stability. This condition is in the form of a linear matrix inequality (LMI).
To take into account the manner in which state dependent nonlinearities enter a system description, consider a system described bẏ where t ≥ 0 is the time variable, x(t) ∈ R n is the system state and w(t) ∈ W ⊂ R m is the system input. All the elements in the system involving nonlinear dependence on the state x are lumped into the term p(t, x, w) ∈ R mp and all the elements not depending on x are lumped into the term g(t, w) ∈ R n . The matrices A and B are constant and of appropriate dimensions; A describes a nominal linear unforced system while B describes the manner in which the state dependent nonlinearities enter the description. To take into account any information available on the dependency of p on x, we consider p = φ(t, q, w), where q(t, x, w) ∈ R mq is given by q(t, x, w) = Cx + Dp(t, x, w) with C and D constant matrices of appropriate dimensions. Thus, a system under consideration is described bẏ When D = 0, a system under consideration can be described bẏ To see the usefulness of allowing D = 0, consider a system described by (11a) in which p =φ(t, Eẋ+C 0 x, w) .
All the state dependent nonlinearities in the system are now described by the function φ; we characterize this function by its incremental multiplier matrices which are defined as follows.
for all t ≥ 0, w ∈ W and q,q ∈ R mq .
we obtain that p = φ(t, q, w) wherê q = Cx + Dp, and φ has the same incremental multiplier matrices asφ.
Remark 5. When D = 0, the term p is implicitly defined by In this case, we assume that there exists a function ψ such that for all t, z, w, the vector p = ψ(t, z, w) (14) solves the above implicit identity. Note that M is an incremental multiplier matrix for φ if and only if the matrix is an incremental multiplier matrix for ψ. This follows from the relationship: Of course, ψ = φ when D is zero.
The following two general scalar examples illustrate the characterization of nonlinearities via incremental multiplier matrices.
Example 4.1. Consider any differentiable scalar valued function φ of a scalar variable. Suppose that φ ′ , the derivative of φ, is bounded and choose σ 1 and σ 2 so that σ 1 ≤ φ ′ (q) ≤ σ 2 for all q ∈ R. An application of the mean value theorem shows that φ satisfies for all q,q ∈ R. We call condition (17) a pointwise sector bounded constraint because, for eachq ∈ R, the graph of the function φ lies inside the sector defined by the lines Condition (17) is equivalent to Hence, any matrix is an incremental multiplier matrix for φ.
Example 4.2. Consider any monotone scalar valued function φ of a scalar variable, that is, This is equivalent to for all q,q ∈ R. Notice that satisfaction of (18) is equivalent to satisfaction of This clearly shows that any matrix is a multiplier matrix for φ.
4.3.
A sufficient condition for δQS. In this section we present a matrix inequality which, if satisfied by a system described in the previous section, guarantees incremental quadratic stability.
Theorem 4.2. Consider a system described by (11). Suppose there exists a matrix P = P T > 0, a scalar α > 0 and an incremental multiplier matrix M for φ such that Then the system is incrementally quadratically stable with decay rate α and Lyapunov matrix P .
Proof. We first note that incremental quadratic stability of a system described by (11a) with rate of convergence α and Lyapunov matrix P means that for all t ≥ 0, x,x ∈ R n and w ∈ W where p := p(t, x, w) andp := p(t,x, w). The above inequality is equivalent to We now show that when p is given by (11c) and (11b) and M is any multiplier matrix for φ then, for all t ≥ 0, x,x ∈ R n and w ∈ W . To see this, notice that p = φ(t, q, w) and p = φ(t,q, w) where q = Cx + Dp andq = Cx + Dp. Hence, Condition (20) now follows from the definition of the multiplier matrix M .
To prove the lemma, suppose that matrix inequality (19) holds and consider any time t ≥ 0, any two states x,x ∈ R n and any input w ∈ W . Pre-and post-multiplication of the matrix inequality by (x −x) T (p −p) T and its transpose respectively, yields Since M is a multiplier matrix for φ, we have L 1 (t, x,x, w) ≥ 0; hence L 0 (t, x,x, w) ≤ 0. Since this holds for all t ≥ 0, x,x ∈ R n and all w ∈ W , it follows that the system under consideration is incrementally quadratically stable with decay rate α and Lyapunov matrix P .
Remark 6. Note that condition (19) is a linear matrix inequality in P and M . Notice also that maximization of α subject to (19) is a generalized eigenvalue problem.
Remark 7. Recalling the definition of N in (15), the matrix inequality (19) can be expressed as If we partition M and N as where M 22 , N 22 ∈ R mp×mp , then the inequality (19) can be expressed as and Note that in order for inequality (23) to hold we must have Thus, we need only consider incremental multiplier matrices for which N 22 ≤ 0. In particular, when D = 0, we must have M 22 ≤ 0.
Remark 8. In many cases it is difficult to completely characterize the complete set of incremental multiplier matrices associated with a given function φ. Fortunately, it is not necessary to have this characterization for our purposes. We need only consider sets of matrices which are sufficiently rich. It should be clear that if M is a sufficiently rich set of incremental multiplier matrices for φ then, satifisfaction of the matrix inequality (19) with any incremental multiplier matrix for φ implies satisfaction with some M in M.
4.4.
A strict matrix inequality. If the following strict inequality holds then, it can readily be shown that the non-strict inequality (19) holds for α > 0 sufficiently small: This provides another sufficient condition for δQS. However, there are situations in which the non-strict inequality (19) holds for some α > 0 but, the strict inequality (26) does not hold. In fact, when the strict inequality holds, the dependency of p on z = Cx must be globally Lipschitz, that is, the function ψ(t, ·, w) (recall (14)) must be globally Lipschitz. To see this, consider any z,z and recall from Remark 5 that the incremental quadratic constraint (12) is equivalent to δz T N 11 δz + δz T N 12 δp + δp T N 21 δz + δp T N 22 δp ≥ 0 for all t and w where δz = z −z and δp = ψ(t, z, w)−ψ(t,z, w) and N ij are defined in (24). Recalling the discussion in Remark 7, one can see that the strict matrix inequality (26) implies that N 22 < 0. The last inequality above now implies that c 1 δz 2 + 2c 2 δz · δp − c 3 p 2 ≥ 0 where c 1 = N 11 , c 2 = N 12 and −c 3 < 0 is the maximum eigenvalue of N 22 . Rearranging the last inequality results in for all t, w and z,z, that is, ψ(t, ·, w) is globally Lipschitz for all t and w.
On the other hand, suppose ψ satisfies the global Lipschitz condition (27). We will show that satisfaction of the non-strict inequality with some incremental multiplier matrix implies satisfaction of the strict inequality with some incremental multiplier matrix.
To see this, suppose that for some α > 0, the non-strict matrix inequality (19) is satisfied with M =M whereM is an incremental multiplier matrix for φ. It follows from (27) that the matrix γ 2 I 0 0 −I is an incremental multiplier for ψ; hence, recalling Remark 5, the matrix is an incremental multiplier matrix for φ. It now follows that, for any ǫ ≥ 0, the matrix, is also an incremental multiplier matrix for φ. Let To complete the proof, it suffices to show that Q(0, ǫ) < 0 for some ǫ > 0; this is because inequality (26), with M replaced by the new multiplier matrix M ǫ , is equivalent to Q(0, ǫ) < 0. To achieve the above goal, note that where S(α, ǫ) = 2αP − ǫγ 2 C T C 0 0 ǫI and the non-strict matrix inequality (19) can be written as Q(α, 0) ≤ 0. Considering ǫ > 0, we obtain that S(α, ǫ) > 0 if and only if 2αP − ǫγ 2 C T C > 0 .
Remark 9.
Recalling the definition of N in (15), the strict matrix inequality (26) can be expressed as If we partition N as in (22) where N 22 ∈ R mp×mp , then the above inequality can be expressed as Note that in order for inequality (30) to hold we must have Thus, in utilizing the strict inequality (26), we need only consider incremental multiplier matrices for which N 22 < 0. In particular, when D = 0, we must have M 22 < 0.
Remark 10. Using a Schur complement result, inequality (30) is equivalent to This is a Riccati-type matrix inequality in P .
Differentiable nonlinearities.
Here we consider nonlinearities φ which are continuously differentiable with respect to their second argument, that is, exists for all t, q, w and is continuous with respect to q. We will characterize incremental multiplier matrices M for φ with the condition that for all t, q, w.
Consider an uncertain/nonlinear element described by (11b) and (11c), that is, Recall that, in order to satisfy our condition for δQS, we need only consider incremental multiplier matrices M which satisfy (25). Consider first the case in which D = 0. We claim that a symmetric matrix M which satisfies (25) is an incremental multiplier matrix for φ if and only if (33) holds for all t, q, w. This follows from Lemmas 4.4 and 4.5 which are given below. Consider now the case in which D = 0. Suppose p is well-defined, that is, for each t, z, w, there is a p = ψ(t, z, w) which satisfies p = φ(t, z + Dp, w).
If we assume that, for each t, w, the function ψ(t, ·, w) is continuously differentiable and the mapping z → z +Dψ(t, z, w) is onto then, a symmetric matrix M which satisfies (25) is an incremental multiplier matrix for φ if and only if (33) holds for all t, q, w. This also follows from the following two lemmas.
for all q,q ∈ R q . Then, Proof. A proof is contained in the Appendix.
Lemma 4.5. Suppose h : R mq → R mp is a continuously differentiable function, M is a symmetric matrix and inequality (35) holds for all q ∈ R mq . In addition, suppose there is a matrix D such that, and for each z ∈ R mq , the equation has a solution p = ψ(z) where ψ is continuously differentiable and the mapping z → z +Dψ(z) is onto. Then, inequality (34) holds for all q,q ∈ R mq .
Proof. A proof is contained in the Appendix.
The following result is used in the proof of Lemma 4.5; it is also useful later in the paper. Lemma 4.6. Suppose h : R mq → R mp is a continuously differentiable function with derivative Dh and let Ω be any closed convex set of real matrices such that Dh(q) is in Ω for all q ∈ R mq . Then for every q,q ∈ R mq there is a matrix Θ in Ω such that Proof. A proof is contained in the Appendix.
5.
Frequency domain conditions for δQS. Using results from [2], one can readily obtain frequency domain conditions which guarantee δQS of system (11). These conditions involve the transfer function G defined by Using the strict inequality (26) and Lemma 9 of [2], one can obtain the following result.
Lemma 5.1. A system described by (11) is incrementally quadratically stable if there is an incremental multiplier matrix M for φ which satisfies the following conditions. (a) There is a matrix K ∈ R mp×mq for which A+BKC is Hurwitz and (b) The strict frequency domain inequality, holds for 0 ≤ ω ≤ ∞ whenever ω is not an eigenvalue of A.
Note that the frequency domain inequality (39) can also be written as Using the non-strict inequality (19) and Lemma 6 of [2], one can also obtain a sufficient condition involving a non-strict frequency domain inequality Lemma 5.2. Consider a system described by (11) with (A, B) controllable and (C, A) observable. Suppose that, for some α > 0, there is an incremental multiplier matrix M for φ which satisfies the following conditions.
(a) There is a matrix K ∈ R mp×mq for which A + BKC + αI is Hurwitz, (38) holds, and the matrix M I +DK K has maximum column rank. (b) The non-strict frequency domain inequality holds for 0 ≤ ω ≤ ∞ whenever ω −α is not an eigenvalue of A. Then system (11) is incrementally quadratically stable with decay rate α.
Remark 11. Yacubovich [30] considers systems described by (11) in which p and q are scalars, D = 0, φ(t, q, w) = ϕ(q) and for some real constant µ 0 ≤ ∞, the function ϕ satisfies the incremental sector condition for all q =q. Incremental multiplier matrices for this nonlinearity are given by where κ > 0 and K = 0 satisfies condition (38). Yacubovich shows that if A is Hurwitz and a certain frequency domain condition is satisfied then, the systems under his consideration have the properties claimed here for incrementally quadratically stable systems. His frequency domain condition is basically condition (41) with one of the above multiplier matrices, that is, 6. Incremental multiplier matrices for many common nonlinearities. In this section, we provide incremental multiplier matrices for many commonly encountered nonlinearities.
6.1. Incremental norm bounded nonlinearities. Consider a function φ which, for some symmetric positive definite matrices U and V , satisfies for all t ≥ 0, w ∈ W and q,q ∈ R mq . Incremental multiplier matrices for φ are given by Note that incremental norm bounded nonlinearities include nonlinearities which are globally Lipschitz with respect to q, that is, for some scalar γ ≥ 0, they satisfy φ(t, q, w) − φ(t,q, w) ≤ γ q −q for all t ≥ 0, w ∈ W and q,q ∈ R mq . In this case, condition (42) is satisfied with U = I and V = γ 2 I. So, incremental multiplier matrices for φ are given by 6.2. Incremental positive real nonlinearities. Consider a function φ which satisfies [φ(t, q, w) − φ(t,q, w)] T U (q −q) ≥ 0 for all t ≥ 0, w ∈ W and q,q ∈ R mq , where U ∈ R mp×mq . Incremental multiplier matrices for φ are given by Note that incremental positive real nonlinearities include scalar nonlinearities which satisfy (φ(t, q, w) − φ(t,q, w))(q −q) ≥ 0 for all t ≥ 0, w ∈ W and q,q ∈ R, or equivalently, φ(t, q, w) − φ(t,q, w) q −q ≥ 0 for all q,q ∈ R with q =q. Notice that this condition is equivalent to φ being monotonic with respect to its second argument q. Incremental multiplier matrices for φ are given by 6.3. Incremental sector bounded nonlinearities. Consider a function φ which satisfies for all t ≥ 0, w ∈ W and q,q ∈ R mq , where U = U T ∈ R mp×mp , and K 1 , K 2 ∈ R mp×mq are fixed matrices. Incremental multiplier matrices for φ are given by Incremental sector bounded nonlinearities include scalar nonlinearities which satisfy for all t ≥ 0, q =q ∈ R and w ∈ W where σ 1 , σ 2 ∈ R are constants. This is because the above inequalities are equivalent to [φ(t, q, w)−φ(t,q, w) − σ 1 (q−q)] · [σ 2 (q−q) − φ(t, q, w)+φ(t,q, w)] ≥ 0 .
Hence, incremental multiplier matrices for φ are given by 6.4. Nonlinearities with matrix characterizations. In this section, we consider nonlinearities which are characterized by some known set Ω of matrices. Specifically, we assume that there is a known set Ω of real matrices Θ with the following property. For each t ≥ 0, w ∈ W , and q,q ∈ R mq , there is a matrix Θ in Ω such that φ(t, q, w) − φ(t,q, w) = Θ(q −q) . (43) For example, suppose that φ is continuously differentiable with respect to its second argument, and for each t ≥ 0, w ∈ W , and q ∈ R mq the derivative ∂φ ∂q (t, q, w) lies in some known closed convex set Ω, that is, ∂φ ∂q (t, q, w) ∈ Ω .
Then, it follows from Lemma 4.6 that for each t ≥ 0, w ∈ W , and q,q ∈ R mq , there exists a matrix Θ in Ω such that (43) holds. As a non-differentiable example, consider the absolute value function, φ(t, q, w) = |q|. Here, Θ is the interval [−1, 1]. and satisfies (52) with M T 22 = M 22 ≤ 0. If the matrix [Θ 1 Θ 2 · · · Θ ν ] has maximum row rank m p then, the second condition in (52) is simply equivalent to M 22 = 0. The case in which D is nonzero. Recalling the corresponding discussion of the polytopic case, we obtain that a symmetric matrix M is an incremental multiplier matrix for φ if there is a matrix J ∈ R mp×mp such that (49) is satisfied for all Θ ∈ Ω. Clearly, condition (49) is satisfied if and Clearly, inequality (54) holds for all Θ ∈ Ω = Cone{Θ 1 , Using the same reasoning as in the case D = 0, it follows that satisfaction of (55) by all Θ in Ω is equivalent to M 11 ≥ 0, and (58) As before, we can take M 11 = 0 without loss of generality. It now follows that a symmetric matrix M is a multiplier matrix for φ if it has the form given in (53) and there is a matrix J ∈ R mp×mp such that (56)-(58) hold with M 22 = M T 22 . If the matrix [Θ 1 Θ 2 · · · Θ ν ] has maximum row rank m p then, the second condition in (58) is simply equivalent to M 22 + J + J T = 0. 6.4.3. Polytopic/Conic case. One can readily generalize the results of the previous two sections to consider situations in which Θ has a mixed polytopic/conic description. To see this, consider a function φ which for each t, w, q andq satisfies (43) with some matrix Θ in a known fixed set Ω. Suppose that there are fixed matrices Θ 1 , . . . , Θ ν with the property that for every Θ in Ω there are scalars λ 1 , λ 2 , . . . , λ ν so that Θ = ν k=1 λ k Θ k where ν1 k=1 λ k = 1 and λ k ≥ 0 for k = 1, 2, . . . , ν .
This is a combined polytopic/conic description. If ν 1 = ν, it reduces to a polytopic description; if ν 1 = 0, it reduces to a conic description. Considering M 22 ≤ 0, one can readily show that any symmetric matrix M satisfying is an incremental multiplier matrix for φ.
7.
Systems with multiple nonlinear terms.
7.1. General case. Consider a system described by (11a) whose nonlinear term p consists of multiple terms given by t, x, w) . . .
where each nonlinear term p j ∈ R mp j can be described by and C j , D j are constant matrices of appropriate dimensions. Letting the nonlinear term p can be described by p = φ(t, q, w) where q(t, x, w) = Cx + Dp(t, x, w) and Suppose that, for j = 1, 2, · · · , µ, is an incremental multiplier matrix for φ j where M j,22 ∈ R mp j ×mp j . In this case, the function φ has an incremental multiplier matrix M given by In this way, incremental multiplier matrices M for φ can be obtained from incremental multiplier matrices M 1 , . . . , M µ corresponding to the individual nonlinear terms φ 1 , . . . , φ µ .
7.2.
A common special case. In this subsection, we consider a important special case of multiple uncertain/nonlinear terms and provide a richer set of incremental multiplier matrices than would be obtained using the general approach of the previous section. Suppose the functions φ 1 , . . . , φ µ are scalar-valued and, for j = 1, . . . , µ and q j =q j , they satisfy where q 1 , . . . , q µ are scalars. In this case, One could use the results of the previous subsection to obtain an incremental multiplier set based on incremental multiplier sets for φ 1 , . . . , φ µ . However, one can obtain a richer set of incremental multiplier matrices by proceeding as follows.
Using the notation of the preceding section, we obtain a single uncertain/nonlinear term described by Thus Θ ∈ Co{Θ 1 , . . . , Θ ν } where the ν = 2 µ matrices Θ 1 , . . . , Θ ν correspond to the extreme values σ 1j , σ 2j of the parameters θ j . We now have a polytopic description of φ and we can obtain a set of incremental multiplier matrices as described in Section 6.4.1.
In a similar fashion, one can use the results of Section 6.4.2 to treat the case in which the functions φ 1 , . . . , φ µ are nondecreasing scalar-valued functions, that is, they satisfy (φ j (q j )−φ j (q j ))(q j −q j ) ≥ 0 .
In this case, we have Finally, one can also use the results of section 6.4.3 to consider a bunch of scalar uncertainties satisfying (60) or (62).
8. An alternative sufficient condition for nonlinearities with matrix characterizations. We present here another sufficient condition for incremental quadratic stability of systems whose nonlinearities are characterized by a set of matrices. Specifically, we consider systems described by (11) with D = 0, that is, and we assume that there is a set Ω of matrices such that for each t ≥ 0, w ∈ W , and q,q ∈ R mq , there is a matrix in Θ in Ω such that φ(t, q, w) − φ(t,q, w) = Θ(q −q) .
The following lemma provides a sufficient condition for incremental quadratic stability of such systems.
Lemma 8.1. Consider a system described by (63) and suppose there is a set Ω of matrices such that for each t ≥ 0, w ∈ W , and q,q ∈ R mq , there is a matrix in Θ in Ω such that (64) holds. Suppose also there is a matrix P = P T > 0 and a scalar α > 0 such that P (A + BΘC) + (A + BΘC) T P + 2αP ≤ 0 (65) for all Θ ∈ Ω. Then system (63) is incrementally quadratically stable with decay rate α and Lyapunov matrix P .
Since (66) holds for all Θ in Ω, we now obtain that Since the above holds for all t ≥ 0, w ∈ W and x,x ∈ R n , we can conclude that system (63) is incrementally quadratically stable with decay rate α and Lyapunov matrix P .
Remark 12. When ν = 2 and B, C are rank one, [29] contains some very easily verifiable conditions which guarantee the existence of P satisfying the above inequalities.
9.
Conclusions. We introduced and discussed basic properties of incrementally quadratically stable (δQS) systems, and considered the particular cases of (asymptotically) time-invariant systems and (asymptotically) periodic-in-time systems: if a time-invariant system is subject to a constant input or a periodic input of period T , then all trajectories of the system converge to a unique trajectory which is respectively constant or periodic with period T . We showed that incremental quadratic stability of a system is equivalent to quadratic stability of its associated derivative system. We presented a characterization of state dependent nonlinearities by means of incremental multiplier matrices, and we formulated conditions guaranteeing δQS of a system in terms of linear matrix inequalities in the Lyapunov and incremental multiplier matrices. These conditions allowed us to obtain a characterization of δQS consisting of frequency domain inequalities involving an associated linear system and the original incremental multiplier matrices. For a differentiable nonlinearity, we formulated a necessary and sufficient condition for incremental multiplier matrices in terms of the derivative of the nonlinearity. Several common classes of nonlinearities were then described by means of incremental multiplier matrices. We finally presented an alternative approach to the analysis of δQS for systems whose nonlinearities have their derivatives in a convex set.
Rewrite (67) as Since N 22 ≤ 0, this is equivalent to Since the above inequality is linear in ∂ψ ∂z (z), we obtain that N 11 + N 12 Θ + Θ T N 21 Θ T N 22 for all Θ ∈ Ω where Ω is the convex hull of the set ∂ψ ∂z (z) : z ∈ R mq . Thus, for all Θ ∈ Ω, that is, I Θ | 10,113.6 | 2013-01-01T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Improved Circuits with Capacitive Feedback for Readout Resistive Sensor Arrays
One of the most suitable ways of distributing a resistive sensor array for reading is an array with M rows and N columns. This allows reduced wiring and a certain degree of parallelism in the implementation, although it also introduces crosstalk effects. Several types of circuits can carry out the analogue-digital conversion of this type of sensors. This article focuses on the use of operational amplifiers with capacitive feedback and FPGAs for this task. Specifically, modifications of a previously reported circuit are proposed to reduce the errors due to the non-idealities of the amplifiers and the I/O drivers of the FPGA. Moreover, calibration algorithms are derived from the analysis of the proposed circuitry to reduce the crosstalk error and improve the accuracy. Finally, the performances of the proposals is evaluated experimentally on an array of resistors and for different ranges.
Introduction
There is a large range of applications which use resistive sensor arrays to obtain information about a specific system, such as temperature sensing [1,2], gas detection [3,4], tactile sensing [5][6][7][8] and others. The complexity of the electronic system necessary to read the information of the array depends on the number of sensors, the number of connections necessary to extract the information, the resistance values of each sensor, and the speed necessary to obtain this information. Moreover, the system is even more complex when subsequent processing, such as in the case of a smart sensor, is required for sending data to a central unit and it is to be done in the circuit which scans the information of the array.
The processing speed of the array signals involves a trade-off with the complexity of the system: if maximum processing speed is required, parallel access to the information of all the individual sensors must be done individually, meaning a high number of wires to carry this information and a large number of processing units in the circuits to receive it.
Indeed, for circuits with maximum parallelism, if the sensors are distributed in a 2 dimensional array with lengths M for rows and N for columns, the number of wires may reach 2ˆMˆN. However, one of the sensor terminals is generally shared, meaning the final number may be reduced to MˆN + 1 [9]. Each of these wires should be connected to a circuit to scan the information of a specific sensor in which the resistance value is translated into a voltage value and subsequently a digital number, meaning MˆN of these circuits would be required. The time to scan the array would approximately coincide with the time to scan an array element.
Readout Circuit with Capacitive Feedback
A circuit with a direct resistive sensor-FPGA interface, without A/D converters, is presented in [13]. This circuit uses operational amplifiers (OAs) with capacitive feedback to implement grounding and reduce crosstalk on the element being tested (hereinafter EBT). This solution is shown in Figure 1. [13] for reading of a resistive array.
The resistance value R ij of an EBT is read by measuring a discharge time through C j . Initially, the charged capacitor maintains voltage in the OA output node, which is interpreted as a 1-logic in a digital processor input pin. As the capacitor discharges, a voltage is reached which makes the input pin interpret the input as a 0-logic level. The measurement will therefore be the time difference between the capacitor charge finishing and receipt of a 0 through the digital processor input pin. [13] sets out a series of arguments in which the choice of an FPGA is suitable as a processing unit. These include, most notably, that each of the inputs can be processed in parallel by the FPGA programmable hardware.
The way the circuit works is described in more detail below. As illustrated in Figure 1, the sensor row lines are the M pin outputs, configured as output of an FPGA. In order to know the resistance of the sensor with value R ij , the pin of row i, PFi, is placed at a high level (a value close to V DD in the FPGA output), and the other row pins at 0-logic value (a value close to 0 V at the FPGA output). This means current will only circulate through the resistors of this row (a single resistor in each column).
In order to duly use this current in a first cycle (CHARGE cycle), all the C j capacitors are charged with V DD voltage in the OA output terminal, maintaining 0 V in the inverter terminal. This is done via two FPGA output buffers (Zero and PV oj in Figure 2). It should be noted that the pins which will be in charge of reading PV oj voltage are configured as output in this phase. The OA will also be disabled (placing the shutdown pin at 0 V) and the FPGA outputs which control the row signals, PF k , will all be 0 V. A new cycle, ACTIVATE, is then entered, activating the OA and placing shutdown at high level.The pin PV oj is configured in high impedance at the same time. Finally, the DISCHARGE cycle is carried out, discharging the capacitors. To do this, the Zero pin is configured in high impedance and the output buffer of the row to be scanned, PFi, is placed at V DD , with all other row buffers remaining at 0 V. At the same time, the OA output voltage reading pins, PV oj , are configured as input. This means the output voltage of all OAs will decrease as the current which crosses the different R ij discharges the capacitors.
Operating in this manner, all PV oj pins will have an input of value V DD volts at the start of the DISCHARGE cycle, which will decrease as time passes through to a value VT j , the threshold voltage for which the PV oj pin input buffers start to interpret the input as a 0-logic which is transmitted to inside the FPGA. The time in which V oj passes from V DD to VT j in the DISCHARGE cycle will be known as ∆t ij . Considering the OA ideal and the resistance value of the sensor constant, a simple analysis would indicate that: in which there is a linear relationship between time measured ∆t ij and resistance value R ij . The main advantages of the circuit proposed in [13] are the elimination of the AD converter and the possibility of parallel processing in the FPGA of the information from the series of sensors which are scanned simultaneously. This configuration therefore allows greater processing speed and a reduced area and consumption.
An additional row, calibration row, is added in [13], in order to not have to evaluate C j and VT j in Equation (1), since these magnitudes are difficult to measure due to their value varying with the supply voltage, time and temperature. This procedure is equivalent to what is known in literature as single-point calibration [15]. However, this type of calibration does not take into account the resistances of the buffers of each row, k, of the FPGA, RB k , which results in an error in estimating R ij . If the RB k values were constant and equal for all buffers, the value could be calculated with a Two-Points calibration [15], which would involve adding a second calibration row to that proposed in [13]. Moreover, RB k although it is constant in a DISCHARGE cycle, it is not equal in all buffers, since it depends on the array resistors it is connected to, meaning even adding a second calibration row would not avoid the errors due to this resistor. It is important to note that we have to evaluate its value and also avoid the effect of crosstalk which comes about from being joined to several array resistors, one resistor per column, a question which a two-point calibration does not resolve. Furthermore, it should be remembered that the OAs used are not ideal and, in consequence, present second-order effects, which also results in crosstalk, as will be shown in more detail below. Finally, [13] does not include any analysis of the possible ranges of resistance values which the circuit may measure correctly.
Improving Circuits with Capacitive Feedback for Readout Resistive Sensor Arrays
Each paragraph in this section analyses and proposes solutions for each of the problems set out above.
Estimation of R ij Considering the Effect of the Resistance of the Row Selection Buffers and Variations in the Values C j and VT j
In order to reduce the effects of variation in measurements C j and VT j and to simultaneously eliminate the effect of the resistance of the row selection buffers, using the circuit in Figure 3 is proposed. The so-called calibration row and column have been added to it (in red). The circuit therefore has M + 1 rows and N + 1 columns. In consequence, N + M + 1 additional resistors with known values need to be used, along with an extra OA in the calibration column.
The operation of the circuit is exactly the same as indicated in the paragraph above, taking into account that there is now one more row to read (for simplicity, the part marked in blue in Figure 2 is not shown). Figure 4 shows the equivalent circuit to scan the resistors of row i. Here the row control buffer has been replaced with the corresponding output resistor RB k . In these buffers the resistors for high status output, RB p k , are different to those shown in a low status output, RB n k , although both can be small (10-50 Ω), due to the CMOS technology used in manufacture. Supposing that the OA are ideal, the value for ∆t ij would be given in the Equation (1), although this does not take into account the different RB k . To illustrate the influence of these resistors, the current, Ic i j , which enters through resistor R ij and discharges capacitor C j when row i is activated, is calculated. If RP i " R i1 ||R i2 || ... ||R iN ||R ic is the parallel of all resistors being scanned, including the calibration resistor, a simple analysis of the circuit of Figure 4 shows: meaning the time to discharge C j at VT j is given by: It is worthwhile noting that the Equation (3) shows the appearance of crosstalk in the circuit, since ∆t ij is no longer a function of a single resistor of the array, R ij , but rather of all resistors of the same row (through RP i ). Hence, ∆t ij should be calculated by way of a non-linear system of N + 1 equations in which it is necessary to know the exact values of C j , VT j , RB p i and the discharge times for all the columns when row i is activated. It is also necessary to take into account that RB p i is not constant and varies throughout the discharge process. The details below indicate how to proceed in order to avoid these inconveniences in the R ij calculation using different array measurement times.
Once the scanning process for all resistors has been completed, these times are stored in the FPGA. Hence, ∆t ic can be used to calculate the ∆t ij /∆t ic coefficient and express R ij based on the Equation (3) as follows: In this expression only the part between parentheses is unknown; however, as there is also ∆t cj and ∆t cc its value can be found by proceeding in the same way as when finding the expression (4): replacing Equation (5) in Equation (4): In the term on the right of Equation (6), once the sensor array has been scanned, all the terms are known and, in consequence, it is not necessary to modify the scanning process indicated in Section 2 in order to obtain the values of the different R ij resistors. The operations necessary to find R ij can be carried out in the FPGA and their results transmitted or processed whilst extracting data from a new sensor array frame.
The Equation (6) can also be used in line with the digital number of cycles (D) which the FPGA uses to measure the different times of this equation. Hence, if ∆t = DˆT s , where T s is the meter clock period, Equation (6) the result is: It should be noted that in Equation (2), in order to deduce R ij , a series of resistors RB k have been used which model the operation of each buffer; however, if R ij is evaluated by way of Equation (6), these effects do not have any influence since RB k does not appear in the expression.
Limitations in the Range of Resistors to Be Measured
In order to use the Equation (6) correctly, it is necessary to determine the range of resistors which can be measured with the circuit of Figure 3. Two main circumstances limit this range. Firstly, the maximum current which the FPGA can provide will depend on each specific model and must be carefully examined in the design stage. For the correct operation of the circuit, it does not matter that the buffer output provides a voltage below V DD in a specific amount, since the limitation actually comes from the maximum current which can be provided by a specific buffer or series of buffers which select the rows of the array without affecting the correct operation of the FPGA.
Secondly, it should be noted that the current provided by each buffer will depend inversely on RP i , and, in consequence, will increase with the number of array columns or with the reduction of the resistance values. This limits the size of the array to be scanned and the possible array resistor ranges. In order to prevent this restriction, the resistor of each row to be scanned can be increased by adding an RS i series resistor at the output of each buffer, which will be added to RB k , thus reducing the current provided by the buffer. Although this resistor is added, the Equation (6) can continue to be used to calculate R ij .
However, there is another limitation related to the range of resistors to be measured, since each resistor of the sensors of a row takes different times to discharge the capacitor of its column. Hence, whilst row i has the lowest possible value resistor, RL (in column l), and another with the highest possible value, RH (in column h), it may occur that the former completely discharges its capacitor to 0 V whilst the latter has not yet achieved output voltage below VT h . In this situation, the capacitor of column l continues to receive current from the sensor resistor, but, as V ol has reached its minimum value, the OA enters non-linear operation mode (even when it is a rail-to-rail OA), meaning the voltage in the OA inverter input node starts to increase, and ceases to be a virtual ground. If this occurs, the crosstalk phenomenon will appear through the resistors of the non-selected rows (as shown in Section 3.4), affecting the resistor time measurement RH which is as yet incomplete.
I RH and I RL are the currents which cross the RH and RL resistors respectively. The relation which must be met in order to prevent the aforementioned situation from coming about is set out below.
The output voltage of a column in DISCHARGE phase is given by: Hence, the time necessary for RL to discharge the capacitor of its column to 0 V will be: In order for crosstalk effect to be prevented, in this time the output voltage of column h must have dropped below the threshold value VT (for the purpose of simplicity, it is considered that the threshold voltages of all rows are equal). Hence, in accordance with Equation (8): replacing Equation (9) in Equation (10) and supposing, for the purpose of simplicity, the capacitors of all the columns are equal, we obtain: The Equation (2) can be used to find the values for I RL and I RH , and replacing them in Equation (11) we obtain: Hence, Equation (12) shows the limitations in the possible sensor resistance values in order for the circuit to work correctly. In the implementation carried out in Section 4, using a Spartan 3 XCS50AN-4TQG144C, with V DD = 3.3 V results in VT = 1.4 V, meaning RH < 1.74ˆRL.
As will also be shown in Section 4, for RL values over, approximately, 3 kΩ, the effects of failing to meet Equation (12) are very small. Moreover, it is possible to prevent increased voltage of the OA inverter node by activating the Zero pin of the FPGA at the moment the output node reaches voltage VT. However, for lower RL values of the array, crosstalk effect is increasingly important and Equation (12) is a serious limitation in the circuit.
Increasing the Range of Resistors
In order to increase the range of resistors to be measured, it is not necessary to modify the design of the Figure 3 but only to carry out the reading of two simultaneous rows: the sensors array, i, and the calibration resistors row, c.
Doing this, there are two resistors through which the capacitor of each column is discharged. If the maximum and minimum resistors of the array, RH and RL, are found again in the row to be scanned, in columns h and l simultaneously, the calibration resistors of the same columns are scanned: R C (all with the same value), so the Equation (11) is now transformed into: Replacing the values of the currents obtained using Equation (2) in this expression, the following can be written: (14) in which the threshold voltages and the capacitors of all the rows have again be taken as equal.
Regrouping all the terms with RH in the left member of Equation (14), the following can be written: A simpler expression can be achieved with the approximation RB p ăă RP (the resistors of the row selection buffers have very low values), meaning Equation (15) obtains an upper limit for RH as:
16)
Comparing the Equations (12) and (16) shows how the range has been extended. The restrictions on RH can also be eliminated by making the denominator of Equation (16) equal to 0. This means that R C should be: In the previous design example R C = 0.74ˆRL. However, if the restriction RB p ăă RP is not met, either because the number of columns is large, the RL values are small, or a series resistor has been introduced with the buffer, then RH will continue to be limited by Equation (15); however, if in this expression the term between square brackets is below or equal to 0, any RH value would be possible. Modifying Equation (15), this condition can be written as: In order to meet the restriction, the R C and RB p i values can be modified (including a series resistor with the buffer, RS i ). Hence, a higher limit can be found for the R C value and a lower limit for RB p i`R S i . These limits will be more restrictive when RP i is maximum, RP imax found in accordance with the following expression: Taking into account all the foregoing, R C can be cleared in Equation (18) obtaining: It is clear that the member of the right of Equation (20) must be greater than 0, but this can always be achieved thanks to RS i , even when the number of columns N is large. However, in order to obtain Equation (20), two rows of the array are selected simultaneously, and, in consequence, the expression Equation (6) is no longer valid since it is not a single current which discharges the capacitor but rather the sum of two: the current which circulates through resistor R ij and the one which circulates through resistor R cj .
In order to find an expression equivalent to Equation (6) and to determine the value of R ij , 4 discharge times are used: ∆t 1 ij , ∆t 1 ic , ∆t cj and ∆t cc . The first two are the discharge times of the columns j and c when rows i and calibration row c are selected simultaneously. ∆t cj and ∆t cc are the discharge times of columns j and c when only the calibration row is selected. The expression Equation (3) can be used to calculate ∆t cj and ∆t cc , whilst to calculate ∆t 1 ij , ∆t 1 ic only two currents discharge the capacitor and in consequence: Taking into account these expressions and using those obtained in Equation (2) for the currents, R ij can be expressed as: Hence, modifying the array scanning procedure and selecting the appropriate values of R C means very wide ranges of resistors can be measured, and, knowing four discharge times, the value of R ij determined. It should be highlighted that the time spent scanning the whole array in order to obtain the data of Equation (6) is the same as the time taken to obtain the data of Equation (22), and that the number of time measurements to be saved in the internal memory of the FPGA is the same. Moreover, the calculations to obtain R ij by Equation (22) can be carried out in the FPGA whilst data are obtained for a new array frame.
Crosstalk Due to the Offset Voltages of the Operational Amplifiers
The OAs present offset voltages, θ, which, depending on the models used, can vary in the range between millivolts and microvolts. As the offset voltages are random values determined by variations in the transistor manufacture processes, each OA (even when it is the same model) can have a different value. Moreover, as the OA non-inverter input voltage is set to ground in the circuit, the offset voltage appears in the OA inverter terminal. For this reason, the inverter input voltages of all OAs can vary, resulting in the appearance of crosstalk.
The circuit used to analyse crosstalk due to the offset of the OAs is indicated in Figure 5, which shows the different offset voltages of the OAs, θ j ; j P t1, 2, ...N, cu and the situation in which a single row of sensors, row i, is scanned is illustrated. All FPGA row control buffers have been replaced with their corresponding resistors, RB p i , for the case of rows selected at V DD or RB n i for rows at 0 V. The value to be found is the current Ic i j which enters capacitor C j . This current has several components. Figure 6 can be used for analysis, showing in detail the components of Ic i j .
Observing Figure 6a, all the currents which flow through the resistors of row i can be set out in accordance with I ij since: Hence, using this equation, we can write: where θ jk = θ j´θk and k = N + 1 is the calibration column. Taking into account that V DD can be expressed: I ij can be cleared using Equation (25): Secondly there are all the currents which, through the rest of the resistors of column j, are drained to ground, Figure 6b. As shown in this figure, I n gj is the current which flows through the resistor, R gj , with g " i. Using the same analysis as for the calculation of I ij in Figure 6a (replacing V DD with 0 V, RB p i with RB n i and taking the appropriate indices) we arrive at: As current is drained to ground through all the resistors of column j with the exception of R ij , this can be calculated as: where g = M + 1 is the calibration row. Replacing Equation (27) and Equation (28) where, separating θ jk = θ j´θk , we can write: an expression which characterises the crosstalk of the circuit when the inverter input of the OAs does not have a virtual ground voltage. It should be noted that this expression can be written in abbreviated form: where F(i) is a function which depends on index i but not on j: and G(j) is a function which depends on index j but not on i: Gpjq "´θ j M`1 ÿ g"1 1 R gj`M`1 ÿ g"1 RB n g¨R P g RB n g`R P g¨1 R gj Moreover, using Equation (28) it is easy to check that: a result which will be used in the following section.
In this manner, dependence on Ic i j is separated in: one term for rows and another for columns and the value of R ij . This will allow us to, as will be seen in the following section, design a strategy to find the value of R ij taking into account the effects of the offset voltages of the OA. It can also be seen how from the Equation (31), with all offset voltages at 0, we can derive the Equation (2).
Returning to the Equation (31), the operation of the crosstalk in the array can be analysed. Hence, if the resistors of buffers RB n and RB p are very small, or small compared to RP, Equation (31) this is simplified: where the crosstalk effect due to resistors of other columns has disappeared. Hence, Ic i j is only modified, with regard to the Equation (2), by the resistors of the same column, j, and by θ j . This may not be the case if an extension were necessary in the range of resistor values, since RS i may have to be added to RB p i if the Equation (20) so requires. However, this provides a design guide, since RS i must be as small as possible in order to reduce the crosstalk.
Equation (31) also shows how an increase in the minimum values of the R ij resistors of the array brings a larger decrease in terms 2 and 4 of the right side of the equation with regards to the first (since these have a quadratic dependence with the array resistors) meaning, if the resistors are not large enough, these terms could be eliminated and Equation (31) would be again reduced to Equation (36).
Moreover, an increase in the number of rows or columns due to the summations which appear in Equation (31) means an increase in crosstalk through terms 2, 3 and 4 of the right member of this equation. In consequence, even with high minimum array resistor values, if this has a large number of rows and columns, the crosstalk effect may be the factor which most influences the current which crosses a resistor, even more than the value of the resistor itself.
R ij Calculation Taking into Account the Offset Voltages of the Operational Amplifiers
This section sets out a simple method to obtain the R ij , values taking into account all the second-order effects presented so far. To apply this method, it is necessary to modify the circuit of Figure 3, adding a second calibration row (these rows will be called c1 and c2), as shown in Figure 7.
Again the method is based on modifying the row reading procedure and the use of different discharge times in order to, firstly, eliminate the term G(j) from the Equation (32) and, secondly, use simple coefficients to find the resistor values, eliminating F(i). The process requires the following steps:
‚
Step-1: A row, i, of the array and row c1 are activated simultaneously. The process is repeated for each of the array rows. A series of M times ∆t 1 ij , the times taken to discharge the different capacitors of the columns when rows i and c1 are activated simultaneously, are therefore obtained.
It should be noted that Steps 2 and 3 are only carried out once during the scanning of all the array rows, and that the three steps can be carried out in any order. The process to obtain R ij based on the previous steps is shown below. Following the same procedure as used to find Equation (29), in Step-1 we would have: where Ic i,c1 j indicates the current which discharges capacitor C j simultaneously activating rows i and c1. Subtracting the current, Ic i j , found during Step-2: we obtain:
43)
The members of the right of Equations (40) and (42) are now divided by the members of the right of Equations (41) and (43), proceeding in the same way for the members of the left. Operating with these ratios finally: As resistances R ic , R c2j and R c2c and the times are known, the value of R ij can be calculated again without taking into account the value of the capacitors, VT or buffer resistors, RB. It should also be noted that the fact that rows i and c1 can be activated simultaneously allows the extension of the array resistor ranges as seen in Section 3.3.
It should be noted that, although the offset voltage has been compensated using the Equation (44), it would be necessary to add to the terms θ j a term V oj /A due to the finite gain of the OA. However, as will be seen in Section 4, this term takes a much lower value than offset voltage, for which reason it is not taken into account.
Elimination of Effects of the Polarization Currents of the Operational Amplifiers
The Equation (44) also take into account the effects of the polarization current which would enter the OA through the inverter terminal, Ib j . In effect, if Ib j is taken into account, Equations (37) and (38) must be modified: As it appears in the same way in both, it disappears when obtaining the subtraction Ic i,c1 j´I c c1 j , as is also the case with the three current subtractions carried out to obtain Equation (44), meaning this equation continues to be valid, even considering the polarization currents.
Materials and Methods
The circuits proposed have been carried out on an FPGA Spartan3AN by Xilinx (XC3S50AN-4TQG144C) [16] with a working frequency of 50 MHz. The meter used by the capture modules is 14 bits, with a base time of 20 ns. The supply voltages are 1.2 V for the core and 3.3 V for the inputs/outputs.
The OAs used are the model TLV2475N [17] by Texas Instruments. Their main characteristics are: CMOS Rail-To-Rail Input/Output, shutdown mode, input offset voltage: 2400µV (max), voltage amplification: 88 dB (min). From these parameters it can be deducted that, as commented in Section 3.5, the voltage which appears in the inverter terminal due to the finite gain is, at the most, 3.3 V/25119 = 0.13 mV, 20 times lower than offset voltage. The sensor array consists of eight rows and six columns. In addition, two rows and one column are used to measure the calibration resistors. Its values, along with the values of the capacitors of each of the columns of the array, are indicated in the following section for each of the experiments carried out.
Results and Discussion
The experiments carried out are detailed below:
Experiment 1
This experiment is carried out in order to analyse the performances of the Equation (6). The results are shown in Table 1. Three resistors (560, 5357.51 and 10018.6 Ω) will be measured using 5350 Ω as nominal value for the calibration resistors. The other resistors of the array take the minimum value 560 Ω, this being the worst situation in terms of crosstalk in the array when evaluating the value of a resistor. The capacitors of each column of the array have a nominal value of 47 nF.
The results of R and σ have been obtained carrying out 500 measurements, whilst the errors of columns 4 and 5 of Table 1 show the worst case for the 500 measurements. The same procedure will be used for the other experiments of this section.
The maximum resistance RH permitted in accordance with the Equation (12) for an RL of 560 Ω is 870 Ω, a condition which is not met in any of the two RL resistors. Indeed it is verified that the results for the two last rows of the table show high absolute and relative errors. The same can be observed in the column which shows the systematic error (|R´R|). This happens even when using the Zero pin of the FPGA, as indicated in Section 2.
Experiment 2
In this case, the Equation (6) is once again used, but for a range of resistors (3.3 kΩ-10 kΩ), the minimum value for which is significantly higher than in the previous case. For the calibration resistors, 6.8 kΩ has been taken as the nominal value. The capacitors of each column of the array continue to be the same as in Experiment 1. In this case the value RH in accordance with Equation (12) is 5723.94 Ω, meaning we continue to have resistors which do not form part of the optimal measurement range. Table 2 shows how both the systematic error and the maximum errors are reduced compared to Experiment 1. This confirms the discussion set out in Section 3.5 on the Equation (30), since in this Experiment the terms 2 and 4 of the right member of the equation are reduced proportionally more than term 1, having increased the minimum value of the array resistor. Moreover, since the RL resistor is much greater than in Experiment 1, the Zero pin of the FPGA achieves a smaller value in the OA inverter input. In consequence, these two experiments show that the Equation (6) only applies in resistive sensors where RL > 3 kΩ.
Experiment 3
As observed in Experiment 1, the results are not as desired for the resistor ranges where RL takes lower values. For this reason, in this experiment the circuit of Figure 7 is implemented, allowing an increase in the range of resistors by modifying the row addressing. In this case there are two approximation methods which can be used: the Equation (22) which allows an increase in the range of resistors permitted, and Equation (44) which, in addition to increasing the range, eliminates the influence of the offset voltage on the EBT estimation. The range of resistors used in this case goes from 560 Ω to 3.3 kΩ. A value of 750 Ω has been chosen for the calibration resistors of row c1 and the calibration column. 990 Ω has been taken as nominal value for the calibration resistors of row c2. The capacitors of each column of the array have a value of 330 nF.
As can be seen in Table 3, the errors, both systematic and relative, using the Equation (22) are much lower than those obtained in experiment 1. Moreover, the maximum resistor of the range used exceeds the maximum value permitted, RH = 2163 Ω, obtained from the Equation (16). Table 3 shows how, in greater RH values, these systematic errors are much higher than the others, although they are lower than those obtained for the same resistors of experiment 1. Table 4 uses the same experimental data as those used in Table 3 but evaluated in accordance with the Equation (44). It can be seen that the systematic error is lower in this case, having considered the effects of offset. However it is observed that the maximum errors are reduced less than the systematic error. This is due to the fact that the Equation (44) uses six independent time measurements to carry out the estimation, whilst Equation (22), only needs four. In this case, no increase is observed in the systematic errors for resistors above value RH.
Experiment 4
This experiment also uses the circuit of Figure 7, and the Equations (22) and (44) to calculate the value of R ij . The only change compared to experiment 3 is the range of resistors to study, 3.3-10 kΩ. A value of 10 kΩ has been chosen for the calibration resistors of row c1 and the calibration column. 6.8 kΩ has been taken as nominal value for the calibration resistors of row c2. The capacitors of each column of the array have a value of 47 nF.
For this range the maximum resistor again exceeds the maximum value permitted, RH = 7560 Ω, obtained from the Equation (16). However, on this occasion, using Equation (22), as indicated in Table 5, does not show any important variations in the systematic and maximum errors for the resistors which exceed RH, as would be expected when increasing RL and for use of the Zero pin.
Again, Table 6 uses the same experimental data as those used in Table 5 but evaluated in accordance with Equation (44). The significant reduction can again be seen in the systematic and maximum errors compared to those obtained by Equation (22).
Conclusions/Outlook
This paper studies the use of a distribution of M rows and N columns for arrays of resistive sensors of different ranges and for many applications, for instance chemical, biological, robotics, etc. This distribution allows access, with a certain degree of parallelism (M simultaneous readings) to the information provided by the sensors. It also allows the use of simple conditioning circuits for analogue-digital conversion. The circuit, which enables a simple connection between the information of the resistive sensor and an FPGA as the converter element, comprises a series of M OAs with capacitive feedback. However, it presents certain limitations due to its inherent nature which reduce its functions. The causes of these limitations include: the appearance of crosstalk, the reduced range of resistors to measure (a function of the minimum resistor of the array), variability and difficulty in measuring parameters and important elements of the circuit (FPGA buffer resistors, the threshold voltages of these buffers, capacitors used in OA feedback) and, finally, the offset voltage and bias currents of the OAs. Different modiffications of the circuit and procedures have been proposed to mitigate each of these limitations. In order to check the effectiveness of the different proposed reading methods, a series of experiments have been carried out for a piezoresistive tactile sensor. Our final proposal achieves a maximum relative systematic error of 0.11% and a maximum relative error of 0.77% for an array with values in the range (556 Ω to 3159 Ω) and a maximum relative systematic error of 0.08% and a maximum relative error of 0.69% for an array with values in the range (3296 Ω to 9975 Ω). Future work will need to evaluate the influence of the finite gain of the OA on the performance of the circuit. It has been shown that in the design proposed in this document, this is much lower than the influence due to offset voltage. However, this may not be the case in other implementations. | 9,147.6 | 2016-01-25T00:00:00.000 | [
"Engineering",
"Physics"
] |
New Technique of Stereolithography to Local Curing in Thermosensitive Resins Using CO(2) Laser
A theoretical and experimental study of thermosensitive resins used in thermal stereolithography is presented. The process of local curing through the application of infrared radiation, which has proved to be useful in a new technique for the making of prototypes by means of selective heating with C02 láser (10.6um), is studied. The ideal composition of the thermosensitive resins has proved to be 10 parts epoxy, 1.4 part diethylene triamine (the curing agent) and 0.7 part silica powder. A physical theoretical model is applied for control of the parameters which influence the confinement of the curing in the irradiated bulk. A mathematical model is applied too; it was developed through the resolution of the heat conduction equation dependent on time in cylindrical co-ordinates, which enables to determine the behaviour of curing in terms of irradiation conditions.
INTRODUCTION
The great world movements in search of optimization in prototype production has placed stereolithography as a powerful technique used for obtaining tridimensional models.Usually, the conventional technique in stereolithography makes use of systems of application of ultraviolet light on photosensitive resins in the curing process, by operating a HeCd (0.352 |um) láser for obtaining prototypes (1 y 2).This work presents the study of an innovator technique of application of C0 2 (10.6 |uim) láser infrared radiation on thermosensitive resins by means of stereolithography (3).It is believed to be a really new and advantageous method as compared with the process mentioned before.In previous works, the behaviour of the influence parameters as well as the characteristics of the resins in the process of local curing were studied and defined (4).The results were satisfactory as a starting point for the determination of a proper theoretical model ( 5), aiming at the accurate determination of the local curing.In order we might come to such a composition for local curing, a detailed study was fulfilled considering the amount of silica in the process.The experimental analysis were carried out in two stages -one of thermal characterisation and another of optical characterisation of the thermosensitive resins.The results determined the temperature range at which the curing starts and the behaviour of the absorption depth in terms of the silica variation in the composition.A mathematical model was applied by solving the equation for the conduction of heat dependent on time in the cylindrical co-ordinates, which permits to foresee the behaviour of the curing in terms of irradiation conditions.
THEORETICAL MODEL
A physical theoretical model ( 6) has been worked out aiming at the exact characterisation of every physical phenomenon occurred in the process of local curing.The model describes the energy flow deposited by the láser in terms of the control of the operational parameters and the behaviour of the resin, aiming at the local curing.The determination of how the released energy was distributed is essential for obtaining the local curing.The local curing was achieved by scanning a continuous wave (cw) C0 2 láser repeatedly over a circular trajectory on the sample's surface with a sean speed.By dividing the beam diameter by the sean speed, the 'dwell time': is obtained concerning the time of interaction laser/resin at a surface point.As the resin is highly absorptive at the C0 2 láser wavelength (10.6 ¡jm), it is assumed that, during the 'dwell time', nearly all the beam energy has gone into the inner part of the sample at a distance from the surface equivalent to the absorption depth, 8.It is assumed that energy E has been absorbed in the small cylindrical volume V during the 'dwell time', the volume being defined as: The energy released in V is the product of the láser power by 'dwell time': By means energy, E, it is possible to determine the variation of temperature, which is proportional to the deposited energy concerning to specific heat, C p , and mass, m, of the material contained in volume, V, according to the following equation: The mass of the heated volume may be calculated by using the mass density of the sample p = 1,16 g/cm 3 .A numencal solution was applied based on the Finite Differences Method, developed to the general equation for conduction of heat (7).If it is assumed that nearly all the flow of energy deposited by the láser beam is absorbed every moment the láser passes at a point on the surface of the sample, it follows that the irradiated volume will undergo a temperature increase which is determined by the expression: Ar = -Sg-= p 2 Xá [5] raC p 7ico 8pC p As it concerns to a láser linear heating rate, the general equation for conduction of heat dependent on time is applied as follows: DdT K where D stands for the thermal diffusivity of the sample, K is the thermal conductivity, and G describes the rate of energy generated by the láser source.As it concerns to a Gaussian profile (8) of intensity of the láser beam, the term of source G generated by the C0 2 láser may be expressed by: is the function which considers the term of source in the general equation, r is the distance from the beam center, z is the depth of the sample surface, 8 is the absorption depth, P is the output power of the láser.The displacement sean speed over the trajectory of the circular way resulted in repetition rate of the láser equal to , and the 'dwell time' of 377 (as for a beam focused to of the radius.These parameters were used vin the numencal simulation, the constant of radial thermal transient a being defined as follows: = co 2 /D, where D = 22 x 10~5 cirros -1 , thus a is approximately lOs for the radial transient.The Crank-Nickelson method of infinite differences was used to solve the equation for the conduction heat dependent on time in the cylindrical co-ordinates.The thermal conductivity of the epoxy sample is K = 0.359 x 10-3 mW/cmK, which is near the conductivity of air K = 0.24 x 10~3 mW/cmK.As it concerns to the theoretical model, it was assumed that thermal properties K and D, as well as the optical properties 8 and reflectivity are all of them independent of temperature.The influence of silica in the process of local curing was disregarded.
Stage I
Initially, for the thermal characterization of the thermosensitive resin, it was essential the study of the sample behaviour under the influence of the heat action.For that purpose, it was decided to simúlate experimentally how the sample reacts when it is submitted to an external source of heat.In order to carry out the experimental analysis, the experimental apparatus was used, in which a small semi-liquid volume of the sample was heated at different external temperatures.Although the result of the local curing may also be obtained with polyester resin, the decisión to use the epoxy resin (# D.E.R. 330) was taken because of its low coefficient of thermal diffusivity D = 22 x 10~5 cm 2 -s _1 , as well as its appropriate viscosity, thermosensitivy and stability in the course of the curing process.In order that the explanations might be coherent, we were firstly decided to characterise the curing process occuring in the bulk of the sample.For that purpose, a small volume of liquid sample was heated in a beaker wrapped in heat tape, which enabled to control the parameters which could bring information concerning the curing process.Initially, the temperature of the sample volume in terms of time was monitored by means of a thermocouple by applying different surrounding temperatures (external).Temperatures of 37, 44, 57, 65, 73 and 80 °C were selected for the analysis of the curing behaviour of the sample.The results obtained in the graphics initially indicate a similar behaviour in the evolution process of the curing at the different temperatures applied; however, it is clearly observed a much faster answer to the curing as the temperature increases.It is also noticed that the onset of the curing process occurs in a very narrow temperature range of the sample (onset the curing).Such a variation arises out of the difficulty in controlling stoichiometry when preparing the sample.
Stage II
The optical characterization of the properties of the thermosensitive resin will depend on the sample composition, which in proper conditions permits obtaining information about the absorption depth.Therefore, these analyses have proved to be of great use for the study of the samples behaviour in terms of the silica absorption of energy, since each sample was submitted to a variation in the amount of silica in the composition.The absorption depth, as defined in the theoretical physical model, is essential for the definition of the volume of the cured sample.If the absorption depth determines the radiation depth in the sample, it follows that the amount of silica in the composition may determine the dimensión of the cured volume, seeing that as the amount of silica changes, there occurs a variation in the absorption depth.In order to determine the absorption depth in each sample, since the semi-liquid sample was inserted between two KBr crystals (transparent to infrared light).An analytical solution -by applying the Lambert-Beer Law Í/ T =¡ Q e~~x / ^V was established.Considering that the intensity is partly reflected and attenuated by the crystals; through the analytical solution it is possible to determine the depth of energy absorption by the sample.
Thus, by applying the Lambert-Beer Law, a satisfactory estímate is obtained as follows: 2 e-a °ix «+ x "Vt [8] where: / T //Q = sample transmittance, R = KBr crystal reflectance, x c = crystal thickness, a = optical absorption coefficient of the crystal, 8 = absorption depth.
If 5 is detached from eq, [8], it follows: where T stands for transmittance.The analysis of the optical absorption depth of the samples have shown how silica is important in the calculation of the samples behaviour under the influence of infrared radiation.In order to facilítate the accomplishment of these experiments, the proportion of the silica in the composition (which permits the local curing of the resin) was considered as the basis, where for each sample composition the following proportions were added in parts of the reagents.
RESULTS AND DISCUSSIONS
The experiments concerning samples under exothermic conditions indicate the possibility of influencing the curing of the sample through the control of factors which affect the speeds of reaction, and determine the moment the reaction starts.As suggested by the experiment in question it was important because, traditionally, it is possible to determine the activation energy of the reaction involved in the curing process with the obtaining of the complete temperature curve for the sample in the function of time.This result permits to establish a mathematical relationship between the parameters involved in the curing and to obtain the activation energy of the reaction.
However, the experimental difficulties are clear along this process, which does not make it possible to obtain the activation energy accurately, since the highest temperature point of reaction has not been attained.For example, to determine the reaction rate more accurately, as a function of temperature, DSC (differential scanning calorimetry) should be used.The study of the sample by means of DSC is being developed and aims to determine the conversión rate of the sample in the curing, in function of time as well as to determine the activation energy of the sample, which is estimated to be around E a = 50kcal/mol.The simple analytical solution established in terms of the sample transmittance and thickness was used for the analysis of the behaviour of the absorption depth in semi-liquid samples with variation of silica in the composition.
From a theoretical and experimental viewpoint, silica plays an important part in controlling the local curing, as it hinders heat diffusion to zones which are outside of the irradiated área.The amount of silica is understood to be critical in the process of local curing, since if silica occurs excessively in the composition, it restricts the curing of the reagents and absorbs all the energy.Thus, the curing is not complete, even if it occurs.On the other hand, if silica occurs in a small amount, heat may diffuse to undesirable áreas, which facilitates the curing among the reagents and makes the local curing impracticable.Notwithstanding, it is possible to control the thickness of the cured layers when the absorption depth is known through the control of the silica amount added to the composition.Thus, the energy penetration in the sample is defíned.The interest in analyzing in terms of the variation of silica powder is associated with the obtaining of local curing, since it is the amount of silica powder that will determine the depth of energy entering into the irradiated sample (Fig. 2), and consequently the thickness of the cured part.Considering the importance of silica in the formation of the appropriate composition of the thermosensitive resin, it is indispensable to analyse the effect since it is considered to be one of the main parameters in the physical interpretation and development of the process to obtain local curing.
A further preoccupation was to créate a physical model with láser operational parameters which could determine the boundary conditions of the láser application on the sample for the obtaining of the local curing.A careful control of these parameters (sean speed, power output and depth absorption) is essential for the confinement of the curing in the volume specified in the eq.[2].In order to know the importance of the parameters which are involved, an experiment was made by scanning the beam through a circular trajectory at n-•-i-•-i-«-i-•-i-'-i-• .Local curing composition 0,000 0,025 0,050 0,075 the sample surface, at a sean speed v = 159.2cm-s-1 , which results in a láser repetition rate of 35 ms, a dwell time of x á = 377 JLLS was obtained.The láser beam was focused to lie of the 0.3 mm radius, so that by using an absorption depth of 5 = 22.4 |us, the energy deposited by the láser at each 'dwell time' in the 0.33 x 10~2 mnr 3 volume was calculated.The láser being operating continuously with P = 20 W, the deposited energy was found to be E p = 7.6 mJ approximately, the numerical model has shown to be quite flexible on describing the complete bidimensional mapping of the thermal evolution in the sample bulk.The distribution of temperature at each point of the attained volume and over the sample surface was obtained through numerical simulation.Figure 3 shows the distribution of temperature along áreas situated over and below the sample surface with the isothermal profíle described for lie of the highest temperature reached in the sample.The results produced by the numerical model proved the possibility of confíning the curing within the same dimensión of the láser beam.They are in conformity with the preliminary experimental results where tridimensional pieces were built by means of the superposition of layers with individual thickness varying from 0.1 to 0.2 mm.
CONCLUSIONS
The results of the study in the process of local curing of a semi-liquid sample, composed of epoxy resin, diethylentriamine and powdered silica, with the use of a C0 2 láser as selective source of heat, were presented.The use of infrared láser to lócate the curing of thermosensitive resins is new, and it is believed to be an advantageous method as compared with the conventionally used process.It was observed that local curing only takes place with a specifíc composition of the sample.In order that the curing might oceur, the sample surface was swept by the láser beam along a circular trajectory, at an appropriate speed.In order to adapt the process of local curing, with the use of infrared radiation, for the obtaining of plástic pieces of unlimited geometry, it is necessary the effective control of the parameters involved, including láser power, 'dwell time' or exposition time, speed and frequency of displacement, the láser beam dimensión, the thermal conductivity of the sample, and the reagents mixture.Numerical simulation has proved successful as it concerns the lateral confmement and the depth of the curing as dimensioned by the láser beam, with excellent spatial resolution, and with no shrinkage of the sample after the curing, to the detriment of the final product.Interesting results were obtained with the experiment related to the application of external temperature fo the sample, which affeets the reaction speed of the curing.For this experiments it is concluded that, if the boundary conditions are improved, it will be possible to obtain, for example, the activation energy of the reaction for the sample in question at a low cost, in a simpler experiment, since the reaction is exotermic.The application of DSC is already renowned and involves sophisticated equipment of high cost.However, it presents very good results for the obtaining of the activation energy of the reaction. | 3,889.2 | 1998-04-30T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Atomic force microscopy phase imaging of epitaxial graphene films
Dynamic mode atomic force microscopy phase imaging is known to produce distinct contrast between graphene areas of different atomic thickness. But the intrinsic complexity of the processes controlling the tip motion and the phase angle shift excludes its use as an independent technique for a quantitative type of analysis. By investigating the relationship between the phase shift, the tip-surface interaction, and the thickness of the epitaxial graphene areas grown on silicon carbide, we shed light on the origin of such phase contrast, and on the complex energy dissipation processes underlying phase imaging. In particular, we study the behavior of phase shift and energy dissipation when imaging the interfacial buffer layer, single-layer, and bilayer graphene regions as a function of the tip-surface separation and the interaction forces. Finally, we compare these results with those obtained on differently-grown quasi free standing single- and bilayer graphene samples.
Introduction
In recent years, the epitaxial growth of graphene on silicon carbide (SiC) is gaining interest thanks to its ability to provide large area, high quality graphene films that are suitable for a variety of promising technological applications [1], including electronic [2,3], mechanical [4,5] and optoelectronic [6,7] systems. Epitaxial graphene (EG) continuous films are grown by high temperature sublimation of silicon atoms from SiC substrates [1,8]. Due to the complex growth dynamics, the graphene films generally show heterogeneous surfaces, which encompasses regions with non-uniform thicknesses and properties. Given these premises, a large scientific effort is underway to investigate the fundamental properties of EG films, using and integrating non-invasive and versatile characterization techniques to rapidly gather information from the heterogeneous surface of the studied atomic thin film [9][10][11][12].
Atomic force microscopy (AFM)-based methodologies stand out for their ability to locally map the sample characteristics down to the nanometer scale, and for their operational simplicity and flexibility, which allow numerous characterization and nanomanipulation experiments to be performed in situ on the same sample area, and at the same time [13][14][15]. Among the different AFM techniques, dynamic mode AFM, and in particular AFM phase shift imaging [16][17][18][19], represents a simple technique to achieve thin films nanoscale surface characterization, free of restriction on experimental conditions and operational instrumentations. In phase imaging, contrast arises from the local changes in the energy dissipated during the oscillation of the tip over the sample surface [20]. Recording the phase shift between the excitation oscillating force and the tip response while scanning the sample has been used to map with high spatial resolution compositional information of heterogeneous surfaces [18,21,22]. Nevertheless, since the contributing forces related to the tip-surface energy dissipation are not trivial to distinguish and isolate, and depend on a variety of experimental factors, phase Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
imaging is still regarded as not quantitative [23], and very few models have been provided to explain the origins of the phase contrast. The community has been aware about the capability of AFM phase imaging to obtain distinct contrast among graphene regions of different thickness, in air and at room temperature [23][24][25][26]. However, to our best knowledge, no experiments nor models have tried to explore the origin of the phase contrast in EG films. Integrated with complementary, quantitative AFM techniques, such as friction force microscopy (FFM) or Kelvin probe force microscopy, which are able to distinguish and identify the number of graphene layers on a non-homogenous surface [27,28], AFM phase imaging can provide a complete picture of the EG properties, and shed light on how the energy dissipation mechanisms vary with the surface composition.
In this paper we explore AFM phase mapping of EG film with heterogeneous surface composition, and study the behavior of the phase shift in different regimes of oscillation and different imaging conditions, to understand the evolution of the energy dissipation mechanisms in EG, and how they relate to the thickness of the different graphene domains. Finally, we compare the dissipative processes occurring in conventional EG films with those in quasi free standing single-and bilayer graphene films (see Methods section), to explore the effect that different growth procedures and different layer structures, including the presence of the carbon interfacial buffer layer and intercalated hydrogen, may have on the tip-graphene interaction forces.
Methods
The EG samples studied in this work are synthesized on the silicon terminated face of a 4H-SiC wafer by the confinement-controlled sublimation method [8]. Referring to the inset panel in figure 1(b), we thus consider single-layer (1LG) and bilayer (2LG) EG respectively the first and the second graphene layer overlying the interfacial buffer layer (BfL). Although this process has been extensively studied to yield large area, uniform single-layer films [29], the control of thickness distribution still remains a challenging task due to the rapid and complex evolution of the growing process. This non-uniformity allows to find graphene domains with different number of layers (BfL, 1LG and 2LG) within a small scanning area (less than 2 μm 2 ) and instantly compare contrast arising from phase mapping. The quasi free standing monolayer graphene (QF1LG) and quasi free standing bilayer graphene samples are prepared by hydrogen intercalation of the SiC/buffer layer interface in a buffer layer and single-layer EG sample, respectively, following the procedure indicated in [30]. Our experiments reported in figures 3 and 4, are performed on a Bruker Multimode 8 AFM, using a polycrystalline (1), representing the two oscillation and interaction regimes the tip can experience approaching the sample surface. At resonance, the phase shift angle j phy is exactly 90°. In the attractive regime, the phase angle j phy increases from 90°to 180°by decreasing the tip-surface separation, while for repulsive regime the phase j phy decreases from 90°to 0°. The arrow connecting the two solution branches represents the transition between the two interaction states. The monotonic dotted lines represent the two solutions with no inelastic terms and energy dissipation. diamond-coated silicon tip (resonance frequency f 0 ∼400 kHz, spring constant k ∼ 90 N m −1 , quality factor Q ∼ 800). The phase imaging experiments reported in the manuscript are carried out at room temperature and in well-defined, controlled relative humidity conditions, which are specified in the description of the respective experimental set-up. The relative humidity is constantly monitored throughout the course of each experiment.
Experimental results and discussion AFM phase imaging operation The phase signal is recorded during conventional dynamic AFM mode experiments. In amplitude-modulation-AFM (AM or tapping mode AFM), the dynamics of the cantilever-tip can be modeled by a driven damped harmonic oscillator [17], whose motion is defined by its mechanical characteristics (i.e. spring constant, quality factor and resonance frequency) and by the extent of the tip-surface interaction forces. They are considered as a combination of elastic restoring and dissipative components, including long-range attractive van der Waals interactions, viscoelastic damping, adhesion and capillary forces, and short-range repulsive interactions, which are related to the material stiffness.
In tapping mode operations, the cantilever is mechanically oscillated at a fixed frequency, close to the first natural frequency of the cantilever, while being scanned over the specimen. Far from the sample surface, the cantilever oscillates freely, driven at a specific amplitude decided by the user, called free oscillation amplitude (A 0 ), and with a free phase angle j phy , usually 90°. The proximity of the tip to the sample surface influences both the nature and intensity of the interaction forces, thus causing the amplitude of the oscillation to be damped from the free A 0 and the phase angle to shift from its initial value. In the AM-AFM mode the amplitude of the interacting damped cantilever is used as the feedback parameter to track the topography of the sample. Since the extent of the tip-surface interaction is inversely proportional to their reciprocal distance, by decreasing the oscillation amplitude set-point (A sp , namely the amplitude at which the cantilever is set to oscillate), normalized to the free oscillation amplitude (A sp /A 0 ), it is possible to approach the tip to the sample surface. On the other hand, the shift of the phase angle j phy is directly related to the energy dissipation (E dis ) associated with inelastic tip-surface interactions-see equation (1)-as described below [18]: where k is the spring constant of the cantilever, and Q the quality factor. Usually the oscillation frequency f is set equal to the resonance frequency f 0 . Therefore, the phase imaging contrast arises from local variations of the tipsurface energy dissipation, thus revealing features beyond topography and inferring information about the chemical/mechanical/electrical heterogeneity of a surface. The intrinsic nonlinear character of the tip motion in AM-AFM and the participation of attractive and repulsive interactions give rise to the coexistence of two stable oscillation states, both allowed solutions for equation (1), and represented in figure 1(a) by the solid lines. The two possible oscillation regimes are distinguished by the prevalence of specific interacting forces and dissipative processes. The first branch, where the phase angle j phy shifts from 90°up to 180°, corresponds to the attractive regime, where energy dissipation is dominated by long-range, attractive forces. The other solution corresponds to the repulsive regime. Here the phase angle j phy decreases from 90°to 0°, and the tip-surface interaction is characterized by short-range, repulsive forces. The dotted lines in figure 1(a) represent the phase angle solutions in the absence of dissipative process: the larger the dissipative phenomena perturbing the tip oscillation, the bigger the deviation from the linear, conservative solutions [31,32]. While introducing AFM phase mapping, one clarification is due: whereas the phase angle of the oscillating tip j phy has a sinusoidal dependence on the tip-surface dissipation-equation (1)-and varies between 0°and 180°, being centered at 90°for free oscillation, the phase angle Φ AFM extracted during AFM phase imaging is measured relatively to the free oscillation, and represents the shift from 90°. We can thus relate the two angles accordingly: Therefore, Φ AFM angle varies between 0°and −90°in the attractive regime, and between 0°and +90°in the repulsive regime. Figure 1(b) displays a tapping-mode AFM image of the surface topography of the EG film taken simultaneously with the corresponding phase shift (Φ AFM ) image, see figure 1(c). The same surface area is also imaged by contact mode AFM to acquire a friction map, see figure 1(d). In the tapping-AFM topography image of figure 1(b) it is possible to recognize the terraces typical of EG films, which originate from the annealing of SiC, with widths spanning from hundreds of nanometers to few micrometers. It is very difficult to discern the number of graphene layers from the topographical image, since variations in the height profile do not necessarily follow the effective changes in graphene thickness. This is due to two main reasons: first because additional graphene layers grow underneath the first one following the sublimation and out-diffusion of Si [33]-i.e. lower height may indicate a larger number of layers-and second because of the intrinsic step structure of the SiC substrate. On the other hand, phase imaging of the same area in figure 1(c) provides a very different scenario, where some terraces with a clear different height in the topographical image show no contrast in the phase image. Furthermore, it is possible to observe new features with significant contrast and three different populations of phase values appearing in the phase image. We argue that these features represent different graphene domains, with a specific number of graphene layers. Raman spectroscopy performed on this EG sample shows that it is composed of mainly single-layer graphene with a minor presence of regions with ±1 graphene layers, i.e. BfL and 2LG, as suggested by the width of the 2D peak of the Raman spectra (see supplementary material figure S1 is available online at stacks.iop.org/JPMATER/3/024005/mmedia). In view of the abundance of data available in literature about the relationship between friction force and number of layers in EG films [28,34,35], we perform FFM in the same region displayed in figures 1(b) and (c), in order to comprehend the distribution of the number of graphene layers and relate it to the phase contrast. Friction images are acquired by simply switching from tapping to contact-mode AFM during the same set of experiments and using the same tip. The friction force map displayed in figure 1(d) shows the same features observed in the phase image, indicating the presence of three populations of friction values, that spatially correspond to those of the phase image. Following the results reported in literature [28], we can assign the exact number of layers to each population of friction values. In particular, we identify regions with high friction (average value of around 12 nN) to the buffer layer, the population with mid friction (average value: 1.35 nN) to 1LG, and the population with lower friction (average value: 0.8 nN) to 2LG. In fact, friction forces on 2LG regions are well known to be smaller than those measured on 1LG, and both these regions exhibit much lower friction compared to the buffer layer. The 1.7±0.1 ratio between the 1LG and 2LG lateral forces is in good agreement with data available in literature [28], as well as the factor of 10 found between friction forces on 1LG and BfL [28]. Considering the exact spatial matching between domains in figures 1(c) and (d), we can assign a specific number of layers to the phase image regions of different contrast, as labeled in figure 1(c). Considering that phase imaging does not have any specific restrictions in terms of experimental conditions and operational instrumentations, pairing phase imaging with other quantitative AFM techniques (as FFM), could result useful and versatile in executing a variety of AFM measurements. For example, after determining the distribution of the number of graphene layers on the surface via FFM, it is possible to use phase imaging to promptly locate different graphene domains and perform targeted, in situ mechanical/indentation experiments [36] (possible using the same diamond-coated tip used for acquiring phase and FFM maps) and extract data selectively, without the need to change cantilever or surface location between different measurements (see figure S2 in the supplementary material).
Despite being able to distinguish regions of the film surface with different atomic thickness, phase imaging has been disregarded so far as a quantitative method to assess the exact number of atomic graphene layers in EG [23]. The tip-surface interaction forces, regulating the energy dissipation processes that cause the phase contrast to emerge, and the coexistence and transition between the two oscillating states depend on several and disparate factors, including initial experimental conditions (e.g. the tip-sample rest separation, the driving force, the driving frequency and its deviation from the cantilever natural frequency, the environmental relative humidity, the temperature) and operational parameters (e.g. the free oscillation amplitude A 0 , and the cantilever specifications k, Q, f 0 , among the others), together with the sample properties (e.g. elastic modulus, hydrophilicity) [37,38]. Furthermore, modulating the amplitude of cantilever oscillation, A sp , during the tapping mode imaging gives access to different oscillating regimes. According to the dominating interaction forces, graphene surface domains show different properties and responses. Consequently the displayed phase values and contrast vary with the experienced oscillation regime and the occurring conditions [19], making phase imaging additionally complicated to reproduce. Comparing the phase values obtained during phase imaging, Φ AFM, as displayed in figures 2(a) and (b), acquired on the same region of the EG sample first at amplitude ratio A sp /A 0 =0.9 and then at A sp /A 0 =0.5, respectively, we can observe a variation in the phase contrast between 1LG and 2LG domains. Whereas for high amplitude ratio no significant phase difference is discernible among graphene domains, the appearance of phase contrast while lowering amplitude ratiosrecalling the relationship reported in equation (1)-suggests alterations of both the tip-surface energy dissipation mechanisms, and the response of the two domains to the interaction forces. This shows the difficulty related to the use of phase imaging as an independent quantitative technique for the identification of graphene domains of different thickness. But it opens up to interesting questions regarding the evolution of energy dissipation mechanisms in different oscillating conditions and how these interactions vary with surface morphology.
Origin of energy dissipation
To further understand the relationship between phase angle and dissipative processes, and to obtain a more quantitative meaning of the phase values, as well as to explore their reproducibility, we record the variations of the phase angle as a function of the tip-sample separation (namely A sp /A 0 ) for BfL, 1LG and 2LG domains of the EG sample. The experiment is carried out using a polycrystalline diamond-coated silicon tip (see Methods for tip specifications), at a relative humidity of RH∼ 30% (constantly monitored during the experiment), with a free oscillation amplitude A 0 =17 nm (details of the calibration of the free amplitude can be find in figure S3 of the supplementary material). Results are displayed in the top row of figures 3(a)-(c)-(e). We can see that the phase angle measured at moderate and soft tapping (A sp /A 0 =0.4−1.0) show values above 90°, suggesting that the tip is operating primarily in the attractive regime, which seems reasonable given that the contribution of attractive forces is more noticeable for small free amplitudes and stiff materials [38]. However, it is important to stress that, to an extent related to the initial A 0 , during each oscillation cycle the tip feels the relative influence of both long-and short-range forces, whose intensities and contributions to the energy dissipation and phase angle vary with the tip-surface separation, and in general the dynamics of oscillations mirrors a combination of both Long-range regime-soft tapping Long-range dissipative processes do not imply mechanical contact, are defined by attractive interfacial van der Waals interaction forces, and are usually derived by the non-retarded van der Waals energy and the Hamaker approach [38,39]. In this soft tapping regime (A sp /A 0 >0.7), the energy dissipation is proportional to the Hamaker constant (H) of the material, and proportional to the inverse of the tip-sample distance, according to the following [39] where R is the tip radius and d 1 and d 2 are the closest and the farthest tip-surface separation during the oscillation cycle. In particular, in this regime, we observe that the energy dissipated by the three graphene domains increases monotonically with the decrease of the amplitude ratio A sp /A 0 , as both d 1 and d 2 decreases while the tip approaches the sample surface, but there is no noticeable difference among their respective phase values or E dis , as shown in figures 3(a) and (b), respectively. It is well known that for single and bilayer graphene, surface forces are heavily influenced by the underlying substrate [40]. Therefore, the Hamaker coefficient and the surface energy sensed by the oscillating tip are mainly due to the SiC. The highly hydrophilic character of the SiC substrate, together with the wetting transparency of extremely thin graphene, may explain the absence of appreciable differences in the dissipated energy for BfL, 1LG and 2LG in the long-range regime, as the tip mostly senses the properties of the substrate. In addition, more hydrophilic substrates, as SiC or SiO 2 , have a more pronounced effect even for thicker graphene films [40].
Short-range regime-moderate tapping
When the tip amplitude is further decreased and it enters a more short-range regime, i.e. A sp /A 0 <0.7, we observe the emergence of a sensible difference between the values of phase angle-see figure 3(c)-and E dis for the different number of layers, with BfL and 1LG showing higher dissipation than 2LG, as shown in figure 3(d).
By decreasing the tip-surface separation, short-range attractive forces, like adhesion and capillary forces, become predominant, and the energy dissipation is mostly due to the presence of adhesion hysteresis [39]. Interacting forces in this regime can be calculated by the Derjaguin-Muller-Toporov model, and the total energy dissipation is directly proportional to both surface energy hysteresis and sample deformation, according to the following [20]: Here, δ is the sample deformation and Δγ is the difference in the surface energy between the approaching and retracting curves, indicative of the adhesion hysteresis. Since in this phase the contact time is minimal, the contribution of the sample deformation to the total energy dissipated is negligible compared to the surface adhesion hysteresis. Indeed, the energy dissipation is mostly related to adhesion effects, and the observed E dis (BfL)>E dis (1L)>E dis (2L) is likely due to the wettability of the different domains, as their hydrophilicity follows the same trend BfL>1LG>2LG [41,42]. Measurements obtained on thicker (>2LG) domains seem to corroborate this picture (for further details, see figure S5 in the supplementary material). We said that in this regime the energy dissipated during the oscillation cycle is proportional to the difference between the approach and retraction surface energies, which is defined by the area enclosed by the approach and retraction force curves. Initially, E dis continues to increase with decreasing amplitude ratio, as adhesive interactions increase approaching the surface. However, at a certain distance, a reduction of the tip-surface separation, and of the oscillation amplitude A sp , entails also a reduction of the force-distance area enclosed by the approaching and retraction curves, which again represents the adhesion hysteresis loop. Consequently, these competing effects slow down E dis increasing rate, resulting in a flattening of the E dis curve, and eventually causing an inversion of E dis dependence on amplitude ratio, with E dis decreasing with decreasing A sp , as observed in figure 3(f) and explained in the following section.
Short-range regime-hard tapping For hard tapping (A sp <0.3) the dynamics of the oscillating tip enters the repulsive regime, which involves a significant mechanical hard contact time between the tip and the sample. Here, the phase j phy approaches values below 90°, as shown in figure 3(e). The sudden transition observed between the two regimes is typical of stiff materials and small A 0 [38]. It is usually discouraged to work under conditions close to the attractive-repulsive transition, as it is characterized by unstable tip motion and unstable phase values (for further details, see figure S6 of the supplementary material). For this very low tip-surface separation, contact time represents an important percentage of the total period of oscillation, and surface indentation is not negligible anymore. As displayed in figure 3(f), we consequently see an inversion of the energy dissipated by 1LG and 2LG, which is now higher for the latter, resulting from the effect of a higher indentation depth. This data is in agreement with nanoindentation measurements performed on EG [12] that proved higher contact stiffness for 1LG compared to 2LG. As the tip oscillates in hard tapping conditions, together with the transition to the repulsive regime we observe a drop in the dissipated energy. In fact, the non-conservative contribution from adhesion interaction is now considerably reduced (due to small approach and retraction force loop), and we also have the emergence of elastic, conservative effects. In order to corroborate the proposed model, for which phase contrast in the attractive regime arises in the three graphene domains largely because of their different hydrophilicity character, and to monitor the effect of environmental humidity on phase shift and energy dissipation, we performed a comparative experiment recording phase angle j phy as a function of the amplitude ratio A sp /A 0 in two different controlled humidity conditions, namely moderate relative humidity conditions (RH∼25%) and dry conditions (RH<5%). The two resulting graphs, together with the experimental details, are available in the supplementary material (see figure S7). While the data retrieved from the experiment in humid conditions mirror those presented in figure 3, we observe some differences in the results obtained in dry conditions. Here, the phase shift j phy monotonically increases towards 180°-see figure S7(a). This behavior is typical of interactions with no or reduced nonconservative dissipative processes, and there is no transition to a short-range adhesive, attractive regime for moderate tapping. Moreover, in this condition no significant difference is noticeable among phase values of 1LG, 2LG and BfL for any amplitude ratio, suggesting the predominance of long-range interfacial interactions, as also observed in figure 3 only for high A sp /A 0 ratios. Considering the reduced amount of water present on the sample surface in a dry environment, this picture seems to corroborate the idea that differences in phase angles and energy dissipation among the three graphene domains emerge in the attractive regime due to adhesion hysteresis. And that adhesion hysteresis contributes to the greatest extent to the energy dissipated in the attractive regime, as confirmed also by the higher E dis values for the humid experiment compared to the dry experiment- figure S7(b).
Quasi free standing graphene
To further explore the mechanism underlying energy dissipation in EG and the role of atomic structure, we compare the results obtained on the EG sample to those found in quasi free standing single-(QF1LG) and bilayer (QF2LG) graphene samples, where the buffer layer has been converted to free standing graphene by hydrogen intercalation (see Methods section). In particular, we recall that QF1LG is obtained by hydrogen intercalation at the buffer layer-SiC interface, and is equivalent to the BfL in terms of atomic thickness, while QF2LG is obtained from single-layer EG, and has therefore equivalent thickness to 1LG. Topography and phase imaging maps sampled from QF1LG and QF2LG are available in the supplementary material (see figure S9). The quasi free standing samples are examined immediately after the EG sample, using the exact same parameters employed in the experiment shown in figure 3. Figures 4(a) and (b) show respectively the variation of the phase angle j phy and the energy dissipated E dis , recorded while decreasing the A sp /A 0 ratio, for the three samples measured. In the long-range attractive force regime (A sp /A 0 >0.7) we can observe no substantial difference in the behavior of the epitaxial and quasi free standing samples. This seems to confirm the explanation presented in the previous paragraph for the long range dissipative processes in EG, and extend it to the intercalated samples. Since all the samples share similar E dis values, long range tip-sample energy dissipation does not see dependence on graphene atomic thickness, and it is mostly affected by substrate interactions. Entering the short-range attractive regime (A sp /A 0 <0.7) the QF2LG follows the behavior of the equivalent epitaxial 1LG, suggesting that the screening effects underlying the increasing hydrophobicity with increasing number of layers [41] are not influenced by the SiC-graphene interface configuration and the presence of intercalated hydrogen, but is only dependent on the thickness, after the first carbon atomic layer. In fact, the wetting transparency responsible for the comparable wettability behavior between BfL and the substrate is observable only for the first carbon layer [41] and explains the higher energy dissipation of BfL, but it seems not to apply to QF1LG, as it shows substantially lower E dis values. Differently from thicker domains, the macroscopic wettability in the first carbon layer is influenced by both local substrate-carbon and carbon-water interfaces, and controlled in the buffer layer by the covalent character of the epitaxial bonding with SiC [43]. The absence of this type of interaction with the substrate, and the presence of defective intercalated hydrogen may explain the reduced adhesion hysteresis energy dissipation in QF1LG. By further decreasing the tip-sample separation and entering the hard-tapping regime, we start to observe scratching of the tip on the surface of the quasi free standing samples (respectively at A sp /A 0 <0.3 for QF1LG and<0.5 for QF2LG), making it impossible to extract clear j phy values. This is probably due to an abrupt transition to the contact regime and suggest a lower surface stiffness for these samples compared to EG.
Conclusion
In summary, we present a thorough study of the application of phase imaging to the analysis of the energy dissipation mechanisms in EG. In particular we study the different responses and interactions of the EG film to the tip oscillating over its surface. We explore the different mechanisms underlying the energy dissipative processes responsible for the emerging of the phase contrast in EG domains of different atomic thickness, and studied how they evolve in different imaging and environmental conditions, and oscillation regimes. We also sort out the effect and influence of different interaction forces on the evolution of the phase angle values. Finally, a comparison with the phase values extracted from quasi free standing graphene samples (both single layer and bilayer quasi free standing graphene) allows us to understand the influence of the SiC-carbon interface in the tipsurface interactions and dissipative processes.
Other experiments have been carried out on the same EG sample, varying different experimental parameters including A 0 and AFM tip, but the complex dependence of the phase shift on these and numerous other parameters and their combinations, makes it intrinsically complicated to be able to consistently control the tipsurface interactions and reproduce the data. The dependence of phase contrast on the aforementioned parameters is out of the descriptive intent of this paper. The inherent reproducibility issue is what prevents from using phase imaging as a quantitative method, and in particular, as an independent technique for assessing the exact number of atomic graphene layers. | 6,834.8 | 2020-03-17T00:00:00.000 | [
"Materials Science"
] |
Robust statistical inference for longitudinal data with nonignorable dropouts
In this paper, we propose robust statistical inference and variable selection method for generalized linear models that accommodate the outliers, nonignorable dropouts and within-subject correlations. The purpose of our study is threefold. First, we construct the robust and bias-corrected generalized estimating equations (GEEs) by combining the Mallows-type weights, Huber's score function and inverse probability weighting approaches to against the influence of outliers and account for nonignorable dropouts. Subsequently, the generalized method of moments is utilized to estimate the parameters in the nonignorable dropout propensity based on sufficient instrumental estimating equations. Second, in order to incorporate the within-subject correlations under an informative working correlation structure, we borrow the idea of quadratic inference function and hybrid-GEE to obtain the improved empirical likelihood procedures. The asymptotic properties of the proposed estimators and their confidence regions are derived. Third, the robust variable selection and algorithm are investigated. We evaluate the performance of proposed estimators through simulation and illustrate our method in an application to HIV-CD4 data.
Introduction
Longitudinal data arises frequently in medicine, population health, biological researches, economics, social sciences and so on. The observations are often collected repeatedly from every sampled subject at many time points, and thus intrinsically correlated. Our study is motivated by the following longitudinal data from the AIDS Clinical Trial Group 193A (ACTG 193A), which was a study of HIV-AIDS patients with advanced immune suppression. The data set can be accessed at http://www.hsph.harvard.edu/fitzmaur/ala/cd4.txt. For this HIV clinical trial, the CD4 cell count is of prime interest which decreases as HIV progresses. After the treatments were applied, the CD4 cell count was scheduled to be collected from each patient in every 8 weeks. However, because of adverse events, lowgrade toxic reactions, the desire to seek other therapies, death and some other reasons, the dropout rates of the first four follow-up times are 31.5%, 42.4%, 55.4% and 65.3%, respectively. Previous experiences from doctors indicate that a steep decline in the CD4 cell count indicates the disease progression, and patients with low CD4 cell counts are more likely to drop out from the scheduled study visits as compared to patients with normal CD4. Therefore, nonresponse of the CD4 cell count is likely related to itself and is nonignorable [1]. Our purpose is to examine whether the CD4 counts of young patients are more likely to decrease.
For the longitudinal data, a major aspect is how to take into account the within-subject correlation structure to improve estimation efficiency. A naive and simple way is to use a working independent model [2,3], but it ignores the correlation structure and may lose efficiency when strong correlations exist. In order to incorporate the within-subject correlation, Liang and Zeger [4] proposed generalized estimating equations (GEEs) through a working correlation matrix. However, two challenges related to GEEs have not been solved yet. First, it is difficult to describe and specify the underlying within-subject covariance matrix, which is unknown in practice and suffers from the positive-definiteness constraint. Second, the GEEs method is closely related to the weighted least squares method, which is not robust when the observed data contains outliers, such that the resulting estimators may be biased and their confidence regions may be greatly lengthened in the direction of the outliers [5]. To solve the first issue, Huang et al. [6], Bai et al. [7], Fu and Wang [8] approximated the covariance matrices with basis functions or possible dependence structures. Leung et al. [9] proposed a hybrid-GEE method that combines multiple GEEs based on different working correlation matrices. Leng et al. [10], Zhang and Leng [11], Zhang et al. [12] and Lv et al. [13] applied the Cholesky decomposition to obtain the within-subject covariance matrix. Li and Pan [14] and Leng and Zhang [15] constructed estimating functions by the quadratic inference function (QIF). Xu et al. [16] proposed a combined multiple likelihood estimating procedure based on three dynamic covariance models. Among these methods, the QIF and hybrid-GEE approaches have recently received considerable attention due to their flexibility and simplicity [17,18] . For the second challenge, the traditional robust M-estimation for longitudinal data have been discussed, by using the Mallowstype weights to downweight the effect of leverage points and adopting the Huber's score function on the Pearson residuals to dampen the effect of outliers; see He et al. [19], Wang et al. [20], Qin and Zhu [21], Qin et al. [22], Fan et al. [23], Zheng et al. [24] and so on.
Moreover, the CD4 cell count drops out of the study prior to the end of the study. It is well known that under nonignorable missing responses, the complete case analysis can not be trusted. The majority of existing methods only naturally accommodate ignorable dropout or missing at random (MAR; [25]) assumption. For the ACTG 193A data, however, the dropout is nonignorable or missing not at random (MNAR) such that developing valid methodologies for statistical analysis with nonignorable dropout is challenging [26]. In this case, the population parameters are, in general, not identifiable [27] if there is no assumption imposed, and estimators based on the assumption of ignorable dropouts may have large biases. Thus, methods which are very different from those for an ignorable propensity have to be applied; for example, see Molenberghs and Kenward [28] , Kim and Yu [29], Wang et al. [30], Shao and Wang [31], Bindele and Zhao [32] and so on. Furthermore, when the dimension of covariates is high, the generalized linear models (GLMs) often include many irrelevant covariates. In practice, it is useful to find which covariates are relevant for prediction, both for better interpretation of the model and for better efficiency of the estimators [33].
The existence of nonignorable dropouts and outliers and their impacts on the inference motivate us to find a robust and efficient estimation approach. To the best of our knowledge, this problem has not previously been investigated. Our contributions of this paper are in three aspects.
(1) We impose a parametric model on the dropout propensity and then use a nonresponse instrument, which is a useful covariate vector that can be excluded from the propensity given the response and other covariates, to deal with the identifiability issue [26,30]. The main technique is to create sufficient estimating equations to estimate the parametric propensity based on a given instrument. In specific, we apply the generalized method of moments (GMM; [34]) to estimate the propensity. (2) Once the propensity is estimated, we propose the improved robust and bias-corrected GEEs in the presence of outliers and nonignorable dropouts through the following three steps. First, in order to against the influence of outliers, the Huber's score function and Mallows-type weights are applied to build the robust GEEs. Second, the bias-corrected GEEs are constructed by the inverse propensity weighting (IPW; [35]) to account for nonignorable dropouts. Third, in conjunction with the quadratic inference function (QIF; [36]) and hybrid-GEE [9] methods, we construct two classes of improved estimators that can incorporate the within-subject correlations under an informative working correlation structure. In specific, the proposed QIF procedure is based on the matrix expansion idea, which neither assumes the exact knowledge of the within-subject covariance matrix nor estimates the parameters of the withinsubject covariance matrix. Alternatively, the hybrid-GEE method combines multiple GEEs based on different working correlation models to improve the estimation efficiency. The resulting two robust EL ratios are two different weighted sum of chi-square random variables asymptotically, which can be used to construct two corresponding confidence regions, respectively. (3) For the robust variable selection, we propose the penalized robust EL approach by combining the profile robust EL and the smoothly clipped absolute deviation (SCAD; [37]) methods together in Section 3. We show that the proposed variable selection method can efficiently select significant variables and estimate parameters simultaneously. Furthermore, the resulting estimators based on the QIF and hybrid-GEE methods are consistent and have the oracle property. In addition, an algorithm based on the local quadratic approximation (LQA) is proposed.
The rest of this paper is organized as follows. We propose the improved robust estimators and variable selection approaches based on the QIF and hybrid-GEE methods in Section 2. The asymptotic theories are investigated in Section 3. Simulation studies are given in Sections 4 and 5 analyzes the ACTG 193A data for illustration. Some discussions can be found in Section 6. All technical details and additional simulation results are provided in the Supplementary Material.
Robust GEEs
To illustrate our proposed methods, we first review the robust GEEs without missing data. Let y i = (y i1 , y i2 , . . . , y im i ) T be a m i dimensional vector of the ith subject's response which may drop out, x i = (x i1 , . . . , x im i ) T be an always observed (m i × p)-dimensional matrix of covariates associated with y i , i = 1, . . . , n and j = 1, . . . , m i . Here, m i is called as the size of the ith subject. We consider that y ij are modelled by where β is a p-dimensional parameter vector, g(·) is a known link function, μ ij = E(y ij ), φ is a dispersion parameter, v(·) is a known variance function and a T is the transpose of a. Without loss of generality, we consider the balanced data with the same subject size m i = m and the unbalanced situation will be discussed in Section 6 later. Motivated by Lv et al. [38] and Qin et al. [39], denote μ i = (μ i1 , . . . , μ im ) T and define the core robust estimating equations is a bounded function to downweight the influence of outliers, and the correction term C i = E{ψ(A −1/2 i (φ)(y i − μ i ))} is used to ensure Fisher consistency of the estimator. In specific, ψ(·) is the Huber's score function ψ(x) = min{c, max{−c, x}}, where the tuning constant c is used to balance the efficiency and robustness of the estimator. While, the weight w ij is considered based on the Mahalanobis distance [40] as follows: where b 0 is the 0.95 quantile of the chi-square distribution with the degree of freedom equal to the dimension of x ij , m x and S x are robust estimates of location and scale of x ij , such as minimum volume ellipsoid (MVE) estimates of Rousseeuw and van Zomeren [41], Lv et al. [38] and Qin et al. [39]. By using the Huber's score function and weight matrix, the influence of the outliers will be reduced.
Robust and bias-corrected GEEs
Motivated by the ACTG 193A data, let r i = (r i1 , r i2 , . . . , r im i ) T be the vector of response indicators, where r ij = 1 if y ij is observed and r ij = 0 if y ij , . . . , y im i are not observed, and our interest is to estimate the unknown β based on (y i , x i , r i ) for i = 1, . . . , n. To take into account the nonignorable dropouts, define π ij = Pr(r ij = 1 | x i , y i ). Subsequently, we propose the robust and bias-corrected GEEs as follows: where D i = ∂μ i /∂β, V i is the covariance matrix of (y i − μ i ) and S i = diag{r ij /π ij , j = 1, 2, . . . , m}. Moreover, it can be verified that the inverse of covariance matrix V −1 i can be decomposed as A with being an (m × m)-dimensional true correlation matrix. Unfortunately, is unknown in practice and one has to use a working correlation matrix R. Some common working correlation structures include independent structure, compound symmetry (CS) and first-order autoregressive (AR(1)). Then, the above equations can be written as To obtain the estimator of β, we first need to get the consistent estimators of π ij and φ, denoted asπ ij andφ, respectively. Using the identifiability and estimation approaches proposed in Wang et al. [26], we assume that x ij can be decomposed as two parts u ij and z ij , where u ij is continuously distributed and z ij can be continuous, discrete, or mixed. Denote T as the histories of y ij , u ij and z ij up to and including cycle j, respectively. According to Diggle and Kenward [42], we impose a parametric model on the propensity as follows: where is a vector of unknown parameters, is a known monotone function and r i0 is always defined to be 1. Given − y ij and − u ij , the instruments − z ij can be excluded from the nonresponse propensity, which are used to create sufficient estimation equations for estimating the propensity [26,30].
Under the model (2), . Similar to He et al. [19] and Wang et al. [20], a consistent estimator of φ can be obtained aŝ where β * is an initial estimate of β by using the MM method, i.e., a robust and efficient estimator in the presence of outliers [43,44].
Once we haveπ ij andφ, the unknown A , respectively. Then, we define the robust and unbiased estimating equations for β as follows:
Robust EL inference based on QIF and hybrid-GEE
In order to improve the efficiency, we borrow the matrix expansion idea of Qu et al. [36] and propose the quadratic inference function (QIF) by assuming that the inverse of the working correlation matrix R −1 can be approximated by a linear combination of several basis matrices, that is where B k ∈ R m×m , k = 1, 2, . . . , l, are symmetric basic matrices depending on the particular choice of R −1 and b 1 , . . . , b l are unknown coefficients. For example, if a working correlation structure is CS, then R −1 = b 1 B 1 + b 2 B 2 with an identity matrix B 1 and a symmetric matrix B 2 having 0 on the diagonal and 1 elsewhere. If R corresponds to AR(1), 3 with an identity matrix B 1 , a symmetric matrix B 2 having 1 on the sub-diagonal entries and 0 elsewhere, and a symmetric matrix B 3 having 1 in elements (1, 1) and (m, m), and 0 elsewhere. More details can be found in Qu et al. [36] and Cho and Qu [17]. Therefore, by substituting (5) into (4), it leads to Consequently, the equations (6) can be approximated as a linear combination of elementŝ g i (β) for i = 1, . . . , n, wherê . . .
Notice that the functionsĝ i (β) do not involve the parameters b 1 , . . . , b l , such that the estimation of these parameters is not required andĝ i (β) are the overdetermined estimating equations with pl functions. Thus, the EL method is proposed for the inference of β under some regular conditions. Let p i represent the probability weight allocated toĝ i (β) and the empirical log-likelihood ratio function is defined as follows: The maximum EL estimator based onĝ i (β), denoted asβ Q , can be obtained bŷ Alternatively, Liang and Zeger [4] considered R = R(α) with unknown nuisance parameter α and Leung et al. [9] proposed a hybrid method that combines multiple GEEs based on different and linearly independent choices of R(α), say R k (α), k = 1, 2, . . . , q, to improve the efficiency as follows: In practice, a few popular choices of R(α) may be used, i.e., CS and AR(1), and different values of α are specified by Leung et al. [9] to model the degree of intra-subject correlation. Similarly, the corresponding empirical log-likelihood ratio function is defined as follows: and the maximum EL estimator based onĥ i (β), denoted asβ H , is obtained bŷ
Variable selection
When the dimension of covariates is high, we attend to identify the zero coefficients consistently and estimate the nonzero coefficients efficiently and robustly. Therefore, we use penalized empirical likelihood (PEL) to do variable selection as follows: whereη can beĝ andĥ, p ν (·) is a penalty function with tuning parameter ν which can shrink small components of coefficients to zero. In this paper, we use the SCAD penalty, while other choices of penalty could possibly also be entertained. The first derivative of the SCAD is where a > 2 and ν > 0 are tuning parameters. As suggested by Fan and Li [37], a = 3.7 is recommended. To choose the optimal tuning parameter ν, three information criteria: BIC of Schwarz [45], BICC of Wang et al. [46] and EBIC of Chen and Chen [47], are considered as follows: where β ν is the estimate of β based on the QIF or hybrid-GEE methods with ν being the tuning parameter, and df ν is the number of nonzero coefficients in β ν .
Asymptotic theories
where β 0 and θ 0 are the true values of β and θ respectively.
Remark 3.1:
If π ij is known, we have H g = H h = 0, then the asymptotic covariance matrices ofβ Q andβ H can be simplified as
Remark 3.2:
By the plug-in method [48], we can obtain It can be proved thatˆ g andˆ g are consistent estimators of g and g , respectively. Once we haveˆ g andˆ g , the variance matrix g g T g can be estimated byˆ gˆ gˆ T g . Similarly, we can obtain the variance matrix estimator forβ H . Hence, Theorem 3.1 can be used to construct a normal-approximation-based confidence region.
As in Theorems 2 and 4 of Tang et al. [48] and many others, for simplicity we only need to study the asymptotic properties ofR Q (β 0 ) andR H (β 0 ). Compared to the standard empirical log-likelihood ratio without missing data, the main difference is that botĥ . . , n, are not independent and identically distributed. Hence, the asymptotic distributions ofR Q (β 0 ) andR H (β 0 ) may not be standard chi-squares. Actually, we will show thatR Q (β 0 ) andR H (β 0 ) are two different weighted sum of chi-square random variables asymptotically.
Remark 3.3:
In general, when the parameters are over-identified, one can define the empirical log-likelihood ratio functions [49]. For nonignorable missing data, however, the Wilks's theorem based on the EL does not hold any more [48]. Lemmas 1 and 2 in the Supplementary Material reveal the reasons why the limiting distributions ofR Q (β 0 ) andR H (β 0 ) do not follow the standard chi-square. Thus, it also can be shown that W Q (β 0 ) and W H (β 0 ) do not follow Wilks's phenomenon.
Remark 3.4:
When there are no missing data, we have g = g and h = h such that both −1 g g and −1 h h equal to the identity matrix, which makes the Wilks's theorem hold. This is the same as the result of Li and Pan [14]. Moreover, Theorem 3.2 can be used to test the hypothesis H 0 : β = β 0 and construct the confidence region for β 0 .
Along the lines of Rao and Scott [50], we have the following corollary.
Remark 3.5:
To construct the confidence regions ofβ Q according to Corollary 3.3, 100(1 − α)% confidence region forβ Q is give by Similarly, we can obtain the confidence regions ofβ H .
Let A = {j : β j0 = 0} be the set of nonzero components of true parameter vector β 0 with its cardinality as d = |A|. Without loss of generality, the parameter vector can be partitioned as two parts Through the Equation (9), we can perform variable selection and also produce robust estimators of the nonzero components.
T as the resulting penalized robust estimators based on the QIF and hybrid-GEE methods in (9), respectively. Here, the true parameter where (11) g and (11) h are dl × dl and dq × dq sub-matrices of g and h , (12) g and (12) h are dl × d and dq × d sub-matrices of g and h , (11) g and (11) h are dl × dl and dq × dq sub-matrices of g and h ,
Simulation studies
We conduct simulation studies to examine the finite-sample performance of the following estimators of β: (a) the proposed robust QIF estimators based onĝ i (β) in (7) and the robust hybrid-GEE estimators based onĥ i (β) in (8) with nonignorable dropout propensity π ij (θ j ) inŜ i and the GMM estimatorθ j obtained by (11). Computation details can be seen in (7) and (8) with T i (μ i (β)) = y i − μ i are also obtained, which are denoted as QIF AR (1) , QIF CS , Hybrid 0.4 and Hybrid 0.7 , respectively. (b) the robust MAR estimator (denoted as R-MAR) based on (5) with the ignorable dropout π ij (Υ j ) = π ij ( − x ij ,Υ j ) inŜ i and true correlation structure R = . Here, the ignorable dropout propensity Pr(r ij = 1 | r i(j−1) = 1, x ij ) is imposed by a parametric linear logistic regression and the GMM estimatorΥ j is obtained similarly by (11). While, the non-robust MAR estimator (denoted as MAR) is also obtained. (c) the robust full sample (denoted as R-FULL) estimator based on (5) with the true correlation structure R = and S i = I when there is no missing data, which is used as a gold standard. While, the non-robust full sample (denoted as FULL) estimator is also obtained.
We consider as in Normal errors.
If ξ 0 j is the true value of ξ j , it can be verified that E{s j (y i , x i , r i , ξ 0 j )} = 0. The efficient twostep GMM [34] estimator of ξ j iŝ To see the robustness of the proposed estimators, we consider the following three cases: (Case0) no contamination based on observed y ij and x ij ; (Case1) randomly choose 10% of observed y ij to be y ij + 10; (Case2) randomly choose 2% of observed x ij to be x ij + 2 and 10% of observed y ij to be y ij + 10.
To save space, Tables 1-2 only report the simulation results under multivariate t errors with the AR(1) and CS correlation structures and the rest of the results are provided in Tables S1-S4 of the Supplementary Material. A few conclusions can be drawn from the results: (1) When there are no outliers (Case0), the proposed robust QIF and hybrid-GEE estimates are almost the same as those non-robust QIF and hybrid-GEE estimates in terms of the biases, SDs, RMSEs and CPs. These results indicate that our proposed robust estimators seem to perform no worse than non-robust estimators by introducing the weight matrix and the Huber's score function, which control the degree of robustness and efficiency. While, the MAR estimates based on ignorable dropout have large biases and RMSEs under nonignorable missing models, which makes their CPs much lower. (2) In the presence of outliers (Case1 and Case2), the non-robust estimates can impact greatly in terms of the biases, SDs, RMSEs and CPs, while the proposed estimates based on the robust QIF and hybrid-GEE methods still perform well, which indicates that the influence of outliers in responses and/or covariates can be reduced. The biases and RMSEs of the MAR estimates become more larger due to nonignorable missing models and outliers. Among the two contaminated cases, the SDs and RMSEs of all estimates increase as the contamination level increases, but the proposed robust QIF and hybrid-GEE estimates have much smaller magnitudes of increase. (3) Among the two robust QIF estimators, it can be seen that the estimates R-QIF AR (1) have smaller or comparable RMSEs than these based on the estimates R-QIF CS , which indicates that the AR(1) structure best approximates the true correlation structure. Among the two robust hybrid-GEE estimators, the estimates R-Hybrid 0.4 have smaller RMSEs. These findings are consistent with our theoretical result that the choice of correlation matrix does not affect the consistence, but will affect the efficiency. (4) When n = 500, the SDs and RMSEs become smaller and the proposed four estimators have similar performance.
Misspecified propensity
In the second simulation, we investigate the performance of the proposed estimators when the propensity is misspecified. In specific, we consider the same settings as in the first simulation with normal and multivariate t errors, expect that Here, the unconditional dropout percentages for four time points are about 18%, 35%, 52% and 66% for j = 1, 2, 3, 4. Use the same method to estimate ξ j . In this case, the working propensity model is misspecified so that we can see the robustness of the proposed estimators. Simulation results under multivariate t errors with the AR(1) and CS correlation structure are presented in Tables 3-4. The results under normal errors with the AR(1) and CS correlation structure are given in Tables S5-S6 of the Supplementary Material. We have the similar results as in the first simulation. The proposed estimators have robust and efficient results, even the working dropout propensity model is wrong.
Variable selection
In the third simulation, we assess the finite sample performance of variable selection based on the proposed estimators with SCAD penalty in terms of model complexity (sparsity), Table 3. Biases, standard deviations (in parentheses), relative mean squared errors (RMSEs) and coverage probabilities (CPs) in the second simulation under multivariate t errors with the AR(1) correlation structure.
The iteration stops when solutions converge to a satisfying precision. For a prespecified value ζ , setβ j = 0 if |β j | < ζ and we apply the algorithm in Owen [5] to computeλ. Table 5 only reports the results for n = 200, 500 and p = 10, 20 with the AR(1) error based on three information criteria: BIC, BICC and EBIC, for selecting the tuning parameter ν. The rest of the results under the CS error are presented in Table S7 of the Supplementary Material. We obtain the mean square errors (MSE) defined by MSE(β) = (β − β) T (β − β). Columns 'C' and 'IC' are measures of model complexity, with 'C' representing the average number of nonzero coefficients correctly estimated to be nonzero, and 'IC' representing the average number of zero coefficients incorrectly estimated to be nonzero. The simulated results of the oracle model (i.e., the model using the true predictors) are also reported.
From Tables 5 and S7, it can be seen that: (1) the proposed variable selection methods can select all three true predictors and the average numbers of zero coefficients incorrectly estimated to be nonzero are close to zero in most of cases; (2) the simulated MSEs of the proposed methods based on the BIC, BICC and EBIC are close to that of the oracle EL, especially for larger sample sizes; (3) in terms of MSEs and ICs, it is interesting to note that the BIC and BICC have similar performance and the EBIC has the best performance in most of cases. These findings imply that the model selection results based on the proposed approaches are satisfactory and the selected models are very close to the true model; (4) based on these results, in practice we recommend to use the information criteria EBIC for selecting ν.
Application to HIV-CD4 data
For illustration, we apply the proposed methods to the AIDS Clinical Trial Group 193A described in Section 1. In this study, the CD4 counts were collected from 316 patients who took the daily regimen containing 600 mg of zidovudine plus 2.25 mg of zalcitabine. We consider the first four follow-up times, 8, 16, 24, 32, as four time points j = 1, 2, 3, 4, and use the CD4 counts in four time intervals, (4,12], (12,20], (20,28], (28,36], as the study variable y ij for j = 1, 2, 3, 4, because the realised follow-up time points might be a little different from the scheduled time points. A few patients had more than one measurement in one time interval, in which case we use the last record in that interval as y ij at time point j. Some patients returned to the study after they dropped out of the study. For simplicity, the measurements after they dropped out of the study are not used in the analysis. There are two continuous covariates: age (x ij1 ) and follow-up time (x ij2 ). We use the working propensity model (10) and (·) = [1 + exp(·)] −1 . Because of adverse events, low-grade toxic reactions, declining CD4 cell counts before the study end, the desire to seek other therapies, death and some other reasons, patients are willing to dropout. Hence, the follow-up times are treated as covariates u ij may affect the dropout. The ages are always observed and thus are used as instruments z ij .
The point estimates and their normal-approximation-based confidence intervals by bootstrap with replication size 300 are reported in Table 6. It can be seen that: (1) all proposed estimates of β 1 are statistically significant negative, which is reasonable since we have known that the number of CD4 counts of these patients keeps decreasing as time goes on and the trends become worse for those with lower CD4 counts. However, all the confidence intervals of the non-robust estimates include 0, which indicates the time effect on the number of CD4 counts is not significant; (2) the proposed estimates of β 2 are statistically significant positive, which indicates that patients with earlier ages infected by the HIV are more likely to have lower CD4 counts. On the contrary, all the confidence intervals of the non-robust estimates include 0 and show the infected ages effect on the number of CD4 counts is not significant; (3) the estimates based on the MNAR and complete-case (CC) assumptions are different in most of cases. Therefore, the ignorable dropout assumption Table 6. Estimates, standard errors (SE) and confidence intervals (CI) for the HIV-CD4 data based on the proposed methods when the instrument variable is age. MNAR CC is questionable; (4) compared with the non-robust estimates, our proposed estimates have smaller standard errors and shorter confidence intervals. A sensitivity analysis is implemented by choosing the follow-up times as the instrument z ij and the ages as the covariates u ij , respectively. The estimates and standard errors are reported in Table S8 in the Supplementary Material. It can be seen that our proposed estimates using z ij =age have much smaller standard errors, which also indicates that we should use the age as the instrument variable.
Discussion
In this paper, we first make use of the IPW method and an instrument variable to deal with the nonignorable dropout. To achieve the robustness against outliers, the Huber's score function and Mallows-type weights are then applied to build the robust and bias-corrected GEEs. Based on the QIF and hybrid-GEE, we construct two classes of improved estimators that can incorporate the within-subject correlations under an informative working correlation structure. The existing two-step GMM and EL approaches are used to obtain the proposed estimators. In addition, we propose the robust variable selection through the penalized robust EL by combining the profile robust EL and the SCAD methods together. The simulation results show that our proposed estimators perform well even when the working dropout propensity model is wrong; the proposed variable selection methods can efficiently select significant variables and estimate parameters simultaneously.
The above methods are presented with balanced data, that is, m i = m. In practice, longitudinal data may not be measured with the same subject size, and could be unbalanced due to experimental constraints. To configure the proposed methods for unbalanced data, we apply the transformation matrix to each subject. Similar in Zhou and Qu [51], we create the largest subject with a size m, which contains time points for all possible measurements, and assume that fully observed clusters contain m observations. We define the m × m i transformation matrix M i for the ith subject by removing the columns of the m × m identity matrix, where the removed columns correspond to unmeasured data/time points for the ith subject. Through the transformation,ĝ i (β) is replaced bŷ ⎞ ⎟ ⎠ .
Therefore, for unbalanced data with dropout, parameter estimation and variable selection also can be implemented using our proposed methods in Sections 2. We can show that the asymptotic results of Theorems 3.1, 3.2 and 3.4 still hold for unequal subject sizes using the similar way in the proofs of the theorems. When y ij is obtained under some treatments, baseline covariates measured prior to treatments, such as age group, gender, race, and education level, are related to the study variable but are unrelated with the propensity, when the study variable and other covariates are conditioned, and thus can be considered as instruments. A fully observed proxy or a mismeasured version of the missing response may be treated as an instrument. A sensitivity analysis of studying some or all different decompositions of x ij = (u ij , z ij ).
Several further problems need to be investigated. First, the dimension of the covariates in regression models is assumed to be fixed, thus it is interesting to extend our approach with diverging numbers of parameters [52]. Moreover, the efficiency can further be improved by incorporating population level information based on empirical likelihood approach. Second, in our simulations the constant c is fixed. Actually, if the errors are normally distributed and there is no contamination, the best choice of c is ∞; if the errors follow a heavy-tailed distribution, c should be chosen to be a small positive value. Thus, the choice of tuning constant c in Huber's score function may impact the estimation efficiency such that we may propose a Huber's score function with a data-dependent tuning constant as in Wang et al. [53] as follows. Onceφ is obtained, the data-dependent value c can be estimated bŷ τ (c) = n i=1 m j=1 I(|ê ij | ≤ c) 2 nm n i=1 m j=1 I(|ê ij | ≤ c)ψ 2 (ê ij ) + c 2 I(|ê ij | > c) with an initial estimate β * . Note thatτ (c) is not a continuous function of c, in practice, the optimal c can be determined byτ (c) in the range of c ∈ [0, 3] and we obtain the c value that makesτ (c) the largest. Third, we may consider the model with heterogeneous effects and apply concave penalty functions to subgroup analysis [54]. Fourth, the efficiency of proposed GMM estimatorξ j depends on the choice of s j (y i , x i , r i , ξ j ), which may not be optimal since we only use the first order moments of data, i.e., (1, − u T ij , − z T ij , − y T i(j−1) ) T . Other moments or characteristics of may provide more information and, hence, result in more efficient GMM estimators. | 8,299.2 | 2022-08-11T00:00:00.000 | [
"Mathematics"
] |
Numerical simulation of the unsteady cavitation behavior of an inducer blade cascade
One source of unsteadiness in turbopump inducers consists in a rotating cavitation behavior, characterized by different cavity shapes on the different blades, which leads to super-or subsynchronous disturbances. This phenomenon is simulated for the case of a simple two-dimensional blade cascade corresponding to a typical four-blade inducer. A numerical model of unsteady cavitating flows was adapted to take into account nonmatching connections and periodicity conditions. Single-channel and four-channel computations were performed, and in the latter case, nonsymetrical unstable flow patterns were obtained. Limits of stability according to the mass flow rate and the cavitation number are presented. Qualitative comparisons with experiments, instability criterion, and the mechanisms of instabilities are also investigated.
Numerical Simulation of the Unsteady Cavitation Behavior of an Inducer Blade Cascade
Olivier Coutier-Delgosha, * Yannick Courtot, † Florence Joussellin, ‡ and Jean-Luc Reboud § Institut National Polytechnique de Grenoble, 38041 Grenoble Cedex 9, France One source of unsteadiness in turbopump inducers consists in a rotating cavitation behavior, characterized by different cavity shapes on the different blades, which leads to super-or subsynchronous disturbances.This phenomenon is simulated for the case of a simple two-dimensional blade cascade corresponding to a typical fourblade inducer.A numerical model of unsteady cavitating flows was adapted to take into account nonmatching connections and periodicity conditions.Single-channel and four-channel computations were performed, and in the latter case, nonsymetrical unstable flow patterns were obtained.Limits of stability according to the mass flow rate and the cavitation number are presented.Qualitative comparisons with experiments, instability criterion, and the mechanisms of instabilities are also investigated.
I. Introduction
T O achieve operating at high rotational speed and low inlet pres- sure, rocket engine turbopumps are generally equipped with an axial inducer stage working in cavitating conditions (Fig. 1).Cavitation develops on suction side of the blades and at inducer periphery near the tip.Peripheral cavitation concerns all cavitating structures that appear near the shroud casing at the inducer inlet, as backflow of the pump 1 and cavitating tip vortices. 2The presence of vapor induces disturbances that can result in substantial performance losses.Moreover, when inlet pressure is decreased from cavitation inception to breakdown of the pump, unsteady phenomena may appear, associated with different blade cavitation patterns.Experimental results point out two main types of cavitation instabilities: a self-oscillation behavior of cavitation sheets, whose mechanism was studied in cavitation tunnels and analyzed in detail by many authors, 3−5 and a rotating cavitation behavior, mainly observed in inducers, which shows different sizes of cavitation structures in the different blade-to-blade passages of the machine and leading to super-or subsynchronous perturbations.
This last phenomenon strongly depends on the cavitation development in the machine.A typical example is given in Fig. 2: At cavitation inception, only a steady and balanced flow pattern with one short attached cavity on each blade is observed from flow visualizations.When the cavitation parameter is slightly decreased, a steady and alternate cavitating configuration appears (only on four blade inducers) with alternatively one short and one long cavity.For a lower cavitation parameter, 6 just above breakdown, an unsteady flow pattern called rotating cavitation can be identified: Unbalanced attached cavities are observed in the different channels, their distribution rotating faster than the inducer, 7,8 which leads to large radial loads on the shaft. 9Finally, near the breakdown of the inducer, a steady and balanced flow pattern with fully developed cavitation is observed.
These instabilities induce some strong radial forces that may perturb the rotor balance and important pressure fluctuations in the lines.They must be quantified and controlled to avoid any major effect on the global pump behavior.
Over the past few years, numerical models have been developed to predict the cavitation instabilities in inducers.They are based on stability analyses and linear approach and take into account the total flow rate variations through a cavitating blade-to-blade channel, 10,11 or calculate the flow around attached cavities. 12,13To improve the understanding and the prediction capability of cavitation instabilities, numerical and experimental analyses are carried out in France through collaborations between the Laboratoire des Ecoulements Géophysiques et Industriels, the Rocket Engine Division of Fig. 2 Cavitation patterns and performance evolution as the cavitation number decreases in four-blade inducer. 7ECMA Moteurs and the French space agency Centre National d'Etudes Spatiales.A two-dimensional model was developed 14−16 to simulate unsteady cavitation phenomena in inducers, such as pulsating cavities or vapor cloud shedding.The liquid/vapor mixture is considered in this model as a single fluid whose density varies from the liquid density to the vapor density, with respect to the local static pressure.The model has been validated on numerous cases, such as venturi-type sections or hydrofoils in cavitation tunnels, and the results demonstrated a good agreement with experiments.Its specificity is a reliable simulation of the cyclic behavior of self-oscillating cavities. 16,17n the present paper, the numerical model is applied to the simulation of the other instability observed in inducers, namely, the nonsymmetrical cavitation pattern.This first attempt is performed in a two-dimensional blade cascade corresponding to a four-blade inducer.The main objective was to take into account the interaction phenomena between the different blade-to-blade passages and to evaluate their effect on the unsteady behavior of the cavitation sheets on each blade.We focus in the present study on the ability of the numerical model to distinguish stable configurations from unstable ones, with a special attention paid to the mechanisms of instabilities.
The computational domain is the blade-to-blade geometry, that is, a (m, r θ) representation, where m is the meridian coordinate, r the radius (here constant), and θ the revolution angle.The twodimensional blade-to-blade channels were drawn by cutting the three-dimensional inducer geometry at constant radius R c equal to 80% of the tip radius R (Fig. 3).The computational grids of the four channels must be identical to ensure a precise detection of unstable cavitating behaviors.Indeed, different grids would induce spurious numerical errors that could be held responsible for the appearance of instability in the flowfield.Unfortunately, this condition cannot be imposed with a single mesh applied to the whole geometry because of the orthogonality of the cells that is required by the numerical model in the computational domain.Therefore, four identical separate grids are used for the four channels with three connections and one periodicity condition.It can be seen in Fig. 4 that it is quite impossible to obtain matching cells at these boundaries because of their curvature due to the high inclination of the blades in the (m, r θ) representation.Therefore, a new method to treat nonmatching boundaries is presented in this paper.It is based on an interpolation technique that guarantees, through its integration inside the algorithm, both mass and momentum conservation.
Computations were performed with different values of the cavitation number σ ,topoint out stable or unstable cavitating flows.Our present objective is to demonstrate the capability of the numerical model to simulate the mechanisms of nonsymmetrical flow arrangements.Results of the calculations are also compared to experimental observations performed previously at the Centre de Recherches et d'Essais de Machines Hydrauliques de Grenoble (CREMHyG) laboratory.
II. Physical Model
The cavitation model is based on a single-phase flow approach, which assumes that only one fluid is considered. 18This fluid is characterized by a density ρ that varies in the computational domain with respect to a state law.When the density in a cell equals the liquid density ρ l , all of this cell is occupied by liquid, and if it equals the vapor density ρ v , the cell is full of vapor.Between these two extreme values, the cell is occupied by a water/vapor mixture that we still consider as one single fluid.The void fraction α v = (ρ − ρ l )/(ρ v − ρ l ) can be defined as the local ratio of vapor contained in this mixture.If the cell is full of vapor, then α v = 1.However, if a cell is totally occupied by liquid, α v = 0.
Through this simple model, linking the void ratio α v to the state law, we implicitly treat the fluxes between the phases, without the supplementary assumptions required in the case of a two-phase model.
With regard to the momentum fluxes, our model assumes that locally (in each cell) velocities are the same for liquid and for vapor: In the mixture regions, gas structures are supposed to be perfectly carried along by the main flow.(The friction forces are high, compared to the buoyancy forces.)That hypothesis is often assessed for this problem of sheet-cavity flows, in which the interface is considered to be in dynamic equilibrium. 19,20The momentum transfers between the phases are, thus, directly linked to the mass transfers.
Vaporization and condensation processes are managed by a postulated barotropic state law that links the density to the local static pressure.The fluid is supposed to be purely liquid or purely vapor when the pressure is higher or lower than the vapor pressure, respectively.The two cases are joined smoothly in the vapor-pressure neighborhood.It results in the state law presented in Fig. 5, whose only parameter is the maximum slope 1/A 2 min , where A 2 min = ∂ p/∂ρ.A min can, thus, be interpreted as the minimum speed of sound in the mixture.Its calibration was performed in previous studies 15,16 in the case of unsteady self-oscillation behavior of sheet cavitation.The optimal value was found to be independent of the hydrodynamic conditions and is about 2 m/s for cold water, with p vap = 0.023 bar, and corresponds to p vap ≈ 0.06 bar (Fig. 5).The use of this state law implies that no delay to vaporization or condensation can be considered: Density directly depends on the pressure.Nevertheless, other models including this physical feature 20 have been also implemented and tested.If the vaporization/condensation terms are correctly tuned, that is, the delay parameters are optimized, then very similar results are obtained.In the present case, we try to simulate large-scale fluctuations of the whole cavitation sheets attached to the blades.To obtain numerically smooth cavities, we use a nondimensional ratio A min /(R c ) of 0.1.Rotation speed corresponding to a celerity of sound of 2 m/s is then lower than the experimental value.
The main numerical problem of our single-fluid approach results from the difficulty to manage different flow behaviors: incompressible flow in the areas containing pure liquid or pure vapor and a highly compressible flow in the areas of transition between liquid and vapor.
III. Numerical Model
To solve the time-dependant Reynolds-averaged Navier-Stokes equations associated with the barotropic state law presented earlier, the numerical code applies the SIMPLE algorithm on twodimensional structured curvilinear-orthogonal meshes, with modifications to take into account the cavitation process.The details of the numerical resolution were presented by Coutier-Delgosha et al. 16 It is based on an implicit method for the time discretization, and the HLPA nonoscillatory, second-order convection scheme proposed by Zhu. 21A complete validation of the method was performed, and the influence of the numerical parameters was widely investigated.The results are reported in Ref. 16.
A standard k-epsilon RNG model of turbulence is used.This model does not allow any simulation of the self-oscillation behavior of sheet cavitation, as reported by Coutier-Delgosha et al. 22 The effects of the two-phase mixture compressibility on the turbulence structure must be taken into account to simulate this phenomenon correctly.A modification of the standard model was proposed in Refs.15 and 16 to solve this problem efficiently.In the present study, only the rotating cavitation pattern is investigated.This is a first attempt to predict numerically the cavitating coupling between the channels, and so the interaction of its mechanisms with the self-oscillation of each cavity is not suitable.Thus, the standard turbulence model is applied.
IV. Connection of Nonmatching Boundaries
The boundary condition setting is based on two rows of dummy cells generated around the computational domain.Connections or periodicity conditions between two frontiers are obtained by transferring variables from inner cells to dummy cells.In matching boundary cases (identical grids on the two sides of the frontier, Fig. 6a), no interpolation is necessary, and the procedure involves no supplementary numerical error.In nonmatching boundary cases (Fig. 6b), special care must be paid to constrict the errors introduced by the interpolations and to respect the conservative character of the resolution.Indeed, the spurious generation of mass or momentum inside the domain, through connections or periodicities, would be very prejudicial to the rate of convergence.
We present here the general features of the process indicated in Fig. 6b.It consists in transferring the information from rows A1 and A2 to rows B1 and B2 and from rows A ′ 1 and A ′ 2torowsB ′ 1 and B ′ 2.
A. Geometry of Dummy Cells
In the case of connections or periodicity conditions, the shape of the dummy cells is of first importance because their width, their length, and their curvature strongly affect the computation.Any geometrical difference between the dummy cells and the inner corresponding ones would enhance the numerical errors by creating some spurious mass and momentum flow rates.Therefore, these cells must be as similar as possible to the cells of the original corresponding row.
B. Interpolations of Variables
All variables are transported from the computational domain to the dummy cells, that is, from A rows to B rows, through interpolations.The kind of interpolation depends on the transmitted variable.The transfer aims to ensure the conservation of both mass and momentum.In other words, the mass quantity passing through one of the two frontiers must be as close as possible to the mass quantity passing through the second one.The same conditions are required for the momentum fluxes.
1) The velocity component u (tangential to the frontier) and the pressure P are linearly interpolated.
2) The density ρ is transmitted so that the quantity of mass ρ S in each cell of the final row equals the sum of quantities ρ i S i in the cells or part of cells of the original row (Fig. 7).3) The transmission of the velocity component v (normal to the frontier) is a little more complex because it must satisfy the conservation of both mass and momentum.Because staggered grids are used, v is not located at the center of the cells, but on their northern and southern frontiers.As can be seen in Fig. 8a, v is transmitted from lines 3 ′ ,4 ′ to lines 1 ′ ,2 ′ and from lines 1, 2, 3 to lines 4, 5, 6.Note that v is calculated only on one of the two frontiers, here on line 1, and the result is transmitted to the other one, line 4.The procedure guarantees the equality of fluxes on these two lines, without altering the convergence rate.Unfortunately, mass and momentum conservations cannot be both obtained with only one variable v.T osolve this problem, we took advantage of the specificity of the pressure correction algorithm, which is based on two separated resolutions of momentum and continuity equations.
C. Integration in the Algorithm
The velocities must be transmitted each time they are modified inside the computational domain, that is, two times per iteration: after the resolution of the momentum equations (step 1) and after the velocity correction involved in the pressure-correction method (step 2).
We considered that the first step was based on equilibrium between the momentum fluxes and the pressure gradients.Thus, the equality of the momentum fluxes on the connected frontiers must be verified during this step, and a B transfer (Fig. 8b) is performed just before.However, the second step is based on equilibrium between mass fluxes.These have to be equal on the two frontiers during this step, and an A transfer is performed immediately before.
V. A pplication to a Two-Dimensional Blade Cascade
The numerical model presented was applied to the calculation of the cavitating behavior of a four-blade cascade representing a complete rocket engine turbopump inducer.The objective was the simulation of the four sheets of cavitation attached to the blades and their unsteady coupled behavior.
A. Single Channel Computation
First only one blade-to-blade channel was considered.The uniform flow velocity imposed at the mesh inlet is deduced from the flow rate, and a periodicity condition is applied between the two sides of the channel (Fig. 9), according to the procedure presented in the preceding section.A uniform static pressure is imposed at the domain outlet, far enough from the trailing edge to avoid any influence of the boundary condition on the pressure field around the blade.Possible effects of the lines upstream and downstream from the inducer on its cavitating behavior are not taken into account in this approach.
A 190 × 30 structured mesh is used (Fig. 10).A special contraction of the grid is applied in the expected cavitating areas, that is, around the leading edge in the axial direction and on the suction side of the blade in the transverse direction.Along the solid boundaries, the k-ε turbulence model is associated with standard laws of the wall, and so a grid contraction is applied on both sides of the blade to constrict y+ at the first grid point between 35 and 120 for the Reynolds value used.The study of the influence of the Reynolds number would need a remeshing near the blades surfaces and was not performed for that first application.
To characterize the flow around the blade, a slow decrease of the cavitation number is simulated.First, a stationary step is performed, which imposes the reference flow rate and an outlet static pressure high enough to avoid any presence of vapor in all of the computational domain.During the following time steps, this pressure is decreased progressively, and vapor appears on the blade suction side.Large time steps ( t = 0.1T ref ) are imposed during that slow transient, to minimize the time-dependent terms in the equations and to obtain quasi-steady flowfields.Figure 11 shows the flow characteristics obtained for a cavitation number equal to 0.1.It can be seen that the shape of the sheet of cavitation is directly governed by the pressure field.The obstruction generated by the cavity in the flow also clearly appears on the streamlines representation.This obstruction increases when the cavitation number is still lowered, and it finally results in the drop of the blade performance, that is, an increase of the pressure inlet in the present case.
The entire behavior is recapitulated in Fig. 12, which shows the evolution of the blade performance as the cavitation number is lowered from initial noncavitating conditions down to the final performance drop.It consists successively in a quite stable evolution (for σ>0.2),afirst pronounced drop at σ = 0.2, and then a sudden increase of the head (σ = 0.1) just before the final blockage.
Although a large time step was used, the chart strongly fluctuates, which indicates that the cavitating flow is fundamentally unstable.Moreover, some numerical instability is observed when reaching the final head drop: The flow rate is imposed strictly at the inlet and the cavitation blockage is, thus, especially abrupt.The simulation was also performed with t = 0.025T ref and t = 0.4T ref to investigate the influence of the time step.The more t is decreased, the more the sheet of cavitation fluctuates, but no noticeable difference can be observed for the mean performance evolution.The influence of the mesh on this mean performance was also tested 23 with a finer grid composed of 220 × 35 cells, and less than 2% of difference with the standard grid was obtained in cavitating conditions (σ<0.4).
The decrease/reincrease of the performance (0.025 <σ <0.1) was more closely investigated, to understand the mechanisms of this surprising behavior.Observation of the velocity fields indicates that this phenomenon is due the interaction between the attached sheet cavitation and the boundary layer on the blade: Figure 13 shows the modifications due to cavitation of the relative velocity W on the suction side of the blade.Velocities are reported in two sections located at midchord and at the trailing edge, respectively.Because of the blade orientation, the variations of W are almost consistent with the evolution of the tangential relative velocity W u .
The curves corresponding to σ = 0.12 show a diminution of W inside the boundary layer, and an increase of W outside, compared with very low cavitating conditions (σ = 0.4).The predominant effect at this stage is the increase of W in the main part of the section, which leads to the progressive decrease of the blade performance.For σ = 0.08, an opposite effect appears at midchord: W is increasing inside the boundary layer, while decreasing outside.Nevertheless, this effect is not strong enough to propagate to the trailing edge, where W is still globally increasing, which explains the important drop of the performance.For σ = 0.024, it can be observed that the earlier effect has propagated all along the blade, and it was also amplified: W has much decreased in the major part of the sections (leading to a better pressure rise), although it notably increased close to the blade.
The progressive growth of the sheet of cavitation is responsible for these variations of W. Figure 14a shows a scheme of this mechanism in the case of two consecutive blades, instead of the present periodicity condition.As σ is lowered, the cavity width expands and the fluid at the blade leading edge is pushed upstream.This fluid is then moved by a part of the main flow toward the bottom of the channel and highly accelerated.It induces a jet effect on the suction side of the adjacent blade, which constricts the boundary layer closer to the blade.The result is an increase of the velocity close to the wall and a decrease in the other part of the section.
When σ still decreases (Fig. 14b), this effect disappears because of the obstruction generated by the cavity: This explains the final performance drop.
B. Four-Blade Cascade Computation: Steady Behavior
Time-accurate computations are performed on the four-blade mesh composed of four connected earlier grids, that is, 28,000 cells.The periodicity condition is then applied between the fourth and first channels, as in the real runner.The time step was chosen to put emphasis on the investigation of the low-frequency fluctuations of the attached cavity, without being perturbed by the local unsteadiness in each cavitation sheet wake (cloud shedding): therefore, it is fixed equal to 1% of the blade passage time T ref .Thus, the self-oscillation behavior of the cavities is not simulated.Successive time-accurate computations are performed at fixed cavitation number and nominal flow rate coefficient.
First cases with quite high cavitation number (σ ≈ 0.175) lead to stable and symmetrical small cavities attached on each blade (Fig. 15a).The coupling between the four channels does not generate in that case any supplementary unsteady effect: The four cavities remain identical, and their shape becomes constant in time after the initial transient, as in the single-channel computation.This behavior is shown in Fig. 15b, which presents the time evolution of the cavity in the first channel: It completely stabilizes after the initial fluctuations.(This transient from t/T ref = 0t o1 0corresponds to the growing of the attached cavity from the noncavitating initial condition.)Figure 16 shows only small oscillations in the mass flow rate through each passage.Therefore, in this configuration the four-channel coupling has only a very small effect on the cavitation unsteady behavior.
C. Four-Blade Cascade Computation: Unsteady Behavior
When the cavity length is increased (σ = 0.15),anunsteady configuration spontaneously appears.cavities are then different on the successive blades: three large cavities and a little cavity are simulteously obtained.This unbalanced cavitating structure propagates from blade to blade in time.That phenomenon takes place spontaneously, only from the very small perturbations due to numerical errors of truncation.
Figure 19 shows the repartition of the mass flow rate in the cascade.After the initial transient, periodic fluctuations take place with different phases in the four channels.Their amplitude increases when the cavitation number is slightly decreased from 0.15 to 0.135.Visual observation (Fig. 17) and a fast Fourier transform analysis of cavity length fluctuations signal (Fig. 20) show that in the stator frame the phenomenon is about 50% faster than the inducer rotation speed.As observed in the experiments, this value slightly decreases with cavitation parameter.However, the numerical frequencies of that supersynchronous phenomenon remain about 25% larger than experimental values, as already observed in analytical models by Tsujimoto et al. 10 and Joussellin and de Bernardi. 11oralower cavitation parameter (σ = 0.08),astable configuration appears (Fig. 21), with alternate long and short cavities on the blades and different flow rates in the successive channels.Such a configuration is observed experimentally but at a cavitation number higher than the supersynchronous range (Fig. 2).
Figure 22 shows the cavitating performance computed by averaging the head coefficient obtained with the four-channel simulations at fixed cavitation number and nominal flow rate coefficient (points △).I nthe whole range of nonsymmetrical sheets of cavitation, the head coefficient is larger than the one obtained from the single-channel computation: the hollow in the chart is considerably reduced.This means that these special patterns of the flowfield (rotating supersynchronous cavitation and alternate cavitation) result in a diminution of the averaged losses in the blade cascade.
This effect can be interpreted on the basis of the analysis performed earlier in the case of the single-blade calculation.Indeed, rotating and alternate cavitation lead to the enlargement of some of the sheets of cavitation, compared to the stable configuration.The growth of these cavities pushes the fluid upstream at the leading edge.This fluid is then accelerated by the incoming flow, which results in the jet effect observed on the blade suction side in the adjacent channel (Sec.V.A).Thus, the flow in this channel is accelerated in the boundary layer, and decelerated above, which globally boosts the performance of the cascade.Nonsymmetrical flow patterns can, thus, be considered as a self-adaptation of the flow to reduce the losses in the blade-to-blade channels.
On the other hand, the final head drop is reached at higher σ because of the lack of symmetry: The biggest of the cavities are larger than the one obtained in the case of a single-grid computation at the same cavitation number.
D. Comparison with Experiments
Quantitative comparison between the three-dimensional inducer and the two-dimensional blade cascade is not directly available because of three-dimensional effects and the variation of the hub-toshroud ratio that is not taken into account.However, computations were performed at different operating conditions, by the varying of the cavitation number and the flow coefficient between 0.9 ref and 1.2 ref .The results are compared to experimental data in a qualitative way.The performance charts obtained are given in Fig. 23 with the limits of all nonsymmetrical flow arrangements, that is, rotating cavitation and alternate blade cavitation.For each flow rate, these limits are defined by two extreme values on a σ scale, respectively σ + and σ − .
In all cases, the final head drop is predicted by the model at a too high σ with respect to the experimental value (0.06 instead of 0.02).The cavitation parameter range of rotating cavitation σ = (σ + − σ − ) increases when the flow rate coefficient decreases.That result agrees with experimental observations performed by Pagnier et al. 8 and Yokata et al. 1 At reference flow coefficient φ ref , the experimental range of nonsymmetrical flow configurations is about σ exp = 0.07 (Fig. 2).Numerical results of σ are plotted with respect to the flow rate coefficient in Table 1: The comparison shows that the experimental range corresponds to the numerical result at φ/φ ref between 1 and 1.05.That result can be explained qualitatively by an important difference in inlet conditions between experiments and the numerical prediction: In the experiments, a partial obstruction of the inlet section due to a backflow area in the vicinity of the casing is observed.This particular flow pattern is shown in Fig. 24 (see Ref. 24), with S flow indicating a whole cross section, S back the size of the backflow, and S cav the area occupied by cavitation near the shroud.This phenomenon is not taken into account in the present simulations, which modifies the cascade inlet condition: In experiments at nominal flow rate the obstruction leads to a higher component C m of the velocity than in the simulation.For a higher simulated flow rate, inlet conditions are recovered, and σ is consistent with σ exp .
A comparison between the experimental breakdown chart at nominal flow rate and the one obtained by the four-channel computation is presented in Fig. 25.Although the present model, mainly because of the passage from three dimensional to two dimensional, cannot precisely quantify the location of breakdown and instability, a general correct agreement is obtained with experiments.
E. Instability Criterion
The evolution of the instability range according to the mass flow rate was investigated, to improve the understanding of the mechanisms that govern the onset and the conclusion of nonsymmetrical flow pattern.
Noted in Fig. 23 that the ending of instability occurs just at the beginning of the final drop of the cascade performance.Indeed, it has already been said (Sec.V.A) that this drop was due to the obstruction of the blade-to-blade channels by the sheets of cavitation.Thus, a critical size of the cavities could be directly responsible for the conclusion of the alternate blade cavitation.This assumption is confirmed in Fig. 26a, which presents the same six head drop charts as functions of the classical parameter σ/2α, which is usually considered as the key parameter to determine the length of the cavity in a given configuration.For all of the mass flow rates, instability vanishes for σ/2α ≈ 0.5/0.6.This value is fully consistent with the criterion obtained by Tsujimoto et al. 25 in the case of experimental results obtained with a three-blade inducer.
With regard to the onset of instability, this criterion cannot be applied, as can be seen in Fig. 26a.Indeed, according to the model, rotating cavitation appears earlier, that is, for a smaller length of cavitation sheets, when the mass flow rate decreases.This effect could be related to the increase of angle of attack, which enhances the flow separation at the blade leading edge and, thus, increases the obstruction generated by cavities, even for very small cavities.Thus, the inception of rotating cavitation would allow reducing the losses, as noted in Fig. 22.This additional influence of the flow incidence at the leading edge is confirmed by the instability map as a function of σ/α 3 (Fig. 26b).An almost constant value of σ/α 3 = 350 is obtained, which could be a criterion for instability inception in blade-to-blade calculations.
VI. Conclusions
We have presented two-dimensional computations performed on a four-blade cascade geometry representative of the behavior of a real three-dimensional inducer.This work implied the development of a nonmatching connection treatment, based on an interpolation process adapted to the SIMPLE algorithm.Attention was focused on the study of nonsymmetrical flow patterns that occur in inducers, that is, rotating supersynchronous cavitation and alternate cavitation.These two flow configurations were successfully predicted by the numerical model at several mass flow rate conditions.Comparisons between single-channel and four-channel computations revealed that nonsymmetrical flow patterns allow suppressing the hollow in the blade cascade performance before the final breakdown.The analysis of the velocity field on the blade suction side in the case of a single-blade computation suggests that this reduction of the losses could be linked to an important acceleration of the flow in the channels where the cavity is small.This jet effect modifies the boundary layer on the blade suction side, which globally leads to a decrease of the losses for the whole cascade.Instability criteria independent from the flow rate were found, for instability inception and conclusion.
This work is pursued to clarify the effects of rotation speed and fluid parameters and to assess the prediction capability of the model.We intend more particularly to take into account the mentioned three-dimensional effects.Therefore, a full threedimensional model is developed to predict cavitating flowfields in inducers.The final objective is to apply the numerical method presented in the present paper to three-dimensional computations, to predict the unsteady effects associated with cavitation in the real three-dimensional geometry.
NomenclatureA min = minimum speed of sound in the two-phase mixture, m/s C(C m , C u )= v elocity vector in fixed frame, m/s C p = dimensionless static pressure, ( p − p ref )/( 1 2 ρU 2 ) P = total pressure, P + 1 2 ρC 2 ,Pa P 1 , P 2 = total pressure at inlet and outlet, Pa p = local static pressure, Pa p ref , p vap = reference pressure (inlet pressure), vapor pressure, Pa p 1 , p 2 = static pressure at inlet and outlet, Pa r, R c , R = inducer radius, radius corresponding to the blade cascade, tip radius, m S = nondimensional surface of a grid cell S flow = cross section of the blade-to-blade channel, m 2 T ref = time corresponding to the passage of one blade in the fixed frame, s U = training velocity at inducer radius R c , R c , m/s W(W m , W u ) = relative velocity vector, m/s α =fl ow incidence at the blade leading edge, rad α v = local void fraction ρ l ,ρ v ,ρ = nondimensional density of the liquid, of the vapor, of the mixture ρ ref = reference density ρ l σ =c a vitation number, ( p 1 − p vap )/( 1 2 ρU 2 ) , ref =fl ow coefficient, C m /(U ), and reference flow coefficient
Fig. 6
Fig. 6 Information transfer between segments 1 and 2 in case of a) matching and b) nonmatching boundaries.
Fig. 12 Fig. 13
Fig. 12 Single-channel computation; cavitation characteristic of the cascade at φ = φ ref and associated length of the attached cavity (ratio 5:1 between horizontal and vertical scales).
Fig. 18 Fig. 19 Flow
Fig. 18 Cavity length evolution on the four blades: amplification of the unsteady coupling and phase shift (σ = 0.15).
Fig. 23 Fig. 24
Fig. 23 Effect of the flow rate on the cavitation characteristic and on the instability range: --, limit of nonsymmetrical arrangements and , experimental instability range Φ = Φ ref .
Fig. 25
Fig. 25 Comparison between the experimental performance chart and the result of the four-channel computation at nominal flow coefficient. | 7,936.6 | 2019-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
A richly interactive exploratory data analysis and visualization tool using electronic medical records
Background Electronic medical records (EMRs) contain vast amounts of data that is of great interest to physicians, clinical researchers, and medial policy makers. As the size, complexity, and accessibility of EMRs grow, the ability to extract meaningful information from them has become an increasingly important problem to solve. Methods We develop a standardized data analysis process to support cohort study with a focus on a particular disease. We use an interactive divide-and-conquer approach to classify patients into relatively uniform within each group. It is a repetitive process enabling the user to divide the data into homogeneous subsets that can be visually examined, compared, and refined. The final visualization was driven by the transformed data, and user feedback direct to the corresponding operators which completed the repetitive process. The output results are shown in a Sankey diagram-style timeline, which is a particular kind of flow diagram for showing factors’ states and transitions over time. Results This paper presented a visually rich, interactive web-based application, which could enable researchers to study any cohorts over time by using EMR data. The resulting visualizations help uncover hidden information in the data, compare differences between patient groups, determine critical factors that influence a particular disease, and help direct further analyses. We introduced and demonstrated this tool by using EMRs of 14,567 Chronic Kidney Disease (CKD) patients. Conclusions We developed a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. By using CKD as a model of disease, it was assembled by automated correlational analysis and human-curated visual evaluation. The visualization methods such as Sankey diagram can reveal useful knowledge about the particular disease cohort and the trajectories of the disease over time.
Background
Electronic medical records (EMRs) are now widespread and collecting vast amounts of data about patients and metadata about how healthcare is delivered. These small datacenters have the potential to enable a range of health quality improvements that would not be possible with paper-based records [1]. However, the large amounts of data inside EMRs come with one large problem: how to condense the data so that is easily understandable to a human. The volume, variety and veracity of clinical data present a real challenge for non-technical users such as physicians and researchers who wish to view the data. Without a way to quickly summarize the data in a human-understandable way, the insights contained within EMRs will remain locked inside.
Many EMRs are also not flexible enough to accommodate the information needs of different types of users. For instance, clinicians often try to combine data from different information systems in order to piece together an accurate context for the medical problems of the patient who is in the room with them. Clinical researchers, however, may be primarily interested in finding population level outcomes or differences between cohorts. Administrators use EMR data to inform healthcare policy, while patients who use EMRs may be interested in comparing their health to their peers or tracking their own health over time [2]. Unfortunately, little support exists in current EMR systems for any of these common use cases, which hampers informed decisionmaking.
Visual analytics, also known as data visualization, holds the potential to address the information overload that is becoming more and more prevalent. Visual analytics is the science of analytical reasoning facilitated by advanced interactive visual interfaces [3,4]. It can play a fundamental role in all IT-enabled healthcare transformation but particularly in healthcare delivery process improvement. Interactive visual approaches are valuable as they move beyond traditional static reports and indicators to mapping, exploration, discovery, and sense-making of complex data. Visual analytics techniques combine concepts from data mining, machine learning, human computing interaction, and human cognition. In healthcare, data visualization has already been used in the areas of patient education, symptom evolution, patient cohort analysis, EHR data and design, and patient care plans. This enables decision makers to obtain ideas for care process data, see patterns, spot trends, and identify outliers, all of which aid user comprehension, memory, and decision making [5].
Our objective is to create a visually interactive exploratory data analysis tool that can be used to graphically show disease-disease associations over time. That is, the tool presents how a cohort of patients with one chronic disease may go on to develop other diseases over time. The study used chronic kidney disease (CKD) as the prototype chronic disease, users could easily change the software tool to visualize a different disease. In the previous study, we have verified that such a system can significantly raise the efficiency and performance of practicing physicians and clinical researchers who desire to use EMRs for their research projects [6,7]. Expected cohort trajectories are of great interest in clinical research. Our main task, then, will be to identify underlying chronic diseases and explore what happens over time to the being diagnosed patients and what comorbidities they develop over time.
System design
The system is designed based on data transformations that are required to perform longitudinal cohort studies. The transformed data are connected by a sequence of adjustable operators. The output results are shown in a Sankey diagram-style timeline, which is a particular kind of flow diagram for showing factors' states and transitions over time. The visualization is driven by the transformed data, and the user feedback is directed to the corresponding operators, and completing the iterative process.
Data transformation
The data transformation steps behind the visual analysis process are illustrated in Fig. 1. The transformation order follows the analysis process from raw patient records to the final visualization. Assume that there are N patients and M unique factors. As the top-most chart shows, the raw sequence of a patient can be treated as a discrete trajectory with non-uniformly distributed records along the time axis. We define the patient trajectories as P = {p 1 , …, p n , …, p N } and the set of factors as F = {f 1 , …, f m , …, f M }. A patient trajectory is an ordered sequence of K n records: p n ¼ r n;1 …; r n;k ; …; r n;k n À Á , where each record consists of a factor set and a timestamp: r n,k = (F n,k , t n,k ), F n,k ⊂ F. Note the timestamp of each record is relative and not necessarily the actual record date. In the cohort study, we are interested in the temporal and populational patterns on the course of CKD. Therefore, it makes more sense to align each patient trajectory by their days before and after being diagnosed with CKD.
When the user specifies the time windows: T = (t 1 , …, t l , …, t L ), the patient trajectories are partitioned based on their timestamps. Records in the same time window are merged into one: The end results are patient trajectories regulated in time, P 0 n ¼ r 0 n;1 ; …; r 0 n;l ; …; r 0 n;L n , where the timestamps are regulated by the time windows, and each record's factor set represents all the factors observed on that patient within the time window. When the user requests for patient clustering, the patients at each time window are clustered based on a certain similarity measure and become a set of cohorts: C l ¼ c l;1; …; c l;h ; …; c l;H l È É , where C l ⊂ Ρ and it represents a set of H i cohorts at time window t l .
We define the cohort trajectory network as G = (V, E), where each node V = (v l,h |v l,h = c l,h ) represents a cohort at a time window, and each edge Ε = {e l,i,j |v l,i → v l + 1,j , |c l,i ∩ c l,j | > 0} represents the association between two cohorts at consecutive time windows where their members overlap. The network G is used to drive the visualization in the end of the process.
Data & control flow
As shown in Fig. 2, data flows through a sequence of operators, which are adjustable and associated with different interactions by the user. The interaction workflow is designed from the user's point of view, and it implements.
Once the user specifies the important factors for the study, the system scans the raw patient trajectories record by record, filters, and aggregates the factors accordingly. Similarly, the time windows defined by the user also changes the way the system partitions and aggregates the trajectories over time. The two operators, cluster nodes and filter edges, implement multiple techniques to support the analysis tasks of finding cohort and filtering associations, respectively. It is important to note that there is no once- Fig. 1 Data transformation processes. The data transformation steps behind the visual analysis process followed the analysis process from the raw patient records to the final visualization and-for-all operation for any analysis task. Each cluster or filter operator has its strengths and its limitations, these are the reasons why we should be carefully employed.
(1) Frequency-based Cohort Clustering: Frequencybased clustering allows one to follow one's basic intuition to see the "main idea" of data. Cohorts with higher cardinalities are preserved while minor ones are considered less important and merged. Our system allows the user to specify a threshold x for the cardinality, and it merges cohorts of sizes less than the threshold into the "others" group.
(2) Hierarchical Cohort Clustering: Given a time window, each patient is characterized by the comorbidity of factors within the window. We consider the similarity between two unique comorbidities as the set relation of their factors. For example, two sets of factors {f 1 } and {f 1 , f 2 } are partially overlapped by the common factor f 1 . In consideration of such similarity, we apply hierarchical clustering to extract cohorts with similar comorbidities. The resulting clusters are hierarchical and the user can specify the desired number of clusters. With more clusters one is able to describe the characteristics of each cohort more accurately, but more clusters introduce more nodes, more associations, and thus higher visual complexity. On the other hand, fewer clusters create less visual complexity at the expense of potentially overlooking some essential but smaller structures.
Given the set of factors: s i = F ' i,l at a time window t l for a patient p i , we define the similarity between two patients with the Ochiai coefficient [8], which is a variation of cosine similarity between sets: Variance-based Association Filtering: The importance of an association lies in how confident we are able to make an inference from it. We can extract the statistically important associations by ranking and filtering their variances. Our system demonstrates this capability by adopting one particular type of variance, which is defined as the outcome entropy of the associated cohort. Such entropy can be calculated by the conditional probabilities of the different outcomes of the given cohort: We can see that the entropy is minimized when the patients in a cohort at the current time window all go to another cohort at the next window. In contrast, it is maximized when the probabilities of patients who go to other cohorts are uniformly distributed. Our system allows filtering important associations by adjusting the entropy threshold. When the threshold is high, all associations are shown in spite of their variance; in the extreme case when the threshold is zero, only the associations of zero entropy will be displayed; in other words, it only visualizes the associations between fully overlapped cohorts.
Visualization design
Our system visualizes the cohort trajectories network model that we discussed in the previous section and summarizes it. The user can use it to assess important features such as cohort comorbidity, cohort distributions, and their associations across time windows, etc. We design the visual encoding and the optimization strategies in a way to maximize the legibility of the presentation.
(1) Visual Encoding: We encode the dimensions of the visual space similarly to OutFlow, where the x-axis encodes the time information and the y-axis is used for laying out the categories (comorbidities) [9]. We also visualize the associations between the cohorts as ribbons.
The visualization must convey the characteristics of both the cohorts and the associations. It is common to encode cardinality to the nodes and edges [10,11] as such information allows the user to assess the frequency-based distribution. Our system encodes cardinality as the nodes' or edges' height. Each cohort is labeled to show its dominant characteristics. It lists the common factors shared by all patients in this group. If there are factors not shared by the entire group, we indicate it by appending an asterisk to the label. In addition, we map colors to unique comorbidities and assign each node its corresponding color. The edge color is determined by the two nodes it connects, and we use gradients for smooth transitions.
The visual encoding of our system is tailored for the CKD cohort study; however, it can be easily changed to display other relevant information. For example, instead of showing the cardinality, the edge can encode other statistical measurements that reveal set relations [12].
(2) Optimization: The overlap between cohorts could be complex and thus increase the number of edges as well as the number of edge crossings. It could impact the legibility of the visualization. Since the y-axis is nominal and the ordering between the categories is flexible, we can arrange the node's vertical positions to reduce the amount of crisscrossing and thus resolve visual clutter.
The algorithm we apply to minimize edge crossing is modified from an existing library and is a heuristic iterative relaxation method [13]. The algorithm sweeps back and forth along the x-axis and adjusts the node vertical positions based on two objectives: (1) minimize the edge length, and (2) resolve node overlaps. It utilizes simulated annealing, so the process ends in a predictable time. The result is an approximation but the algorithm allows us to get reasonable results in an interactive rate.
In addition, the z-ordering (front to back of the screen) of the edges should be considered as well in order to maximize legibility [11]. We choose to place smaller edges on top of the larger ones to reveal the outliers.
Interaction methods
The system interface consists of two views: trajectory view and summary view. The trajectory view is timebased and displays an overview of patient trajectories that the user can interact directly with. It also highlights the trajectories of selected patients. Summary view presents the characteristics of the selected patient group. For example, it shows the distributions of gender, age, and factors, etc. It is also interactive and provides additional functions such as querying by patient metadata information.
Most data items (patients, factors, etc.) in the system are selectable, and the system automatically searches for related items and highlights such associations with visual links. For example, the user can select a cluster of patients by clicking on a node or an edge in the trajectory view. The patients selected are highlighted as red regions in each node and link. The highlighted regions also encode the cardinality as heights so it shows the proportion of the patients selected comparing to others. In the meantime, the highlighted edges reveal the paths traveled by the selected patients. In addition, the user can also select a factor, and all patients having this factor will be highlighted. This enables the user to observe the global distribution of a particular factor.
Pilot study Data sources
The original data source for this paper is from Taiwan's National Health Insurance Research Database (NHIRD), a longitudinal database which contains International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes for disease identification as well as procedure codes. The database contains health information for one million people over 13 years (1998-2011). We extracted 14,567 CKD patients who had eleven common comorbidities.
Preparing to visualize clinical data involves a series of logical steps [4,14]. The first step in the data visualization process is selecting the patient cohorts. Figure 3 shows the visualization with only 17 observed factors. The x-axis Fig. 3 Time course of 14,567 CKD patients clustered by comorbidities. 14,567 CKD patients clustered according to comorbidities on the timeline. The x-axis showed the timeline covering 12 years before and after each patient who got a diagnosis of CKD, while the y-axis presents the clusters of trajectories for each CKD patient shows the patients' conditions over the timeline before and after getting CKD diagnosis; the y-axis presents the arrangement of trajectories for each CKD patient, which they were aggregated together with the same comorbidity clusters. However, the outcome of visualization was too difficult to interpret and understand. The tool would be more useful for users if we could provide selection and aggregation function to associate their target patient groups.
Another challenge in the cohort identification process is to standardize the large diversity and inhomogeneity of comorbidities in the database [4]. Due to those highdimensional data, such as electronic medical records, would lower the homogeneity between data items, we used a divide-and-conquer approach to classify patients into relatively uniform within each groups. Figure 4 shows an overview of this process.
Factors
A factor is a general term used to describe a single criterion that is used to separate patients into cohorts. The factors are derived from diseases and procedures and are the fundamental elements that characterize a patient in our system. In the CKD cohort study, there are tens of thousands diseases and procedure codes that one could use to separate CKD patients. Defining the right set of factors is not a trivial task because including unnecessary factors that are either redundant or irrelevant to the analysis objectives increases the computational cost as well as jeopardizes the interpretability of the visualization. Our system is flexible enough to allow the user to define a set of factors by selecting independent ICD-9 codes or aggregating correlated ones based on the user's domain knowledge. In this study, we worked with nephrologists to define 17 related criteria that users can visually explore concerning chronic kidney disease ( Table 1). The 17 factors represent the most related diseases and procedures that follow a diagnosis CKD.
Time windows
Visualizing EMR data over time also requires the ability to change the granularity of the x-axis (time). For example, in CKD there are several stages in its natural history. Within each stage, CKD can be relatively stable, but there is inhomogeneity between CKD patients at different stages. Therefore, we use time windows to refer the time duration or interval (i.e., 1-month, 1-year, 2-year…etc.) To decide on a time granularity is a manual process that is often best judged by humans [3]. The results are patient trajectories partitioned over time, which accentuates the differences between cohorts.
Patient groups
While patient comorbidities within each time window are expected to be stable, comorbidities are not stable over the entire population across all time windows.
Our system handles this problem by using clustering methods which make clear the underlying comorbidity distributions within each patient group. The end results are cohorts that have reliable distributions of comorbidities.
Visual examination
Once the time windows are defined and the cohorts are extracted, the quality of the visualization can be evaluated by examining the associations between cohorts. For instance, the user might want to examine how cohorts merge or diverge over time. Our system reveals not just associations that would otherwise be impossible a person to notice, but also allows users to interact with the underlying data immediately to facilitate "what-if" scenarios. Sometimes, however, the quantity or variance of the associations could be large and thus lead to visual clutter problems. Therefore, our system also allows the user to rank and filter the associations based on their statistical importance. This way, the user can limit themselves to exploring visual changes that are also in fact statistically significant both visually and mathematically. At any step of the visual analysis process, the user can go back and change the settings for factors, time windows, patient clustering, and comorbidity association filtering. For example, if the user wants to explore the temporal patterns in finer detail and examine if there are local and short-term patterns, the user can add more time windows to the context; on the other hand, if two or more stages exhibit indistinguishable patterns, the user might want to merge those time windows as they do not convey extra information. The user can also change the Fig. 4 Classifying patients into uniform cohorts. The flow showed an overview of the data analysis process in this study. The visual analysis process was based on CKD research dataset parameters to refine how patients are grouped or how associations are filtered. This iterative process continues until the user obtain a result that he/she is satisfied with.
We use the CKD as a model chronic disease to demonstrate the analysis process, but the process can be applied to the study of other diseases as well. For example, if the user wants to study the clinical trajectories of diabetics, the user can define a list of factors related to diabetes. Then the user can apply the same process to set up time windows, cluster patients, and explore cohort trajectories.
Ethical approval
This type of study was not required the Institutional Review Board review in accordance with the policy of National Health Research Institutes which provides the large computerized de-identified data (http://nhird.nhri.org.tw/en/).
Exploring cohort structures
In this study, we build an exploratory data analysis tool that depicts the trajectories of 14,567 CKD patients' comorbidities over time. We partition the records into multiple 2-year time windows. Researchers often have different factors-of-interest for different windows of CKD. In the pre-CKD stage, they are interested in common diseases such as hypertension, diabetes; for end-stage CKD factors, they are interested in critical procedures such as dialysis, renal transplantation, or patient death. We filter the factors of interest according to each time window.
Since there are too many comorbidities to visualize clearly as shown in Fig. 3, we apply frequency-based cohort clustering to extract the dominant cohorts. As Fig. 5 shows, the trajectories are simplified where larger cohorts are kept and smaller ones are merged into a single "others" group (light green for others without CKD and light orange for others with CKD). From the overviews, we can learn about the prevalence of different comorbidities and their proportions in the population. For example, we can see from Fig. 5 that the number of patients with a single disease such as hypertension (HTN) (brown) and diabetes (DM) (dark blue) shrinks as the time approaches year 0, which means that patients start to exhibit other diseases. The user can lower the threshold to reveal smaller sized cohorts as shown in Fig. 6.
Exploring associated relationships
Another goal of exploratory data analysis is to uncover unexpected associations between two variables. In this study, we demonstrate the exploring associations between hemodialysis (HD) in early stages of CKD and other diseases and procedures. More specifically, we want to identify the driving factors that may lead to hemodialysis and the downstream consequences.
First, we divide CKD patients according to CKD severity: (1) pre-CKD: before the patient's first CKD diagnosis, (2) first year-of-CKD, and (3) post-CKD: a year after the patient's first CKD diagnosis. Second, we filter CKD patients according to pre-determined criteria that nephrologists determined to be clinically important. For the first-year-of-CKD stage, we focus on which patients will go on to require hemodialysis; for the post-CKD stage, we watch other common diseases and procedures related to CKD patients: Death, peritoneal dialysis (PD), and renal transplant (RTPL); for the pre-CKD stage, we watch all 17 diseases/procedures. As a result, there are 835 unique combinations at the pre-CKD stage, two at the first-year-of-CKD stage and nine at the post-CKD stage.
Since there are only a total of 11 CKD/disease or CKD/ procedure combinations for first-year-of-CKD stage and the post-CKD patients, we can visualize their clinical courses without any simplification processes. However, there are too many combinations at the pre-CKD stage to be visualized directly. For simplicity, we first group them into one single cluster and focus on the last two time windows. As Fig. 7a shows, we find that 70.2 % of the patients who took hemodialysis in the first year of CKD did not develop any other diseases or procedures related to CKD, while the rest of them either required peritoneal (PD) or renal transplantation (RTPL), or died. Some of the patients who were not on hemodialysis in the first year also died; however, the mortality rate seems lower. We also notice that more than half of the patients who didn't require hemodialysis in the first year are not associated with any of post-CKD factors of interest. This means they were either in stable condition after the first year or their following treatments were not recorded.
To see stronger associations between the pre-CKD factors and HD during the first-year-of-CKD stage, we must filter out the associations that are not helpful. For example, if a group of similar patients are associated with both "CKD" and "CKD|HD" clusters, it's hard to tell whether this combination of factors will lead to hemodialysis or not. We can rule out all those unconfident associations by filtering according to the variance of their associations. We set a strict threshold 0.0 for the variance so that the association is kept only when it is 100 % confident. After the filtering, 32.6 % of the 835 unique combinations are taken out because their associations with the first-year-of-CKD stage are not confident. Figure 7b shows that the remaining associations only covers 17.4 % of the population. This means the pre-defined 17 factors might not be good explanatory variables to discriminate patients taking or not taking hemodialysis in the first year of CKD.
Next, we perform hierarchical clustering on the patients at the pre-CKD stage and generate ten groups of similar patients, as shown in Fig. 7c. Note there are three groups labeled "*", which seems confusing at first as they could have been merged into one group. In fact, the three groups have different factor distributions. They are labeled "*" because none of the groups have a common factor shared by all members in the group. To avoid confusion, the user can assign a custom label to describe the nature of the group. When we select and highlight the group who has a common factor of systemic lupus erythematosus (SLE), we find that none of them required more serious procedures such as renal transplantation or died. Figure 7d is a zoomin view showing the structure of the selected "SLE,*" group. We also notice that the proportion of patients requiring hemodialysis in the first year of CKD in the "SLE,*" group (3.14 %) is one-third of the proportion in the entire population (9.54 %).
Responsiveness
Our system is a web-based (http://sankey.ic-hit.net/) and is tested with a commodity desktop machine (CPU: 2.66 GHz Quad-Core, Memory: 8GB 1066 MHz DDR3) as the application server and another desktop machine as the client. Most of the back end programs are written in Python, and the front end programs are written in Javascript and HTML5.
The system caches the transformed data after each operation in the data control flow (as shown in Fig. 2) to reduce unnecessary processing time and improve user end responsiveness. There are four major types of user interactions: defining factors, partitioning time windows, merging patients and filtering associations. The first two interactions usually happen at the beginning of a study and occasionally happen in major revisions. On the other hand, the rest of two interaction types are much more frequent in the analysis process. Caching the less frequently updating results helps us reduce unnecessary processing time.
We measure the time elapsed for each process using the system timer. For 14,567 patients and 6,031,579 records, it takes 6 min to filter and aggregate factors of the entire data set, and 25 s to partition the data set into three time windows. However, such operations are taken only a few times throughout the analysis and thus do not require immediate response. More frequently performed operations
Discussion
We present a system to visually analyze the comorbidities associated with CKD by using a large-scale database containing 14,567 patients. We visualize the results using a Sankey diagram to help practicing physicians and clinical researchers investigate the outcome of this complex disease based on comorbidities or procedures that these patients have.
Building a visually interactive exploratory data analysis tool is not without several challenges. First, direct visualization of all the patients can easily lead to overplotting. Second, in this dataset, there exists tens of thousands of risk factors pertinent to CKD patients. It is not apparent how to best discriminate and visualize these factors to bring out structures of interest in the data. After c To perform hierarchical clustering on the patients at the pre-CKD stage and generate ten groups of similar patients. d When we highlighted the group who had a common factor of systemic lupus erythematosus (SLE), we found that none of them took the more serious procedures such as renal transplantation or died. Note: There are three groups labelled "*" because of the groups have no common factor shared by all members in the group all, one of the main goals of data visualization is to bring out unexpected patterns in the data, which is best achieved by unsupervised machine learning methods. Figure 3 shows an unfiltered visualization of the CKD and 17 associated comorbidities and procedures. You may find out that the visualization is too complex to comprehend. It would be useful to select, aggregate, and visualize factors associated with patient groups. We have developed an interactive visualization system to support such operations.
Temporal visualization
Time-series information are traditionally of particular interest when analyzing EMRs. [15] Much prior work has suggested presenting patient history longitudinally [16][17][18]. Real world data usually has prohibitively high visual complexity due to its high dimensionality or high variance. Thus, several simplification methods have been proposed. Bui et al. suggested using folder as well as non-linear spacing [19]. In the V-model project, Park et al. compressed the causality relationship along a linear timescale to an ordinal representation to carry more contextual information of the event [20]. In addition to abstracting time to use the horizontal screen real estate more efficiently, there are methods to save the vertical real estate. Bade et al. implemented a level-of-detail technique that presents data in five different forms based on its source and the row height available [21]. Our method simplifies the visual complexity of patient trajectories by aggregating records over time, clustering patients and filtering associations between cohorts.
Query-based visual analytics
In many real world cases, the user can narrow down the scope and reduce the complexity of the data by querying based on his or her domain knowledge. Systems of this kind allow the user to specify the pattern of interest and can enhance the analytic process with advanced interfaces [22,23]. However, it is not always easy to translate an analysis task into proper queries [24]. For temporal event queries, Wang et al. proposed an interactive system to support querying with higher level semantics such as precursor, co-occurring, and aftereffect events [25]. Their system outputs visual-oriented summary information to show the prevalence of the events as well as to allow comparison between multiple groups of events [26]. For overview specific tasks, Wongsuphasawat et al. proposed LifeFlow, a novel visualization that simplifies and aggregates temporal event sequences into a tree-based visual summary [27]. Monroe et al. improved the usability of the system by integrating interval-based events and developing a set of user-driven simplification techniques in conjunction with a metric for measuring visual complexity [13,28]. Wongsuphasawat et al. also extended LifeFlow into a Sankey diagram-based visualization, which reveals the alternative paths of events and helps the user understand the evolution of patient symptoms and other related factors [29]..
In spite of their effectiveness in guided or well-informed analysis, query-based systems fall short for exploratory analysis where the user may not have a well-defined hypothesis and simply wants to explore and learn the data.
Exploring inhomogeneous data
High-dimensional data items are less homogeneous and harder to compare with each other. It is harder to associate, rank, or filter those items meaningfully. Some have proposed that data be sliced and diced by dimension or item and separated into homogeneous subsets [4]. It has been proven that, by carefully selecting projection methods, a system can incorporate multiple heterogeneous genetic data and identify meaningful clusters of patients [30]. Our work is an example of the slice-and-dice concept, where we partition the record time into multiple dimensions and group patients within each time window.
We would like to investigate the possibility of using more sophisticated feature extraction methods in future work. In this case, we define the factors by hand with domain knowledge and group the patients based on the factors by a simple set similarity metric or a frequencybased metric. However, the combinations of factors are noisy and the variance within each cluster are usually high. Furthermore, there are still thousands of unused factors that may provide additional insights. Such problem could potentially be addressed with the help of correspondence analysis.
More optimizations can also be made to enhance the visual rendering of information as well. First, for conveying the association between the clusters, in this work we only visualize the cardinality of the association and filter them by variance. There are other measures of proportionality available which can help evaluate the association of comorbidities [31]. We would like to study each method's role and effectiveness by conducting different analysis tasks. Second, for conveying and comparing the nature of each cluster, in this work we only present such information as text that shows the dominant factors of the cluster and indicate uncertainty. However, the underlying differences are non-binary and high-dimensional. Getting the system to effectively extract and present the subtle differences between the clusters could be the key to improving visual pattern depiction.
Finally, it is possible to improve the computational performance by parallel data processing. Some of the steps in the analysis process are easily parallelizable while others, such as patient clustering, are not. We also intend to investigate more advanced database structures for efficient data management. | 8,036 | 2015-11-12T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Local magnitude estimate at Mt . Etna
In order to verify the duration magnitude MD we calculated local magnitude ML values of 288 earthquakes occurring from October 2002 to April 2003 at Mt. Etna. The analysis was computed at three digital stations of the permanent seismic network of Istituto Nazionale di Geofisica e Vulcanologia of Catania, using the relationship ML = logA+alog∆−b, where A is maximum half-amplitude of the horizontal component of the seismic recording measured in mm and the term «+alog∆−b» takes the place of the term «− logA0» of Richter relationship. In particular, a = 0.15 for ∆<200 km, b=0.16 for ∆<200 km. Duration magnitude MD values, moment magnitude MW values and other local magnitude values were compared. Differences between ML and MD were obtained for the strong seismic swarms occurring on October 27, during the onset of 2002-2003 Mt. Etna eruption, characterized by a high earthquake rate, with very strong events (seismogram results clipped in amplitude on drum recorder trace) and high level of volcanic tremor, which not permit us to estimate the duration of the earthquakes correctly. ML and MD relationships were related and therefore a new relationship for MD is proposed. Cumulative strain release calculated after the eruption using ML values is about 1.75E+06 J higher than the one calculated using MD values. Mailing address: Dr. Salvatore D’Amico, Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, Piazza Roma 2, 95123 Catania, Italy; e-mail<EMAIL_ADDRESS>
Introduction
In order to know the size of an earthquake without considering the produced effects, Richter proposed the definition of magnitude and related it to the maximum amplitude of the ground displacement.
The «local magnitude» ML (Richter, 1935), is defined with the relationship where A is the maximum amplitude peak to peak measured in mm, recorded by a standard Wood-Anderson seismometer with natural period of 0.8 s, magnification 2800 and damping factor 0.8.The quantity «−logA0» is defined empirically with respect to a reference earthquake, which describes the variation of maximum amplitude (A) of the event related to the epicentre distance (∆).Geometric spreading, elastic attenuation and scattering of seismic waves, therefore, influence the amplitude decay.Richter fixed A0 (∆) level at 1 µm for a distance of 100 km.Later, to evaluate magnitude in a more practical approach, principally when the recording of strong earthquakes is clipped in amplitude, empiric relationships were developed using the duration of the seismic event by Solov'ev (1965), Tsumara (1967) and many other authors.
In the last twenty years, earthquake magnitudes were always estimated at Mt. Etna volcano with the duration of the seismic event using appropriate relationships.Caltabiano et al. (1986), used the Serra Pizzuta Calvarina (ESP) station of Permanent Seismic Network run by Istituto Internazionale di Vulcanologia (IIV) of the CNR of Catania, for the following relationship: . . .log log M 1 367 2 068 0 212 where τ is the duration time of the event in seconds and ∆ is hypocentre distance in km.These authors studied a dataset of 70 earthquakes with hypocentre distance within 11 km for an extremely local relationship.The difference between P-and S-waves arrival time was used to estimate the hypocentre distance.Istituto Nazionale di Geofisica e Vulcanologia (IN-GV) supplied the reference magnitude.
Later, Cardaci and Privitera (1996) introduced a new relationship to calculate duration magnitude for the permanent seismic network of IIV, based on methodology proposed by Real and Teng (1973).The dataset analysed by the authors was composed of 198 earthquakes recorded between 1990 and 1994, the reference magnitude, supplied by Istituto Nazionale di Geofisica e Vulcanologia, is comprised between 2.0 and 3.5 and was estimated on stations far from Mt. Etna using the duration of event recording.As routine, the magnitude of earthquakes recorded by the Permanent Seismic Network of (INGV-CT) is calculated using the duration of the seismic event recorded on a drum recorder using the relationship of Caltabiano et al. (1986).The reference station was ESP until 1999 and thereafter EMA.
Magnitude of Mt. Etna earthquakes
Usually, when an event is «truncated» by the occurrence of another seismic event, the duration is estimated by amplitude decay.
Recent seismic swarms, which occurred during the opening of the eruptive fractures of the last eruptions (2001 and 2002-2003), were characterized by a high earthquake rate, with very strong events (seismogram results clipped in amplitude on drum recorder trace) and high level of volcanic tremor.Figure 1 duration of each earthquake and the related magnitude.In order to verify the duration magnitude calculated for the earthquakes of 2002-2003 Mt.Etna eruption, we worked to simulate a Wood-Anderson seismometer and then computed local magnitude with the relationship (Lahr, 1999) where A is maximum half-amplitude of the horizontal component of the seismic recording measured in mm and the term «+alog ∆ − b» takes the place of the term «−logA0» of Richter relationship.In particular, a=0.15 for ∆<200 km, b = 0.16 for ∆< 200 km.The approximation for this parametric form is smaller than 0.2 in comparison with correction values for source-receiver distance from Richter's table (Di Grazia et al., 2001).∆ is hypocentre distance in km and is calculated by the relationship where D is epicentre distance in km, H is the depth of the earthquake in km b.s.l. and Q is the altitude of the station in km a.s.l.
The 2002-2003 Mt. Etna eruption
In the night between October 26 and 27, 2002 a seismic swarm occurred in the central upper part of Mt.Etna.It was the start of a new eruption of Etna that formed fissures on both the NE and S flanks of the volcano.On October 27 eruptive fissures opened on the higher flank of the volcano produced high fire fountains, evolving into ash columns (Calvari et al., 2004).On October 29, numerous tectonic structures on the eastern flank of the volcano were activated through seismic swarms, causing serious damage to S. Venerina village and in the neighbouring areas on Mt.Etna's eastern flank (Azzaro and Mostaccio, 2003;Azzaro and Scarfì, 2003).
The eruption gave rise to a huge lava emission from both fracture fields and powerful explosive activity from the southern one.After 94 days the eruption ended on January 28, 2003.
Much of the seismicity occurred during the first day of the eruption, while the remarkable clusters of earthquakes on the southeastern flank are largely related to the 29 October seismic crisis.
An overall number of 862 earthquakes (MD≥ ≥1) were recorded by the permanent seismic network run by INGV-CT.Maximum magnitude observed was 4.4 and 56 earthquakes exceeded M D = 3.0.
Data analysis
The dataset used in this work is composed of 288 earthquakes occurring between October 2002 and April 2003 (fig.2).The magnitude (MD) of these earthquakes is between 1.0 and 4.4.The focal depths of the earthquakes are concentrated within the uppermost 5 km below sea level (b.s.l.).
To study the relationship for the local magnitude we used the digital stations ESP, EMV and EMG.The first is equipped with a Lennartz LE-3D/20s seismometer; the others (EMV and EMG) are equipped with Lennartz LE-3D/1s sensors.The former is a broadband seismometer with corner frequency ω0 = 0.05 Hz (20 s), output voltage k = 1000 V/m/s and damping h = =0.707; the LE-3D/1s have a corner frequency ω0 = 1.00 Hz, output voltage k = 400 V/m/s and damping h = 0.707.The errors on the epicentre and hypocentre coordinates are smaller than 2 km.
Methodology
For each selected earthquake (fig.3a), we calculated a Discrete Fourier Transformation (DFT) on the horizontal components of the seismic record (fig.3b).The velocity response curve of the seismometer (fig.3c) is defined by the relationship where k is the sensitivity of the transducer in V/m/s, ω is the angular frequency, ω 0 is the natural period, and h is the damping.This kind of Analytic locations of the earthquakes were performed by HYPOELLIPSE routine (Lahr, 1999), using a onedimensional VP velocity model with 7 plane-parallel layers (Hirn et al., 1991) as described in table I. sensor has no calibration coil and it is very difficult to know the real technical parameters, for this reason we used the data reported in the factory datasheet.Velocity response curve was transformed in displacement response curve (fig.3d), by multiplying it for the angular velocity ω (Bath, 1974) before correcting the velocity spectrum.
Multiplying the response curve of a standard Wood-Anderson seismometer (fig.3e), with static magnification 2800, damping 0.8 and natural period 0.8 s (Richter, 1935), with the correct displacement spectrum (fig.3f) we obtained signal of fig.3g.The simulated Wood-Anderson seismogram (fig.3h) was obtained with DFT inverse of fig.3g.
We used the software DaDisp 4.0 to analyse the digital signals on the whole seismic record.The half-amplitude A, used to obtain the magnitude, was calculated as a mean of N-S and E-W components.Urhammer and Collins (1990), verified that the 2800 static magnification is determined by the manufacturer and it is not the static magnification determined from measurements of the natural period and tilt-sensitivity.
They suggest using the value of approximately 2080.Assuming a static magnification of 2800 (as has been common practice, e.g., Bakun et al., 1978;Kanamori and Jennings, 1978;Luco, 1982;Del Pezzo and Petrosino, 2001) will lead to a systematic over estimation of ML by an average of 0.13 ML units (Urhammer and Collins, 1990).
We decided to apply the 2800 static magnification value to this work because it is more widely used in common practice and the calculated ML is comparable with ML calculated from stations of different networks.A linear regression between MD and ML values was performed, excluding the October 26 and 27 dataset.The relationship obtained is .
Comparison with duration magnitude
. M M 0 6668 1 008 with coefficient of variation of the regression R 2 = 0.7737.
Figure 5 shows the magnitude ML (at ESP station) with respect to time origin.Earthquakes with higher magnitude were recorded during the opening of the eruptive fractures (October 26 and 27, grey squares), while the earthquakes with the smaller magnitude were recorded only at the end of the eruption.
We may assume that the estimate of the magnitude MD on October 26 and 27 is not perfectly correct.In fact, as mentioned above, when there are many earthquakes in a short time, or when there is higher amplitude of the volcanic tremor, it is more difficult to read the real duration of the earthquake.Moreover, the strongest earthquakes recorded on the drum recorder, show clipped amplitude (see fig. 1).For these reasons it is very difficult to estimate the amplitude decay.This amplitude saturation infers only the trace drawn by the drum recorder pen and it does not affect the digitally recorded signal.
This theory is in agreement with fig.6, where the difference MD − ML (at ESP station) with respect to time origin is shown.The graph again highlights MD higher than ML except for earthquakes on October 26 and 27, 2002.Figure 8a,b shows velocity and Wood-Anderson simulated traces of two seismic events.The signal-noise ratio at EMV station is smaller than other two stations, both during the eruptive phase (fig.8a), with a high level of volcanic tremor, and at the end of the eruption (fig.8b).
We think that EMV values are affected by site-effect overestimating magnitude values with respect to ESP and EMG values.This siteeffect is more evident for earthquakes with M L >1.5, while for the other earthquakes the signal-noise ratio at EMV station is very small and it is not possible to estimate the maximum amplitude peak to peak clearly.
Following Gasperini (2002), we proceeded to a calibration of a new relation of M D on the basis of the dataset of ML magnitudes excluding the October 26 and 27 earthquakes, and of the duration values.In our analysis we computed the coefficients of a linear regression, equivalent to the Caltabiano et al. (1986) with coefficient of variation of the regression R 2 = 0.770.
The linear regression computed is nevertheless affected by errors on duration estimate.In order to reduce the errors in the coefficients we performed another linear regression excluding data with residuals obtained from previous re- with coefficients of variation of the regression R 2 = 0.873.The magnitude (Mbb) is calculated using the Richter relationship (1935) as a mean value from the horizontal components of the AIO and VAE stations (MEDNET, 2003).
Comparison with magnitude values calculated using different methods
The moment magnitude (M W) is calculated with Kanamori's relationship (Lay and Wallace, 1995) .
A good agreement is shown between ML values and new MD values.Moreover, we observe that, with a few exceptions, Mbb values agree with ML values, and MW values agree with ML values for ML > 2.7.
Strain release
Figure 10 shows the cumulative strain release (Joule 1/2 ) calculated by MD (thin line) and by ML values (thick line).The strain release value of the earthquake was obtained as the square root of the energy E in Erg, which is estimated with Richter's relationship ( 1958) where M is the magnitude.
In the figure two periods are highlighted: the former (dark grey) indicates the seismic swarm occurring on October 27, corresponding to the opening of the eruptive fractures, while the latter (light grey) represents the eruption beginning on October 28 and ending on January 27.
A marked difference of cumulative strain release between MD and ML series is observable during the seismic sequence of October 27, due to the underestimated M D values as seen in figs.4 and 6.In the later period, where it is easier to estimate the duration of the earthquakes, the strain release values are comparable.After the eruption the overall difference of strain release calculated from ML values and from MD values is about 1.75E+06 J 1/2 .
Conclusions
The aim of this work was to estimate local magnitude by simulating a Wood-Anderson seismometer and compare the results with different magnitude scales.
ML values calculated at ESP and EMG are absolutely coherent, while EMV values are overestimated by 0.5.
ML values compared with Mbb and MW values show a good correlation.
MD values seem to be overestimated.Although it is difficult obtaining reliable MD values with this dataset for the strongest earthquakes, corresponding to the opening of the eruptive fracture a try to recognise a relationship between ML and MD was performed.A new duration-magnitude scale is proposed.
The dataset used and the M L values calculated are reported in the Appendix.
In conclusion, it is remarkable that in environments with high seismic noise, such as Mt.Etna volcano, the magnitude estimates based on the measurement of the ground amplitude are more reliable and that some care must be taken in using a magnitude scale based on coda duration for low values of magnitude when the noise level is high.As the correct estimate of seismic parameters is important for a quantitative evaluation of volcano dynamics, the Wood-Anderson magnitude scale should be routinely determined together with duration-magnitude in volcano monitoring.A software program to reach these objectives is in preparation at this time.
At present the Mt.Etna Permanent Seismic Network, of the Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania (INGV-CT), consists of 30 stations.The seismic signals are acquired continuously and are transmitted via radio to Centro Acquisizione Dati Sismici (CADS) of the INGV-CT where they are digitally saved with a sampling rate of 125 Hz.
Fig. 2 .
Fig. 2. Mt.Etna map.The grey squares indicate the stations used to calculate the local magnitude (ML) or the duration magnitude (MD).The earthquakes epicentres are indicated with crosses; earthquakes occurring on October 26 and 27, 2002 are indicated with asterisks.
Fig.
Fig. 3a-h.The DaDisp worksheet used to simulate a seismic signal recorded by standard Wood-Anderson seismometer.a) Velocity seismic signal recorded by geophone; b) DFT of velocity seismic signal; c) velocity response curve of geophone; d) displacement response curve of geophone; e) Wood-Anderson response curve; f) velocity spectrum divided by displacement response curve; g) corrected spectrum multiplied by Wood-Anderson response curve; h) Wood-Anderson simulated seismic signal.
Figure 4
Figure 4 shows the magnitude values ML calculated at ESP compared with the corresponding magnitude values MD.The grey squares indicate earthquakes occurring between October 26 and 27.MD values are overestimated respect to ML and are scattered for about 1.0 units on M D axis below ML = 2.2.This data scattering is due to error on estimate of earthquake duration.Moreover, grey squares dataset shows a different trend with respect to white squares dataset.
Fig. 4 .
Fig. 4. Comparison between MD and ML at ESP station.The grey squares indicate the earthquakes occurring from October 26 at 00:27 GMT and October 27; the thick black line of linear regression and relative equation refers to white squares dataset.
Fig. 6 .
Fig. 6.Difference MD−ML (at ESP station) with respect to time origin.The grey squares indicate the earthquakes occurring on October 26 at 00:27 GMT and October 27.
Fig. 5 .
Fig. 5. ML calculated at ESP station with respect to time origin.The grey squares indicate the earthquakes occurring on October 26 at 00:27 GMT and October 27.
Figure 7 compares the M L values, at EMG and EMV, with ML values at ESP; there is a good agreement between ESP and EMG values; the EMV values have a good agreement above ML> > 1.5, but are overestimated by about 0.5.
Fig. 7 .
Fig. 7. Relation between ML estimated at EMG station (white), at EMV station (grey) and at ESP station.
Fig. 9 .
Fig. 9. Relation between Moment magnitude MW (white squares), local magnitude (Mbb) at broadband stations of MedNet (black squares), duration magnitude MD (grey triangles) calculated with the new relation and local magnitude ML at ESP station.
Figure 9 plots
Figure 9 plots MD values obtained from the new Duration-Magnitude scale (grey triangles) versus M L values.We also compared the local magnitude values estimated by the broadband stations of MedNet seismic network located in Sicily (black squares) and the moment magnitude estimated by seismic moment (white squares).The magnitude (Mbb) is calculated using the Richter relationship (1935) as a mean value from the horizontal components of theAIO and VAE stations (MEDNET, 2003).
Fig. 10 .
Fig. 10.Cumulative energy strain release in the period October 14, 2002-April 5, 2003.Thin line represents strain release calculated by MD values, while thick line represents strain release calculated by ML values.Dark grey area highlights October 27 day and light grey area the eruptive period (October 28, 2002-January 27, 2003).
Table I .
One-dimensional VP velocity model.
formula, of ML with both logτ and log∆ as independent variables.This gives Appendix.Dataset used for the analysis and ML values. | 4,267.2 | 2005-12-25T00:00:00.000 | [
"Geology"
] |
The S100 Family Heterodimer, MRP-8/14, Binds with High Affinity to Heparin and Heparan Sulfate Glycosaminoglycans on Endothelial Cells*
The S100 family proteins MRP-8 (S100A8) and MRP-14 (S100A9) form a heterodimer that is abundantly expressed in neutrophils, monocytes, and some secretory epithelia. In inflamed tissues, the MRP-8/14 complex is deposited onto the endothelium of venules associated with extravasating leukocytes. To explore the receptor interactions of MRP-8/14, we use a model system in which the purified MRP-8/14 complex binds to the cell surface of an endothelial cell line, HMEC-1. This interaction is me-diated by the MRP-14 subunit and is mirrored by recombinant MRP-14 alone. The cell surface binding of MRP-14 was blocked by heparin, heparan sulfate, and chondroitin sulfate B, and the binding sites were sensitive to heparinase I and trypsin treatment but not to chondroitinase ABC. Furthermore MRP-8/14 and MRP-14 did not bind to a glycosaminoglycan-minus cell line. MRP-14 has a high affinity for heparin ( K d (cid:1) 6.1 (cid:2) 3.4 n M ), and this interac- tion mimicked that with the endothelial cells. We therefore conclude that the MRP-8/14 complex binds to
The S100 proteins are a family of small (10 -14 kDa) calciumbinding proteins (1,2). The majority of the S100 genes are tightly clustered together on chromosome 1q21 in man and chromosome 3 in the mouse, but the individual proteins are expressed in distinctive cell types. Generally, the functions of S100 proteins are poorly characterized. However, there is increasing evidence that some S100 proteins have extracellular activities, particularly in the immune response. Several S100 proteins have been reported to act as chemoattractants with potencies in the 10 Ϫ10 -10 Ϫ13 M range. Thus, S100L (S100A2) from the lung acts as a chemoattractant for eosinophils (3); psoriasin (S100A7) acts as a chemoattractant for neutrophils and CD4ϩ T lymphocytes (4); murine MRP-8 (CP-10; S100A8) acts as a chemoattractant for myeloid cells (5); human MRP-8 acts as a chemoattractant for peridontal ligament cells (6); and S100A12 (ENRAGE) acts as a chemoattractant for human monocytes (7) and neutrophils (8).
Because the S100 proteins appear to have extracellular functions, there has been an interest in the nature of the receptors for these proteins and the downstream events that they might induce. The chemoattractant effects of two S100 proteins, S100L and CP-10, are sensitive to pertussis toxin, suggesting a receptor interaction linked to small G proteins (3,9). The proinflammatory protein S100A12 binds to the receptor for advanced glycation end products (RAGE) 1 (7). RAGE is a scavenger-type receptor belonging to the immunoglobulin superfamily that signals to the NFB pathway following ligation. In addition to S100A12, it also binds advanced glycation end products, amyloid fibrils, and amphoterin (reviewed in Ref. 10). Recently S100B and S100A1 have also been shown to bind RAGE (11); thus, it has been speculated that RAGE may be a general receptor for the S100 family of proteins.
The MRP proteins MRP-14 (S100A9) and MRP-8 (S100A8) are expressed by myeloid cells and some secretory epithelium (12). In myeloid cells, MRP-8 and MRP-14 form a heterodimer that constitutes 45% of the cytosolic protein in neutrophils and 1% in monocytes (13). Determining the function of these proteins has been difficult, particularly because their abundance has lead to a propensity to contaminate functional assays. Recently, the MRP-8/14 heterodimer isolated from keratinocytes (14) and myeloid cells (14 -16) has been demonstrated to bind to a class of unsaturated fatty acids, including arachidonic acid. MRP-8/14 has been reported to aid uptake of arachidonic acid by binding CD36 (17), which is now recognized as a fatty acid transporter protein (18).
Immunohistochemical studies have localized MRP-8/14 to venules associated with extravasating myeloid cells (19). In this study we show that, in human inflammatory disease, the source of the MRP proteins is not the endothelium but the associated myeloid cells. Therefore, we have sought to identify the molecules to which MRP-8/14 binds on endothelium. Our findings suggest that the primary binding partner is not a protein receptor but a sulfated glycosaminoglycan structure.
Monoclonal antibodies (mAbs) 1H9, 1F5, and 6F5, specific for human MRP-14, and mAb 7C12, specific for human MRP-8, were generated as a result of mouse immunization with human rMRP-14 and rMRP-8, respectively. The mAbs were characterized by Western blotting and enzyme-linked immunosorbent assay with the assistance of Jane Steele (Imperial Cancer Research Fund Central Cell Services). The FabЈ preparations were made by the Immunopure Fab preparation kit (Pierce) according to the manufacturer's instructions. The monospecific rabbit anti-MRP-14 and MRP-8 sera have previously been described (21). mAb 10E4, specific for heparan sulfate, was from Seikagaku Corp., and mAb CS-56, specific for chondroitin sulfate, was from Sigma. CD45 mAb, clone 2B11/PD7/26 was from DAKO. The anti-CD36 mAb, CLBIVC7, was donated by Dr. Ian Dransfield (University of Edinburgh), and the rabbit anti-RAGE by was donated by Dr. Ann-Marie Schmidt (New York).
The human CD36 construct in pcDNA3.1 was a kind gift from Dr. Maria Febbraio (Cornell University). The human RAGE cDNA was generously donated by Dr. Igor Bronstein (University of York), and Dr. Paula Stanley (Imperial Cancer Research Fund) inserted it into pIRES2-EGFP and pEGFP-N2 (both from CLONTECH).
All glycosaminoglycan and modified heparin preparations were purchased from Sigma, with the exception of [ 3 H]heparin (PerkinElmer Life Sciences). These preparations were made up at 1 mg/ml in HBSS (without Ca 2ϩ or Mg 2ϩ ) buffered with 10 mM HEPES, pH 7.4 (H-HBSS), just prior to use in the assay, except hyaluronan, which was solubilized in 0.3 M Na 2 HPO 4 , pH 5, at 50°C.
Cell Culture-The human microvascular endothelial cell line, HMEC-1, (22) was generously donated by Dr. R. Bicknell (Imperial Cancer Research Fund) with the permission of Dr. T. Lawley and maintained by culturing on gelatin-coated flasks in Dulbecco's modified Eagle's medium (Sigma) supplemented with 10% fetal calf serum (Bioclear), 10 ng/ml epidermal growth factor (Sigma), and 1 mg/ml hydrocortisone (Sigma). Before use the cells were cultured on tissue culture plastic for one passage, removed with 0.5 mM EDTA in PBS, and washed in H-HBSS containing 0.2% BSA (fatty acid free, ICN) for use in assays. The CHO-KI cell line and clone pgsA-745, which is mutated in xylosyl transferase and is unable to synthesize heparan or chondroitin sulfate GAGs (23), were obtained from the Imperial Cancer Research Fund Cell Services Department and Dr. John Gallagher, respectively, and were maintained in Kaighn's modified Ham's F-12 medium supplemented with 10% fetal calf serum. These cells were harvested as above.
CHO Cell Transfections-The CHO-KI cell line and GAG-minus clone pgsA-745 (see above) were transfected using LipofectAMINE (Invitrogen) with the CD36 construct, and expression was detected by mAb CLBIVC7 after 48 h. RAGE was similarly transfected into the same CHO cell lines. Successful transfection (50 -70% total cells) was detected by both green fluorescent protein expression and Western blotting for RAGE using the rabbit anti-RAGE antibody.
In Situ Hybridization-Specific localization of the mRNAs for MRP-8 or MRP-14 was accomplished by in situ hybridization using antisense riboprobes. Templates for riboprobe synthesis were constructed by subcloning cDNA inserts encoding full-length human MRP-8 (282 bp) or MRP-14 (345 bp) into BamHI-digested pBluescript II KS (ϩ) plasmid vector (Stratagene). Complementary RNA probes labeled with 35 S-UTP (ϳ 800 Ci/mM; Amersham Biosciences, Inc.) were prepared as run-off transcripts from HindIII linearized plasmids using T7 RNA polymerase. The presence of hybridizable mRNA in all compartments of the tissues studied was established in near serial sections using an antisense -actin probe.
All in situ hybridization was done on 4-m sections of formalin-fixed, paraffin-embedded tissues. The methods for pretreatment, hybridization, washing, and dipping of slides in Ilford K5 for autoradiography were essentially as described previously (24). Autoradiography was at 4°C (two exposures/section for 5 and 7 days for MRP-8 and for 10 and 16 days for MRP-14), before developing in Kodak D19 and counterstaining by the method of Giemsa.
Cell Surface Binding Assay-The cells were resuspended to 2 ϫ 10 6 cells/ml in chilled binding buffer (H-HBSS containing 0.2% BSA, 1 mM CaCl 2 , 1 mM MgSO 4 , and 10 M ZnSO 4 ) containing the respective MRP proteins and blocking agents. The cells were incubated on ice for 40 min and then washed three times with binding buffer. The cells were resuspended in rabbit anti-MRP-14 (1:1000) or mAb 7C12 (anti-MRP-8; 10 g/ml) in binding buffer and incubated on ice for 30 min. After washing, the cells were then incubated with fluorescein isothiocyanate-conjugated goat anti-rabbit IgG (1:400; Sigma) or biotinylated rabbit antimouse IgM (10 g/ml; DAKO) followed by phycoerythrin-streptavidin (1:100; Jackson ImmunoResearch). After washing, the cells were resuspended in PBSA ϩ 2% formaldehyde and analyzed by flow cytometry using a FACScan (Becton Dickinson).
To determine the divalent cation dependence of rMRP-14 binding to cell surfaces, the cells were resuspended in binding buffer containing 1 M rMRP-14 and either no divalent cations; 1 mM MgSO 4 and 10 M ZnSO 4 ; 1 mM CaCl 2 and 1 mM MgSO 4 ; 10 M ZnSO 4 and 1 mM CaCl 2 ; or all three divalent cations. After the cells were incubated on ice for 40 min, they were washed three times with binding buffer containing the same divalent cations and then another three times with binding buffer containing all three divalent cations. The amount of bound rMRP-14 was determined as above.
Heparin Binding Assay-96-well Immunlon 1 plates (Dynex Technologies) were coated with mAb 1H9 at 100 g/ml in PBSA overnight at 4°C. The plate was then blocked with 2% BSA in PBSA for 1-2 h at room temperature and washed three times with H-HBSS. 0.1 M rMRP-14 in H-HBSS containing 2% BSA and divalent cations (1 mM CaCl 2 , 1 mM MgSO 4 , and 10 M ZnSO 4 ) was added, and the plate was incubated for 1 h at room temperature. After washing three times with H-HBSS/cations containing 0.1% Tween 20, 100 l/well [ 3 H]heparin in H-HBSS/cations/BSA with and without blocking agents was added to the plate. The plate was incubated for 1 h (unless otherwise stated) at 37°C and then washed. The bound [ 3 H]heparin was solubilized by 0.5 M NaOH containing 1% SDS for 30 min at 37°C. The contents of each well were added to 5 ml of liquid scintillation mixture (Ecolite ϩ; ICN) and counted (Beckman scintillation counter LS6500). The specific binding was determined by subtraction of binding in the presence of 100 g/ml cold heparin.
In assays with blocking agents, a parallel experiment was performed to test whether the blocking agent was able to compete the rMRP-14 from the anchoring mAb. The plates were coated, blocked, and incubated with 0.01 M rMRP-14 as above. The rMRP-14 was then treated with blocking agent as in the [ 3 H]heparin binding assay. After washing, the amount of bound rMRP-14 was determined by enzyme-linked immunosorbent assay using rabbit anti-MRP-14 followed by horseradish peroxidase-conjugated goat anti-rabbit immunoglobulin (DAKO), both diluted 1:1000 in H-HBSS/cations/BSA. After washing, the bound antibody was detected by O-phenylenediamine (Sigma) according to the manufacturer's instructions. The absorbance at 492 nm was read by a Multiskan plate reader (Titertek).
The salt sensitivity of rMRP-14 binding to heparin was analyzed by the above assay with an additional wash step (washing three times with H-HBSS/cations with and without NaCl) following the incubation with [ 3 H]heparin. Like the blocking assays, a parallel experiment with rabbit anti-MRP-14 determined that the rMRP-14 was not released from the mAb 1H9.
Enzymatic Digestion of HMEC-1 Cell Membrane GAGs-HMEC-1 cells were resuspended at 1 ϫ 10 6 cells/ml in binding buffer containing proteinase inhibitors (20 g/ml phenylmethylsulfonyl fluoride, 1 g/ml aprotinin, and 10 g/ml leupeptin) and various concentrations of heparinase I (Sigma), or alternatively, H-HBSS containing 0.2% BSA, proteinase inhibitors, 0.5 mM EDTA, and various concentrations of chondroitinase ABC (Sigma). The digestions were performed for 4 h at room temperature. Alternatively, the cells were resuspended in PBSA containing 0.5 mM EDTA with and without 0.25% trypsin for 5 min at room temperature. After enzyme treatment, the cells were washed four times in binding buffer and used in the cell surface binding assay as above. The removal of heparan sulfate or chondroitin sulfate was monitored by staining the cells with mAb 10E4 (10 g/ml) or mAb CS-56 (1:100), respectively, followed by biotinylated rabbit anti-mouse IgM (10 g/ml; DAKO) and then phycoerythrin-streptavidin (1:100; Jackson Immuno-Research). As a control for protease activity in the GAGases, the integrity of cell surface proteins was assessed by staining with mAb E1/2.8 (anti-CD44; 13 g/ml) or mAb P5D2 (anti-CD29; 10 g/ml) followed by fluorescein isothiocyanate-conjugated goat anti-mouse IgG (1:400, Sigma).
Expression of MRP-8 and MRP-14 in
Vivo-Immunohistochemical studies have localized MRP-8/14 to venules featuring extravasating myeloid cells (19). A survey of noninflamed tissues revealed little positive staining for MRP-8 and -14 (data not shown). However, in inflammatory conditions, such as found in Crohn's disease, small venules frequently stained with both anti-MRP-14 ( Fig. 1A) and anti-MRP-8 (data not shown). The staining of the two subunits was always coincident. The CD45-positive total leukocyte infiltrate (Fig. 1B) contained CD15-positive neutrophils and CD68-positive monocytes (data not shown) but was negative for MRP-8 and -14 (Fig. 1A). Together these results suggest that in an inflammatory setting, myeloid cells lose expression of MRP-8 and -14 during transmigration, and this can be associated with MRP positive endothelium.
To investigate whether the source of the endothelial-associated MRP-8/14 is the endothelium itself or the transmigrating leukocyte, we tested for MRP mRNA expression within blood vessels of small intestine tissue from Crohn's disease. When small vessels positive for MRP-8 protein (Fig. 1C) were examined by in situ hybridization, no MRP-8 mRNA could be detected in the endothelial cells (Fig. 1D). In contrast, associated myeloid cells were positive for both MRP-8 protein and mRNA. Identical results were obtained for MRP-14 (data not shown). These results indicate that the endothelial cells do not synthesize the MRP proteins found on their surface and provide direct evidence that associated leukocytes are the source of the deposited protein.
To further test human endothelial cells for their ability to synthesize the MRP proteins, we stimulated the microvascular endothelial cell line HMEC-1 (22) for 20 h with a concentration range of the agonists tumor necrosis factor ␣, interleukin-1␣, interferon-␥, or lipopolysaccharide. As determined by immunohistochemistry and enzyme-linked immunosorbent assay for the MRP-8/-14 complex, the HMEC-1 cells were found not to express the MRP proteins even after stimulation (data not shown).
The MRP-8/14 Heterodimer and MRP-14 Protein Bind to Endothelial Cells-The mechanism by which the MRP proteins are tethered to the endothelium was investigated by measuring the binding of these proteins to the endothelial cell line, HMEC-1 cells. The purified native complex of MRP-8/14 bound to the HMEC-1 cells in a monophasic and saturable fashion, as detected with a specific rabbit anti-MRP-14 serum ( Fig. 2A) and the anti-MRP-8 mAb, 7C12 (Fig. 2B). This binding was mirrored closely by recombinant MRP-14 (rMRP-14; Fig. 2A), but rMRP-8 did not interact with the same endothelial cell line (Fig. 2B).
Anti-MRP-14 mAbs 1F5, but not 1H9, blocked the interaction of rMRP-14 with HMEC-1 cells (Fig. 2C). mAb 1F5 also inhibited the binding of the native MRP complex to the same cells, but again this was not affected by mAb 1H9 (Fig. 2C). mAb 1F5 did not block the detection of the MRP proteins by preventing the binding of the anti-MRP-14 antiserum, because the mAb similarly reduced the amount of MRP-8/14 complex rMRP-14 has been demonstrated to bind both calcium (25) and zinc (26) ions. The binding of rMRP-14 to HMEC-1 cells required the presence of both Ca 2ϩ and Zn 2ϩ but was independent of a third divalent cation, Mg 2ϩ (Fig. 3). This result suggests that only the calcium and zinc ion-bound conformation of MRP-14 was able to bind to endothelial cell surfaces.
Heparin Blocks MRP-14 Binding to Endothelial Cells-Chemokines are immobilized onto the vascular lumen by binding to GAG structures on endothelial cells (27,28). To evaluate the contribution of GAGs to MRP-14 binding, a range of GAGs were used as blocking agents. When the GAGs were titrated from 0.1-100 g/ml, heparin potently inhibited the binding of rMRP-14 to HMEC-1 cells with an IC 50 Յ 0.1 g/ml (approximately 7 nM; Fig. 4A and data not shown). Both heparan sulfate and chondroitin sulfate B (dermatan sulfate) reduced the binding of rMRP-14 to endothelial cells but were less potent than heparin. Chondroitin sulfate A (chondroitin-4-sulfate) and chondroitin sulfate C (chondroitin-6-sulfate) were poor inhibitors. Hyaluronic acid (data not shown) and keratan sulfate did not affect the interaction.
Dextran sulfate often interferes with protein-GAG interactions that are dependant on sulfation of the GAG. 100 g/ml dextran sulfate, but not 100 g/ml dextran, inhibited rMRP-14 binding to HMEC-1 cells (Fig. 4B). Together these data show that the endothelial receptors for MRP-14 are highly modified and sulfated GAG structures.
The Interaction between rMRP-14 and Heparin-rMRP-14, which was immobilized through the nonblocking mAb 1H9, bound [ 3 H]heparin. In the absence of MRP-14 or in the presence of 100 g/ml cold heparin, the binding of [ 3 H]heparin was eliminated, demonstrating that the interaction was specific ( Fig. 5A and data not shown). After 1 h of incubation, the binding curve demonstrated that the interaction was saturable and of high affinity, with Scatchard analysis determining the K d to be 79 Ϯ 44 ng/ml (n ϭ 5; Fig. 5B). As the K d and maximum binding determined after 30 min and 2 h did not significantly differ from those determined after 1 h, the binding was considered to be at equilibrium, and the 1-h time point was chosen for further studies. Using the mid-point value of the heparin molecular mass range (i.e. 13,000 Da) yielded a K d ϭ 6.1 Ϯ 3.4 nM. Therefore, MRP-14 has very high affinity for heparin, as compared with most GAG-protein interactions (29). mAbs 1F5 and 6F5 inhibited the interaction between [ 3 H]heparin and rMRP-14 (data not shown). This blocking indicates that the properties of MRP-14 binding to heparin mimic the binding of MRP-14 to endothelial cell surfaces.
The Nature of the MRP-14 to Heparin Interaction-To investigate whether the interaction between rMRP-14 and heparin was dependent on sulfation, modified preparations of heparin were used to block 100 ng/ml [ 3 H]heparin binding to rMRP-14. Heparin blocked the interaction with an IC 50 of 10 -100 ng/ml (ϳ0.7-7 nM; Fig. 6A). This IC 50 is slightly lower than the concentration of [ 3 H]heparin, which probably indicates that the two preparations of heparin differed somewhat. Removal of the amino-linked sulfate groups of heparin (de-N-sulfated heparin) reduced the potency of heparin as a blocking agent by about 3 orders of magnitude. N-Acetylation of the de-N-sulfated heparin (N-acetyl heparin) did not further affect the blocking of the heparin-rMRP-14 interaction. Removal of O-linked sulfate groups from this N-acetyl heparin (N-acetyl-de-O-sulfated heparin) completely removed the capacity of heparin to block the interaction. Parallel experiments demonstrated that these preparations of modified heparin at 500 g/ml did not significantly reduce the amount of rMRP-14 immobilized on mAb 1H9 (data not shown). These results suggest that the binding of rMRP-14 to heparin is dependent on both N-and O-linked sulfate substitutions.
To further investigate the nature of the interaction between heparin and rMRP-14, a salt wash was used in the [ 3 H]heparin binding assay. [ 3 H]Heparin binding to rMRP-14 was disrupted by washing with assay buffer containing 0.5 M NaCl (Fig. 6B). A parallel experiment demonstrated that a 0.5 M salt wash did not reduce the amount of rMRP-14 anchored by mAb 1H9 (data not shown). This indicates that the interaction between rMRP-14 and heparin was largely ionic in nature, further supporting the involvement of the sulfate groups of heparin.
MRP-8/14 Binding to GAGs on Other Cell Types-Next we wanted to confirm that MRP-14 and the complex were binding specifically to GAGs and to exclude the possibility that soluble heparin sequesters the MRP-14 from binding to another receptor on endothelial cells. Therefore, we took advantage of a CHO-KI-derived cell line, pgsA-745, which is defective in GAG synthesis (23). rMRP-14 bound in a saturable manner to the parental CHO-KI cells, but binding to the GAG-minus CHO cells was completely absent (Fig. 7). Similarly, the MRP-8/14 complex at 1 M bound to the CHO-K1 but not the pgsA-745 cells. This result further confirmed the recognition of GAGs by MRP-14.
MRP-14 Binds to Heparinase I-sensitive Endothelial Proteoglycans-Because GAG moieties vary greatly between tissues, the blocking data by GAG preparations isolated from other tissues often give a false impression of the nature of the target GAG. Consequently, we evaluated the contribution that chondroitin sulfate and heparan sulfate made to MRP-14-binding sites on HMEC-1 cells by digesting these structures with specific enzymes. Heparinase I digestion of HMEC-1 cell surfaces consistently reduced the number of MRP-14-binding sites by 60 -70% (Fig. 8A). However, chondroitinase ABC treatment had little effect on the number of MRP-14-binding sites (Fig. 8B). The enzyme was further titrated between 0.2 milliunit/ml and 8 units/ml without affecting rMRP-14 binding (data not shown). The removal of the GAGs was confirmed by the depletion of the heparan sulfate-specific 10E4 epitope (Fig. 8A) and the chondroitin sulfate specific epitope CS-56 (Fig. 8B). The loss of rMRP-14 binding following heparinase I treatment was specific, because removal of binding sites was not affected by proteinase inhibitors but was inhibited by 0.5 mM EDTA. In addition, the digestion by both enzymes did not reduce the expression of the other GAG species or the abundantly expressed membrane proteins CD29 and CD44 (data not shown). The binding of MRP-14 was also eliminated by trypsin treatment of the HMEC-1 cells (Fig. 8C). Again this was mirrored by the depletion of the 10E4 epitope (data not shown). These Involvement of CD36 and RAGE as MRP-8/14 Receptors on Endothelial Cells-CD36 has been reported to act as a receptor for MRP-8/14 (17), and RAGE has been proposed as a general receptor for S100 proteins (7). Therefore, we evaluated the contribution of these receptors to MRP-14-binding sites on HMEC-1 cells. Resting HMEC-1 cells express little CD36 as determined by fluorescence-activated cell sorter analysis (Fig. 9A, inset), and they express no RAGE as demonstrated by Western blotting (Fig. 9B, inset).
Because under certain circumstances endothelial cells can express CD36 and RAGE molecules, we evaluated the relative contribution of these molecules when overexpressed in CHO-K1 cells and the GAG-less mutant pgsA-745 cells. No increased binding of rMRP-14 to either CD36 transfected cell line could be seen, as compared with the mock transfected controls cells (Fig. 9A), although the transfectants expressed substantial levels of CD36 (Fig. 9A, inset). Identical results were also obtained for the MRP-8/14 (data not shown). Similarly RAGE was also transfected into the CHO-K1 cells and the GAG-less mutant pgsA-745 cells and was well expressed (Fig. 9B, inset). As with CD36, we observed no increased binding of MRP14 or MRP-8/14 (data not shown) to either cell line compared with mock transfected controls (Fig. 9B). Therefore under our assay conditions, neither CD36 nor RAGE is capable of binding a detectable amount of rMRP-14 or MRP-8/14 complex even when the receptors are highly expressed. Thus, under conditions when endothelial cells express both or either of these two scavenger type receptors, they are unlikely to contribute significantly to the number of MRP-8/14-binding sites.
DISCUSSION
The MRP-8/14 heterodimer is associated with the endothelium of venules near to sites of inflammation (Ref. 19 and this study). Here we show for the first time that MRP-8/14-positive vessels adjacent to an inflammatory site do not synthesize the MRP proteins but bind MRP-8/14 that appears to have been released by transmigrating myeloid cells. The possibility of synthesis under some conditions is not entirely eliminated, because a murine endothelial cell line is reported to express MRP-8 mRNA (30). However, the human microvascular cell line, HMEC-1, could not be stimulated to produce MRP-8/14. The fact that endothelium can bind MRP protein leads to the question of the nature of the receptor which, by histochemical analysis, appears to be abundantly expressed (Fig. 1).
Here we have shown that the MRP-8/14 complex and rMRP-14, but not rMRP-8, bind to the endothelial cell line HMEC-1. Two anti-MRP-14 antibodies prevented the binding of both the complex and rMRP-14, suggesting that it is the MRP-14 subunit of the complex that interacts with the endothelial cells. The rMRP-14 binding to the endothelial cells was blocked by heparin, with heparan sulfate and chondroitin sulfate B being less potent inhibitors. Interestingly, these three GAGs all contain significant amounts of iduronic acid, which is thought to be structurally important for many specific protein-GAG interactions (31). The MRP-14-binding sites were susceptible to digestion with both heparinase I and trypsin but not chondroitinase ABC. Thus, we conclude that the MRP complex can bind to heparan sulfate structures of endothelial cell surface proteoglycans. We also demonstrate that rMRP-14 binds heparin directly and that this appears to be dependent on ionic interaction with the N-and O-linked sulfate substitutions of the GAG.
GAG modifications vary greatly between tissues. The sulfation pattern recognized by MRP-14 appears to be widespread, because the recombinant protein binds to several cell lines, including T lymphoblasts, neutrophils, myeloid cell lines, COS cells (data not shown), and CHO cells, with a similar or slightly reduced affinity compared with HMEC-1 cells. The only tested cell line to which rMRP-14 did not bind was a GAG-minus CHO cell mutant, thus confirming the nature of the MRP-14 receptor. In addition, the amount of rMRP-14-binding sites and therefore the target GAGs do not alter in expression following endothelial cell stimulation (data not shown), suggesting that these GAGs are stable membrane structures.
rMRP-14 binds to heparin with a high affinity for a GAGprotein interaction (K d ϭ 6.1 Ϯ 3.4 nM) which ranges from 10 Ϫ9 to 10 Ϫ5 M. For example chemokines, such as RANTES (regulated on activation normal T cell expressed and secreted) and interleukin-8, that also bind to endothelial GAGs have K d values in the M range (28). Identifying heparin binding sequences in proteins has been difficult but consensus motifs, such as XBBBXXBX, XBBXBX, or TXXBXXTBXXXTBB have been suggested (29). MRP-14, MRP-8, and other S100 proteins do not contain these motifs (data not shown). It is therefore likely that the heparin-binding motif is formed as a result of the tertiary structure of MRP-14. This conjecture is backed up by the finding that 15-20 residue peptides spanning the se- The binding of MRP-14 to the endothelial cells is dependent upon Ca 2ϩ and Zn 2ϩ . The structure of several S100 proteins undergoes conformational change in the presence of Ca 2ϩ , which includes the exposure of a putative receptor binding cleft (32). In addition, the structure of Ca 2ϩ occupied S100A7 (psoriasin), a close homologue of MRP-14, is altered on ligation of Zn 2ϩ (33). Ca 2ϩ and Zn 2ϩ are known to bind to the MRP-8/14 complex and induce a conformational change (16). Therefore, we propose that the divalent cation regulation of the MRPs will be critical for their function.
It has been reported that the scavenger receptors CD36 (17) or potentially the RAGE might serve as cell surface receptors for the MRP-8/14 heterodimer. Interestingly, the proinflammatory functions of S100A12, a close homologue of MRP-14, are attributed to ligation of and signaling through RAGE (7). RAGE also mediates the neuronal outgrowth stimulated by S100A1 and S100B proteins (11). The HMEC-1 cells in this study expressed little CD36 and no detectable RAGE with neither, therefore contributing to the observed binding to the endothelial cell line. Because activated endothelia can express both of these scavenger receptors, we next transfected CD36 or RAGE into GAG-expressing and GAG-lacking CHO cells to compare the binding by these receptors to that by GAGs. No increase in MRP-14 or MRP-8/14 binding could be detected to either CD36-or RAGE-expressing CHO cells even without the background of GAGs. This suggests either that the MRP proteins do not bind these receptors or that our assay is not sensitive enough to detect any binding. Although this result does not discount a low level of MRP-8/14 binding to these receptors, it seems unlikely they contribute significantly to the overall number of binding sites on target cells or to the depo-sition of MRP-8/14 on the endothelium as observed in vivo.
A recent paper by Srikrishna et al. (34) reported that MRP-14 and MRP-8/14 complex bind to novel carboxylated N-glycans on endothelial cells and that the binding was blocked by the N-glycan-specific mAb GB3.1. This interaction may be distinct from the GAG binding we describe here, because mAb GB3.1 binding is insensitive to heparin (up to 250 g/ml) and heparan sulfate (up to 50 g/ml). 2 It is interesting to speculate that the N-glycan reactivity may account for the heparinase I-insensitive binding of MRP-8/14.
The function of the MRP complex within inflammation is poorly defined. MRP-14 antiserum reportedly inhibit transmigration of monocytes expressing MRP complex (35). The results in our study could provide a mechanism for this observation because cell surface bound MRP-8/14 could act as an endothelial cell receptor. In addition, the immobilization on GAGs is also consistent with and in fact provides a localization mechanism for an anti-oxidant activity of MRPs in protecting the endothelium against oxidative damage by leukocytes, as proposed by Geczy and co-workers (37).
The binding of the MRP complex to GAGs resembles that of the chemokines. It is thought that immobilization on proteoglycans prevents chemokines from being washed away in the blood flow, localizing them to the site of inflammation (27,38). Additionally, signaling by these inflammatory mediators is believed to be enhanced by their presentation on endothelium to rolling leukocytes expressing their receptors (28). Chemokines then signal to inflammatory cells via interacting with G protein-coupled receptors. Similarly GAGs may facilitate the binding of the MRP complex to an additional receptor still to be 2 G. Srikrishna and H. Freeze, personal communication. identified that might then signal into the cell.
In summary, we have demonstrated that the major receptor for the MRP-8/14 complex on endothelium is a heparan sulfate moiety. The widespread expression of such GAGs suggests a certain nonselectivity in MRP complex binding. However, it is probable that a specific binding stimulus induces release of these proteins from myeloid cells on vessels near an inflammatory site where they will function. Such stimuli have still to be identified. Our study provides a mechanism for the presentation of the vessel-associated MRP-8/14 and may, as a basis for further investigation, help elucidate the extracellular functions of the MRP proteins. | 7,086.8 | 2002-02-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
One Village One Product Movement in Rural Economic Empowerment
The one village one product movement is one of the village community empowerment programs, which aims to develop superior village products that are considered to have more competitive competitiveness. With the one village one product program, it is hoped that it can grow and motivate villagers in developing their business activities, so that they can increase community income and welfare, and can support rural economic growth. Through the synergy of the use of village funds in community economic empowerment, superior village products can be produced that are suitable and appropriate to drive the community's economy. For this reason, the involvement of village residents in the process of determining the economic empowerment program is very necessary, so that the superior village products that are determined are business activities that have been produced by most of the village community, and are truly in accordance with their needs. Community economic empowerment programs must be able to reach various aspects needed, including institutions, business management, technology transfer, and marketing, because the orientation of the development of superior village products is able to enter modern markets. Therefore, it takes the commitment of the village government to carry out guidance and development by providing the facilities needed in the economic empowerment program. The synergy of the village government and villagers can become social energy in driving the village economy through the one village one product program. Keywords : people's economy, empowerment, OVOP, village government. DOI : 10.7176/RHSS/10-18-04 Publication date :September 30th 2020
BACKGROUND.
The concept of development that is oriented towards economic growth has actually caused a wider social gap between rich and poor people. In the midst of high economic growth, there are still many poor and unemployed people who have not improved their standard of living and have even experienced a social decline. According to Soetomo (2010: 5), the factors that cause this condition are the lack of access to markets and resources, the weak ability to utilize natural and human resources, an unbalanced social structure and urban bias in the decision-making and allocation process. fund. For this reason, the implementation of development must pay more attention to its human aspects by highlighting its social and economic aspects (Socio-economic Development). Development must be oriented towards increasing productivity and economic growth with a more prioritized target of providing services to layers of society who live below an adequate standard. Efforts are made for development programs to be able to hit the target group directly, so that the results of development can be enjoyed significantly in the economic and social fields.
A development approach that prioritizes the process over the results, can allow synergy between development goals and the interests of the target group. A development approach that places the community as the subject of development will be able to parse the social problems faced by the lower classes of society and will easily produce development programs in accordance with the expectations of the wider community. Community involvement in the development process is not in the sense of mobilization but as a form of community participation based on their awareness and responsibility. Community involvement must start from problem identification and program formulation, through to the implementation and management of development program results. Community involvement in problem identification will be able to map the urgency of the problems faced, because many cases that occur in understanding social problems are carried out from what is visible on the surface, to something that appears on the surface is not necessarily the real problem. This problem is caused by structural and institutional obstacles, so that it requires commitment and willingness from all parties, especially the government. The treatment of the community as the subject or actor of development must be increased in the village development process through the development of the quality of human resources. According to Tjokrowinoto (1996: 29), efforts to develop human resources reach a wider dimension than just forming professional and skilled human beings who are in accordance with the needs of the system to be able to contribute to the development process, but emphasize the importance of human empowerment, including the ability to actualize all its potential as a human being.
Community development calls for a change that leads to progress, and even Sajogyo (1982: 32-82) explains that not every change is development, especially community development, especially if the process of change does not contain institutional and organizational changes capable of moving society independently. If the latter conditions are not fulfilled, then technological changes and economic changes will not function as expected, besides the less even distribution of benefits from these changes. Moreover, without the support of institutions and organizations that are able to move the community independently, it often results in the sustainability of the development process being hampered. This is because the community is more dependent on external resources and services, on the other hand, there is a lack of internal driving force in the context of exploiting the resources and potentials available in the community (Soetomo, 2010: 15-16). Therefore, a community institution that is able to mobilize and facilitate various joint activities in development is needed. These community institutions need an established existence within the village community, so that technological changes and changes in socio-economic structures can run effectively. Village community institutions must be able to identify resources and potentials that can be used for the benefit of the community, especially to increase income and welfare. The decision-making process in managing village development is more independent by community members, as well as an effort to empower village communities in actualizing their potential in order to fulfill their dignity and dignity as human beings. Community empowerment means providing power, according to Korten (1987: 7) power as the ability to change future conditions through action and decision making. Development itself can be interpreted as an effort to build power by a society, among others in the form of increasing the ability to change future conditions (Soetomo, 2010: 404). Oos M. Anwas (2013: 49), empowerment (empowerment) is a concept related to power (power). The term power is often synonymous with an individual's ability to make himself or other parties do what he wants. This ability is good for organizing themselves, managing other people as individuals or groups / organizations, regardless of the needs, potentials, or desires of others. in other words, power makes other people the object of his influence or desires. Prijono & Pranarka (1996: 77) also states that: empowerment contains two meanings, namely: the first meaning is to give power or authority, and the second meaning is to give the ability to or enable. This definition implies that empowerment is giving strength or ability to those who are less / powerless. Or it can be interpreted as giving the authority or authority to carry out an activity which is meaningful to increase social activities. With community empowerment, people can make people survive in the face of social changes that occur, through the ability to control their own lives and strive to shape the future according to their wishes.
Social problems such as poverty are not only limited due to economic problems, but also due to limitations in utilizing their potential, as stated by Gunawan Sumodiningrat (1999: 67-68), economic empowerment is an effort to encourage, motivate and raise public awareness of the potential that exists. and efforts to develop it, which means efforts to accelerate changes in the people's economic structure so as to strengthen the position and role of the people's economy in the national economy. This structural change includes the process of changing from a traditional economy to a modern economy, from a weak economy to a more resilient economy. Public awareness of their potential is the key to the success of community empowerment efforts, because the grassroots are less aware of their abilities so that they have limited ideas and ideas in utilizing their environmental resources. Government initiatives in mobilizing and facilitating environmental resource management, through the OVOP (One Village One Product) movement by encouraging each region to develop superior products that have the potential to be developed. In the explanation of M Arief (2012), the program mission is developed based on three philosophies, namely: (1) is a globalizing local product, (2) produces products based on creativity and with their own abilities, and (3) simultaneously develops human resource capabilities.
The concept of one village one product (OVOP) is considered quite successful in Japan and Thailand in improving the welfare of rural communities. With the OVOP approach, each village is expected to formulate superior products that have competitive value and competitiveness in modern markets. The issuance of Village Law No. 6/2014 can be a momentum to spur development in the village, because the village is the spearhead of development and improving the welfare of village communities. Given the authority to manage village development, it can be freer to formulate village development programs that directly touch the people's economic activities. The one village one product program carried out by the Ministry of Industry is small and medium industries (IKM) which can be an impetus to determine the superior products of each village, which can be in the form of goods and services (tangible products) and / or intangible products in the form of works of art and culture local. Meanwhile, the driving force for the rural economy is home industry activities with products produced on a small scale, and carried out by family members who lack skills and limited business capital. If this gets the attention of all parties, especially the village government, it can be an alternative business that is the superiority of the village community as a driver of the village economy. Coaching programs through training and technology transfer can improve the management and quality of the products produced, so that they are able to compete in local, national and global markets. Most of the village community do not understand more rational business management, making it difficult to produce quality products. With a touch of modernity, it is hoped that it can increase product competitiveness and be able to reach a wider market, so that it can increase people's income and welfare.
LITERATURE REVIEW.
OVOP (one village one product) can be said to be a strategy in the implementation of the concept of a populist economy, where the people's economy emphasizes the empowerment of the people's economy, namely the Research on Humanities and Social Sciences www.iiste.org ISSN 2224-5766 (Paper) ISSN 2225-0484 (Online) Vol.10, No.18, 2020 empowerment of small-scale economic actors. The failure of development so far has been caused by the lack of equal opportunities for the people, especially those with small capital, to be involved in various national economic activities, so that development is only enjoyed by a small part of the community. Therefore, future economic policies must be able to encourage and grow a people's economy through equal opportunities in various economic activities of the country. It is undeniable that the great potential for our economic strength is the potential that comes from the activities of small and medium enterprises (MSMEs), but the existence of such large MSMEs does not contribute to economic growth. According to Sun'an and Abdurrahman (2015: 121), that: in fact every economic policy produced by the government (central / regional) must consider two sides, namely the goal of creating social justice and compromising it with economic growth. In some cases, the goals of social justice and economic growth are trade-offs, so caution is needed to take these economic policies. Therefore, the role of the small community must be given access and opportunity in the community empowerment process so that the empowerment program is in accordance with the needs of the people. Meanwhile, the government is obliged to facilitate by creating and encouraging the growth of people's initiatives and initiatives in meeting their needs in order to solve the problems they face.
The role of UMKM empowerment as the people's economic base must be increased through fostering efforts in more rational business management. Meanwhile, the conventional approach based on direct distribution of products to the community has failed and is not on target, because the conventional methods of policy taken are not in accordance with the hopes and desires of the lower class. The conventional way through direct assistance to the community creates more dependence on the community on the government, for that community empowerment must be oriented towards efforts to utilize the potential of the community so that they have the ability to independently manage the potential that exists in their environment. According to Edi Swasono (2015), the people's economy or grass-rots economy is a derivative of Indonesia's populist doctrine. Popular doctrine is a gospel based doctrine that is a Throne for the People. The people's economy is a form of a people-based economy, and an economy centered on the interests of the people (people-centered economy) which is the core of article 33 of the 1945 Constitution, especially paragraphs two and three (sc.syekhnurjati.ac.id › researchmhs ›BA ...). One of the flagship programs that are being developed by the government through the Ministry of PDTT is the one village one product movement (one village one product). This program aims to encourage the economic growth of rural communities. Each village is encouraged to find and develop a superior product that has different characteristics from products from other villages. In the OVOP concept, the public is given an understanding to be able to produce selected goods with high added value. One village is expected to be able to produce a competitive main product and be able to compete at the global level but still have the unique characteristics of the region. The products produced are products that utilize local resources, both natural and human resources. The One Village One Product movement is an effort to foster the spirit of the village community to be involved in determining existing village products to be developed into superior village products. There is a lot of potential generated by villagers, but they have not received serious touch and treatment from the government so that the products produced in the village are not able to reach a wider market. For example agricultural products of community members that produce durian fruit that are unique to other durians. Due to limited capabilities and social networks, the durian fruit produced by villagers should be initiated in such a way as to reach modern markets. For this reason, government commitment is needed in providing guidance to superior village products so that they have wider marketing access, so as to increase income and community welfare. Business activities developed by village communities, usually as business activities passed down from generation to generation as a legacy from their families or can be said to be a continuation of the business continuity that has been occupied by their parents, for example making tempe products, cassava tape, tofu, krupuk pulli, and various rural snacks . These rural business activities are usually only side businesses, which take advantage of the free time after the management of the agricultural sector so that the quantity and quality of the products produced are relatively limited and their marketing reach is very narrow or limited in their environment.
The business activities of the villagers are actually potential if they are developed in such a way that they can produce various types of products that can be packaged in a modern way. Therefore, the OVOP approach is not only implemented in products that have a global orientation, but should also be applied to village products with an orientation towards regional or national marketing. This is of course very useful in anticipating various similar products from outside entering the regional market, because products from outside have entered at the district, sub-district, and even remote rural areas. The OVOP movement in the perspective of village products is intended to empower village superior products by adjusting OVOP principles in order to increase the competitiveness of village products at the regional and or national levels. The principles of OVOP in the application of village products include: (1) Local yet Regional / National, which is the underlying principle in developing OVOP products, where village products not only reflect village pride but also can be accepted in regional and national markets in particular.
(2) Self-reliance and creativity, through this principle the development of village products as part of encouraging the independent movement of community members in managing their village product businesses. Emphasis on this principle fosters the independence of the community's businesses, and the government only plays a role in Research on Humanities and Social Sciences www.iiste.org ISSN 2224-5766 (Paper) ISSN 2225-0484 (Online) Vol.10, No.18, 2020 facilitating the needs of community members to develop their businesses. (3) Human resource development, is intended as an effort to improve business management in order to be able to produce quality village products and have more competitive competitiveness in order to enter modern markets. By adjusting the above principles, a wider range of business products will be possible to be developed and even each village can have at least one product as the village's superior product. Through more massive coaching, it is hoped that it can grow the village economy as well as an effort to increase the income and welfare of the community. Efforts to promote the capacity and economic efforts of the community can be carried out through several things, including: (1) Providing various trainings as well as increasing capital, market information, and appropriate technology. These market-friendly measures are provided selectively, transparently and firmly, accompanied by effective supervision. (2) Creating a healthy business competition climate and market-friendly intervention. Efforts to equalize go hand in hand with efforts to create a competitive market to achieve optimal efficiency. Thus, for example, the partnership relationship between large businesses and SMEs must be based on competence, not compassion. For this reason, the priority is to eliminate economic practices and behaviors that are considered fair and just by society, such as monopolistic practices, development with a progressive taxation system and deregulation aimed at eliminating high cost economies.
(3) Empowerment of people's economic activities is closely related to efforts to drive the rural economy. Therefore, efforts to accelerate rural development, including remote areas, minus areas, critical areas, border areas, and including other underdeveloped areas must be a priority. This is done, among others, by increasing rural infrastructure development in supporting the development of village-village linkages as a form of mutually beneficial production and distribution networks. (4) Utilization and use of land and other natural resources, such as forests, sea, water, air and minerals. Everything must be managed fairly, transparently and productively by prioritizing the rights of the local people, including the customary rights of indigenous peoples while maintaining the preservation of environmental functions (Salamah, 2014).
According to Zulkarnain (2003: 14) in Salamah (2014), the steps or efforts that must be considered in realizing or developing a populist economy are: (1) Identifying economic actors, such as cooperatives, small businesses, farmers, (2) ). Conducting a coaching program for these actors through a companion program, (3). Training education programs according to their needs when developing a business, (4). To coordinate and evaluate those involved in the coaching process, both in the development of capital, human resources, markets, market information, and the application of technology. Identification of economic actors is needed in order to know and understand the problems it is facing, which is the basis for providing guidance through education in order to hit the right targets. In addition, creating a business climate that enables the development of the potential of business / economic actors, and strengthens economic potential by facilitating increased education, and improving health status, as well as protecting and preventing unbalanced competition, and preventing the exploitation of strong economic groups to weak economic group. For this reason, according to Fachri Yasin, et al. (2002: ix), the development of a people's economy requires several things in its implementation, including: first, local government political commitment in the form of policies that are consistent and can be operationalized in the field. Second, including farmers, small businesses, and cooperatives in all aspects of agricultural development using a participatory approach. Third, the willingness and high commitment of the local government by including universities, non-governmental organizations, the private sector and others in coaching and training activities to support people's economic development. Fourth, providing capital assistance to farmers, small businesses and cooperatives in the form of credit, revolving funds and other assistance that is not burdensome. And fifth, good coordination between related agencies that are directly involved in the economic development of farmers, small businesses and cooperatives (Salamah, 2014).
From the main thought above, it can be stated as a hypothesis in this study, are: 1. Ho = There is no relationship between the movement of one village one product (One Village One Product) to the empowerment of the village economy, 2. Ha = There is a relationship between the movement of one village one product (One Village One Product) to the empowerment of the village economy. While the focus and direction of this research can be described in the following diagram: Village Economy Empowerment (VY) One Village One Product (VX) Research on Humanities and Social Sciences www.iiste.org ISSN 2224-5766 (Paper) ISSN 2225-0484 (Online) Vol.10, No.18, 2020
RESEARCH METHODS
This study uses a quantitative approach to determine the respondents' assessment of the One Village One Product Movement, and village economic empowerment. Data were collected through distributing questionnaires to 120 randomly selected respondents in 6 villages in Ngawi Regency. Each village was determined by 20 respondents using stratified random sampling, consisting of: village heads and village officials, BPD, LKMD, PKK, and village community UMKM-UMKM. The respondent's assessment is measured using a Likert scale with a gradation from very positive to very negative, in the form of words including: a) Strongly agree with a score of 5, b) agree with a score of 4, c) Doubtful with a score of 3, d) No agree with score 2, and e) Strongly disagree with score 1. While the data analysis method uses a regression analysis model with processing through SPSS.
RESULTS AND DISCUSSION. Correlation Test
To test the hypothesis, a correlation test was carried out between the One Village One Product (One Village One Product) Movement (VX) variable as the independent variable on village economic empowerment (VY) as the dependent variable. The results of the correlation test are as follows: From the table above, the correlation value between the variable One Village One Product Movement (one village one product) with the village economic empowerment variable is 0.790 with a p-value = 0.000. When compared with the value α = 0.05, it is known that the p-value = (0.000) <α (0.05). Thus, the Ha hypothesis is accepted, that is, there is a correlation between the One Village One Product Movement and the empowerment of the village economy.
Regression Test
The results of the regression calculation between the OVOP strategy variables on village economic empowerment are: (2) The One Village One Product Movement variable which is valued at 0.861 (positive) indicates the influence of the One Village One Product Movement on village economic empowerment. If the One Village One Product Movement (One Village One Product) increases by 1 unit, then the village economy empowerment will also increase by 0.861. Thus the One Village One Product Movement has a positive effect on the empowerment of the village economy.
Determination Test.
The coefficient of determination (R2) is used to measure how far the model's ability to explain variations in the dependent variable (Ghozali, 2006). The results of the determination coefficient test are: , while the remaining 34.8% is explained by other unexplained variables. in this research. The one village one product movement is a government effort to encourage and motivate the trust of the village community that the products produced have higher competitiveness, if managed with more rational business management. Through the one-product village movement, it is hoped that each village will have a superior product that can become an icon and at the same time increase rural economic growth. The management of rural products which will become icons of superior products, in their development will be initiated directly by the local government with the support of the village government through the use of village funds. The village community together with the village government conduct selectivity in determining village products which will later become superior village products. The involvement of the village community is expected to foster a collaborative business spirit that will be facilitated by the village government, starting from business management coaching and training, appropriate technology transfer, packaging, to marketing. With this joint commitment between the village community and the village government, it can increase the competitiveness of village products in modern markets and at the same time increase the income and welfare of the village community.
CONCLUSION
The one village one product movement is an approach to empowering rural communities through the economic business activity sector as a superior product. This movement is developing and quite effective in growing and mobilizing the confidence of the village community that the business products produced have a fairly good competitiveness if done professionally. Through cooperation among villagers, the one-product village movement is able to produce joint business activities in managing business products in a rational business manner. With the commitment and guidance carried out by the village government, it can facilitate the establishment of people's economic institutions as the basis for joint business management. The joint encouragement between the village government and the village community is able to increase the income and welfare of the community and at the same time encourage rural economic growth. | 5,986.4 | 2020-09-01T00:00:00.000 | [
"Economics",
"Business"
] |
On a Controlled Random Deployment WSN-Based Monitoring System Allowing Fault Detection and Replacement
This paper presents a sensor random deployment scheme to monitor 2-dimensional areas for constraining applications while providing a mathematical control of the coverage quality it allows. In addition, useful techniques to detect and repair sensor failures are also added to provide system robustness based on this scheme. In particular, mathematical formulas are developed to express the probability of complete coverage when the environment characteristics are varying, taking into account the deployment parameters. Moreover, a methodology is presented to adapt this scheme to the need of various WSN-based monitoring applications. A simulation is also performed to show the efficiency of the developed strategy, highlight some of its features, and assess its impact on the lifetime of the monitoring system it serves.
Introduction: Need for Failure Detection and Repairing
With evolving sensor technologies, a growing number of sensors can be installed to architecture for the management and control of systems monitoring 2D areas. As the monitoring applications utilize the sensor data for alarming and decision, it is essential that (a) the data acquired by the sensors are accurate and reliable; (b) the sensing coverage quality is maximized at any time; and (c) the sensors are acting properly. One can notice, in particular, that sensor faults become more frequent compared to the architecture's lifetime and that the deployment techniques affect deeply the quality coverage of WSN. The deployment strategies for WSNs providing area monitoring can be classified into three categories, namely, the static nodes placement with controlled deployment, the static nodes placement with random deployment, and the dynamic nodes placement with random deployment [1]. When it comes to the continuous monitoring of 2-dimensional areas (2D areas), all these techniques experience numerous drawbacks, either for their feasibility and the availability they provide or for the control they offer and the sensor lifetime they allow.
In particular, while the deployment strategies in the first class give optimal and guaranteed qualities of coverage in areas easy to access [2], these strategies fail hardly in accessible areas, since the sensors cannot be placed in positions chosen to ensure full coverage of the monitored area. On the other hand, the static node placement with the random deployment schemes operates by assuming that sensors are spread randomly in the monitored area [3,4]. These schemes do not ensure total sensing coverage and radio connectivity to report all collected events, because the distribution of the sensors may not be uniform in the given area.
Strategies in the third class allow dynamic node placement with random deployment [5,6], assuming that the deployed sensors are able to move within the monitored area. They mainly proceed through two steps. In the first step, the sensors are randomly spread in the monitored area. During the second step, the deficiency of coverage quality is compensated by commanding the sensors to move and change their positions to ensure the required quality of coverage. Monitoring applications using such strategies experience two major drawbacks. First, the node motion may cost a lot of energy. Second, the sensors may not be able to move properly to their new positions because of the nature of the monitored area and the obstacle irregularities they may face. On the other hand, efficient monitoring systems should be able to address different constraints. In particular, they should provide (a) total sensing coverage of the monitored area or at least a large part of it, if the supported application is satisfied with it; (b) a wireless sensor network capable of relaying any detected event to the central station(s) in a real time manner; (c) the optimization of the energy consumption to provide longer sensor lifetime and reduce cost operation; and (d) the detection and repair sensor failures to keep the area continuously and properly covered.
In actual fact, a faulty sensor cannot perform its monitoring function properly, but, as an alternative, it may provide false information and induce erroneous decision, thus making the system unreliable or overconsuming energy [7]. Therefore, it is necessary to detect such failures and adapt the network to the new situation by running correcting actions such as sensor replacement. When the system is redundant, removing a failing sensor will not result in a loss of accuracy. However, if that is not the case or when the WSN has to operate for long time, a technique is needed for detecting, isolating and replacing/correcting a faulty sensor.
In this paper, a multistep method is proposed to deploy sensors and detect, isolate, and repair the sensors in the network when they get faulty. The main contribution of this paper is 4-fold.
(i) First, it builds a dropping scheme, from air, for example, capable of providing tight control on the landing positions of the deployed sensors based on a landing pattern taking into consideration the characteristics of dropping environment and the sensor transporter. (ii) Second, a mathematical model is developed to control the sensing coverage quality and the quality of network communication provided by the deployed WSN, using deployed data relaying nodes. (iii) Third, it builds a monitoring scheme for the energy depletion control and the management of failures of the deployed sensors while allowing fault predictions using rule-based strategies. (iv) Fourth, it builds mechanisms to replace (or repair) faulty sensors and increase the network availability and lifetime using the proposed deployment scheme.
In particular, the provided model and techniques allow planning the WSN design in a way that increases the probability of network connectivity by controlling a set of parameters including, but not limited to, the dropping point locations, the number of deployed data relaying nodes and their range, the dropping altitude, and the errors associated with the variation of the landing patterns.
The remaining part of this paper is organized a follows. Section 2 develops the proposed deployment scheme of WSN for 2D area monitoring applications, in its static form. Section 3 extends the mathematical model to integrate the variation of the parameters involving the environments and the sensor transporter. Section 4 discusses techniques for the detection and prediction of sensor faults. Section 5 discusses different uses of the proposed deployment scheme and gives rule-based strategies for the prediction. Section 6 discusses techniques for faulty sensor replacement. Section 7 develops a numerical simulation of a system based on the proposed scheme. Section 8 concludes this paper.
Controlled Random Sensor Deployment for Area Monitoring
Assume that a WSN is to deploy in a 2D area to monitor the occurrence of some events. Assume also that the WSN has a hierarchical structure, in the sense that it has three layers. The first layer is formed by basic sensor nodes (SNs). The role of a SN is to detect the occurrence of prespecified events that help in monitoring a given area and to report the collected data to near nodes in the second layer (eventually through other SNs). The second layer is formed by communication nodes (CNs) acting as cluster heads for the SNs in the first layer and routing those sensor's reports to the nearest (sink) node in the third layer, as called analysis nodes (ANs). The ANs are responsible for (a) message analysis and prediction needed by the application served by the WSN; (b) sensor fault detection and localization; and (c) the energy management of all sensors in the WSN.
For the sake of coverage efficiency, the layer two nodes should constitute all the time a connected network. In the following two subsections, we define the major mathematical model to deploy sensors and determine their landing positions within the area to monitor and the deployment scheme, when the environment parameters do not vary during deployment. In addition, we assume that the nodes in layers 1 and 2 are dropped from air following a deployment pattern that we will formally define.
Deployment Patterns for Sensors.
We define a sensor deployment pattern (SDP) as a 5-tuple ( , ⃗ , ⃗ , , ), where is the number of sensors to deploy, is the set of the landing positions of the sensors assuming that the sensors are one by one dropped from an airplane (e.g., the helicopter), ⃗ is the speed vector of the sensor transporter, ⃗ is the wind forces experienced by the sensors during dropping and falling down, and is a fixed interval separating two successive dropping times of the sensors.
We assume in the following that is sufficiently small or that parameters ⃗ and ⃗ are unchanging during the deployment of the sensors. To determine the landing area of the sensors, we assume, for the sake of simplicity, that ⃗ has no vertical component ( ⃗ = ( , , 0)) and that Using the fundamental principle of dynamics, one can see that the motion of a sensor dropped from position = (0, 0, ℎ) will follow the following path until its landing: where , , and are the mass of the sensor, the universal acceleration, and speed of the airplane (assumed to be parallel to the -axis) on dropping, respectively. Assuming that the dropping position is equal to (0, 0, ℎ), then the landing time and position of the first sensor can be given, respectively, by Thus, it is easy to deduce that the landing time and position of the sensor , 1 ≤ ≤ , are given by Assume that the sensors have a sensing range equal to and a communication range equal to . To provide full monitoring, we first assume that meaning that after deployment, successive sensors can communicate with each other and that they guarantee sensing of the line between them. More precisely, we have the following result.
Proposition 1.
Let sensors have a deployment pattern defined by ( , ⃗ , ⃗ , , ). After deployment, the area monitored by the deployed sensors is the union, ⋃ ≤ ( , ), of the discs sensed by sensor , as located at , 1 ≤ ≤ .
Moreover, if ≤ 2 ≤ , the area contains a thick strip (or rectangle) containing the landing positions , 1 ≤ ≤ , and having a length and a width equal, respectively, to In addition, the WSN is radio connected.
Proof. Using (4), one can deduce that the landing point of the sensors is given by Thus, the distance between two successive landing points is equal to . Consequently, the deployed sensors can communicate with each other, since ≤ . On the other hand, one can see that the sensing ranges of two neighboring sensors and + 1 intersect in two points and (as depicted in the lower part in Figure 2) given by The distances between and and and +1 are given, respectively, by and . Therefore, the area covered by the sensors contains the rectangle of length and a width equal to , as characterized by vertices 1 , 1 , . . . , , . Its axis passes by the landing points , 1 ≤ ≤ .
When ≤ 2 , one easily concludes that the thick strip is reduced to a line containing 1 to .
Front-Sense Scheme.
Using the result achieved in the previous subsection, we can define a deployment scheme, called frontier sensor (Front-Sense) deployment scheme, that is capable of providing total sensing coverage of a 2D area to monitor, using a connected 3-layer hierarchical WSN. To explain this, we assume, for the sake of simplicity, that the area to monitor is a rectangle and that the sensing deployment patterns operate under the same wind and flight conditions. More varying conditions can be easily considered.
The Front-Sense scheme is a 3-step process operating as follows.
Step 1 (decomposing the rectangle into landing pattern-based zones). Assume that the length and the width of the rectangular area to monitor are equal to and , respectively, and that its south west vertex is (0, 0, 0). Let us partition it into strips of length and width + ( /2). The partition will need ⌈( + )/( + ( /2))⌉ strips, so that every point in the rectangular area will be in a sensing disk. We have denoted here the well-known ceiling function by ⌈−⌉. Figure 2 facilitates the presentation of the domain partition. Step 2 (determining the sensor deployment pattern for every strip). Let us assume that the sensor carrier is able drop the sensors from altitude ℎ. Since the strips have equal lengths in the rectangle, the deployment patterns to deploy sensors in every strip will differ only on the droppings point to guarantee full sensing coverage of the rectangle. Let The expressions are easy to set since the strips of odd order are similar, while the strips of even order are similar. Thus, the total number of dropped sensors is equal to the number of strips (= ⌈( + )/( + ( /2))⌉) times the number of sensors per strip (= ⌊ / ⌋).
Step 3 (computing the landing positions of sensors to deploy). The landing positions of the first sensors in the first strip and the second strip are given, respectively, by The positions of the deployed sensors in the first and second strip are obtained from 1,1 and 21 using transaction vectors The positions of the remaining sensors are deduced from the positions of the sensors in the first and second strip using translation vectors (0, −2 ( + 2 ) , 0) , 1 ≤ ≤ ⌊ 2 ⌋ . Figure 2 depicts the rectangular area to monitor and its partitioning into sensing disks. To monitor areas with general forms, we first construct the smallest rectangle containing the given area. Then, we can decompose the rectangle into strips with the width considered in Step 1. The strips are then shortened to the right size by considering the intersection between the area and the strips. The deduced strips have variable lengths. This makes the deployment patterns assigned to each strip different in the number of sensors they handle and the landing points.
Strips Area to monitor
Step 1 in Front-Sense scheme can be modified accordingly, and the number of sensors is deduced for every strip. Figure 3 depicts an example of area coverage area by five strips.
Coping with Environment Variations
Several assumptions have been made in the previous sections to make Front-Sense scheme presentation and explanation easy to understand. Assumptions have mainly included the airplane velocity, the dropping altitude, the wind intensity, the area form, and the geographic features of the area. In the following, we address these issues under two hypotheses: low variation and modeled variation.
Adapting the Scheme to Slow Variations: Pattern Management.
To cope with the variation of the wind speed, one can choose to measure more often the wind speed and deploy the sensors in a strip using as many sensor deployment patterns as possible, in the sense that the number of sensors in a pattern is fixed based on an estimated period of invariability of the wind speed. To implement this, let us assume that the wind speed statistics show that if the wind measurement is made at an instant 0 , then the variation of the wind intensity is negligible in the time interval [ 0 , 0 + ], for given positive . Then, the number of sensors to deploy during this interval using a pattern should be equal to ⌊ / ⌋. After dropping these sensors, a deployment pattern is reconstructed to continue dropping sensors in the strip, taken into consideration a new measurement. The estimation of can be made using variation methods including averages made on historical data. One can notice, however, that using multiple patterns per strip would make the width of the strip slightly vary from one pattern to another. This should be taken into consideration when proceeding with the next strip by placing appropriately this strip. In addition, adapting properly the control imposed by the inequality ≤ ⌈ / ⌉ would guarantee the continuation of the sensing coverage.
To cope with the dropping altitude, area flatness, and area vegetation (i.e., the variation of ℎ), we can assume that the variation ℎ is negligible in a given time interval [ 0 , 0 + ], for given positive . The above technique can be used to determine various deployment patterns and measurements of ℎ; however, one can notice that while the variation of the airplane speed only affects the -position of the landing points (by a maximum value ⋅ in the time interval [ 0 , 0 + ]), the variation of the altitude ℎ affects both the -position and -position of the landing points. Consequently, the landing positions can be adapted appropriately, assuming that the geography of the area to monitor is not so abrupt. Upon the occurrence of sudden changes in the geography of the area, the recomputation of the pattern may be triggered.
On the other hand, one can be convinced that to cope with the wind speed variation, a technique similar to the one used for ℎ can be implemented, provided that this variation is limited in time and intensity.
Coping with Fast Variable Parameters.
We consider in this subsection only the case where the wind intensity is varying under a uniform model. Assume now that sensors are to be dropped, from air, using the following sensor deployment pattern: Assume, in fact, thatduring interval of time [0, ], ( , ) is uniformly distributed inthe rectangle where and are positive numbers and 1 and 2 represent average values of the -intensity and -intensity of the wind. Figure 4: Random deployment on a strip.
Using (4), the dropping positions of the th sensor will occur in a rectangle , ≤ , centered at A simple computation of shows that The surface of is given by 4 (ℎ/ ) 2 and the actual landing is uniformly distributed in . Let be a strip of width and length axed on the points , ≤ , and denote in the sequel that The subsequent results estimate the quality of communication and sensing coverage of Front-Sense scheme based on the concept of deployment pattern. Figure 4 depicts the strip, the sensing range of sensor , and clarifies the conditions provided in the first result.
Proposition 2.
Consider the above notations and assume that the communication range and sensing range of the deployed sensors in satisfy the following conditions: Then, is totally sensed by the sensors, and the sensors are able to communicate with each other.
Proof. We first notice that the condition on shows that the distance between sensors and +1 is lower than , whatever is their landing points in squares and +1 . In fact, the largest distance between points in squares and +1 is lower than International Journal of Distributed Sensor Networks Now, let us consider the intersection of the sensing range of sensors and +1. The intersection is reduced to two points and given by Considering the condition on , one can deduce easily that and are the border of and thus that is completely sensed by the sensors.
The following result computes, in more general settings, the probability that the sensors are able to sense and communicate.
Theorem 3. Consider the above notations and assume that the communication range and sensing range of the deployed sensors in satisfy the following conditions: Then (1) the probability that the sensors are able to communicate with each other is given by where ( , ) is the area covered by +1 and the disk of radius centered on ( , , 0) in ; (2) the probability that main axis of is totally sensed by the sensors is given by where ( ) is the area covered by and the disk of radius centered on ( , 0, 0) segment +1 .
Proof. To demonstrate the first statement of the theorem, we only need to consider that the probability , that sensors and + 1 communicate together is given by since ( , , 0) can be everywhere in . This can be easily seen when noticing that ( , )/(4 ⋅ ) is the probability that sensors and + 1 can communicate together, knowing that sensor is placed on ( , , 0). Figure 4 depicts , the small square around ( , , 0), and ( , ).
The connectivity of the sensors in the strip is then equal to since, to get this, sensors and + 1 should be able to communicate for every ≤ − 1. Thus, Now let ( , 0, 0) be a point on the segment +1 , let and ( ) be the intersection of and the disk of radius centered at ( , 0, 0). Since ≤ ( /2), the point ( , 0, 0) can only be covered by a sensor in . The probability that a sensor in covers ( , 0, 0) is equal to ( )/4 . Thus, we have This concludes the theorem, since
Sensor Failure Detection and Prediction
Amongst the challenges that WSN-based monitoring applications are facing, one can mention the quality of service (QoS) provided by the network and the lifetime of the network. The latter depends largely on the energy consumption of sensors composing the network. Most important concerns of QoS include (a) the quality and the amount of the information that can be collected and analyzed about the observed objects; (b) the detection of sensor faults and the tolerance of the monitoring system to these faults; and (c) the quick recovery from a fault. In fact, a sensor fault can be defined as a deviation from the expected model of the function the sensor is assumed to perform. Faults can occur in different layers of WSN, but most commonly they occur at the physical layer, since sensors are most prone to malfunctioning and energy depletion. Major faults include calibration systematic faults, random faults from noise, energy exhaustion, and complete malfunctioning [8][9][10]. Calibration faults appear as drifts throughout the lifetime of a sensor node. Random noises induce random unwanted variations in the data reporting on the events detected by the sensors. Energy exhaustion occurs when batteries fail to provide the needed energy for detection and reporting.
Fault Classification.
Sensor faults can be defined through two overlapping viewpoints: data-centric and system-centric faults [8]. Faults in the first category can be observed in readings through the effect they produce in data. Faults in the second point of view are observed with physical malfunction, environment conditions' modifications, and inconsistencies of factors that are not expected to change throughout the lifetime of sensor.
The most common classes of sensors that have been used extensively in 2D area monitoring implement functions International Journal of Distributed Sensor Networks 7 for the sensing of temperature, humidity, light, chemical elements, and mobile objects. Major features for these sensors include sensor location, environment characteristics, system features (e.g., calibration, detection range, reliability, and noise), and date features (e.g., statistical measures, gradients, and distance from other readings).
In the sequel, we only consider the data-centric viewpoint since one can assume, for sensor-based monitoring, that all the features revealing faults can be deduced from the reports transmitted to the sink for analysis. In particular we distinguish the following features/faults. Temporal Gradients. We define a temporal gradient to be a rate of change, of a feature (or parameter), larger than expected over a short time window, despite what can be the value of the feature afterwards. In general, the determination of gradient is based on environmental context and models of the physical phenomenon to observe. It is a grouping of several data reports (or data samples) and not one isolated event. An example of gradient can be light intensity going through sudden and large changes.
Crossing Boundaries. A fault or the proximity of occurrence of a fault can be controlled by numerical metrics whose values are crossing a threshold. The crossing is typically an isolated sample or a sensor that significantly deviates from their expected temporal or spatial models. In particular, a temperature exceeding a high value for a point in a forest may reveal the occurrence of fire around that point. On the other hand, the level of remaining energy reported for a battery, powering a sensor, may show that the battery is running out of charge.
Zero Variations. Some faults can be defined as a series of data values (reported to the sink) that experiences zero or almost zero variation for a period of time greater than expected. Thus, a zero variation fault shows a constant value for a large amount of successive reported data. This value can be located outside, or within, the range of the expected values of the observed parameter. In particular, it can be either very high or very low.
High Noise. Noise is commonly expected in sensor data communication. Nonetheless, an abnormally high amount of noise may be an indication of a sensor problem. We define a noise fault to be sensor data exhibiting an unexpectedly high amount of variation. In fact, high noise may be due to a hardware failure or low energy batteries. Despite the noise, noisy data may still provide information regarding the phenomenon under monitoring.
Missing Data. A missing data requested by the monitoring protocol or a specific request sent by the sink may be a sign of fault. Moreover, missing periodic data for a relative period of time as expected can reveal a faulty sensor. Often, missing data is caused by the sensor generating the expected data of intermediate sensors in charge of relaying the data.
Fault Detection.
Let us consider only the detection of sensor faults in WSN-based monitoring applications of 2D areas. Various works have addressed this issue assuming that a large set of hypotheses can be made on the sensor network and the data for sensor fault detection [9]. Among the major assumptions, we consider the following. First, all sensor data should be forwarded to a central node (or sink), where the event processing is performed. Second, all data received by the sink is not corrupted by any communication error. Third, no security attack is targeting the data flowing through the network, its components, or its sensors when fault is occurring.
We also consider the following two requirements to be fulfilled.
R1: an event detected by a sensor will be detected in the near future, either by a neighboring sensor or by the same sensor. In the latter, the attached data should be different.
R2
: sensors reporting an event should be appropriately identified and correctly localized and the localization errors are bounded.
Sensors deployed for monitoring applications using our deployment scheme comply with these requirements. In fact, events collected for the applications we consider are related to moving objects in a 2D area (e.g., fire in a forest and intruders of the frontier line). Collected events are timestamped and include varying data (such as temperature or intruder position). Let us notice however that the location of a sensor is determined by the deployment pattern used to deploy it. The location is nothing but the center, , of the square where it falls down. The location error is controlled by the size of this square.
Let us now present the main two detection methods of sensor faults, as denoted by variance-based detection and profile-based detection. While the fist technique builds for resource consumption and predicts time of their exhaustion, the second handles the variance of a feature characterizing a given fault among the aforementioned list of faults. Both methods follow the same approach for fault detection: they first characterize the "normal" behavior of sensor reported data; then they identify the occurrence of significant deviations as faults and finally give some predictions on the related fault. However, for the sake of simplicity we consider only one type of fault for each method. In particular, we consider energy depletion and the temporal gradient.
Loss of Energy. Let be an integer (equal to 3 or 4, in general).
Let us assume that a sensor in the WSN has to send a message , ≤ , any time its energy reaches a level equal to ( / ) , where is the initial energy level of the sensor. The message is time-stamped and contains the identity of the sensor and its location. Upon receipt, the central station verifies the accuracy of the attached data, computes the depletion time, and estimates the remaining lifetime of the sensor. If = −1, then a request for the replacement of the failing sensor is sent.
International Journal of Distributed Sensor Networks
The following section discusses the rules followed to monitor the energy consumption and the prediction of loss in the case of border surveillance.
Temporal Gradient. Let us assume selected a window , a standard deviation , and threshold Th, on the arrival of a message reporting on an event observed by a sensor . The sink node computes the standard deviation of sample readings within a window . If it is above a threshold Th, the samples are compared with the simples of the neighboring sensors. If no correlation can be established in a corrected manner, the selection of , , and Th is performed through rules involving the nature of the 2D area, the application using the WSN, and the feature characterizing the fault.
Variation techniques have been developed to provide good estimation of the different parameters involved in the detection of faults [11]. While our method uses heuristics for detecting and identifying the fault types and exploits statistical correlations between sensor measurements to generate estimates for the sensed phenomenon based on the measurements of the same phenomenon at other sensors and reduce false positives, other techniques are time-seriesanalysis-based or learning-based [12] techniques.
Adapting the Front-Sense Scheme for Monitoring Uses
Various monitoring applications may benefit from the use of our deployment scheme. Among these applications, we consider in this section two special examples, namely, the border surveillance and wildfire sensing. We, first, present the architectural issues of the network that will be used for these applications. We will present the hierarchy of the network and the major functionalities. The network is formed by three layers built on the following three types of node.
(i) The sensor nodes (SNs) constitute the first hierarchical level. They are in charge of detecting the occurrence of an event of importance to the monitoring application. They also collaborate to relay the information gathered to the next layer in an optimized manner. SNs are assumed to know approximately their location information.
(ii) The relaying nodes (RNs) constitute the second hierarchical layer of the network. The RNs main task is to collect the data gathered by the SNs and collaborate to relay it to the next layer. They may include intelligent functions to help in handling energy consumption, coverage estimation, and fault detection.
(iii) The analysis nodes (ANs) form the third layer. Their function is to receive the events detected at first layer and correlate them, analyze and predict failures, operate object tracking, and coordinate actions.
Country Border Surveillance.
A country border surveillance application monitors either an area on the country border or a borderline. This type of applications is becoming a serious concern due to the increase of the risks of illegal border crossings aiming at controlling unauthorized importation of goods or terrorism actions. Border surveillance can be performed using specialized WSNs appropriately deployed. Typically, WSNs within these applications are interconnected and have to report on any event related to crossings. For this, they should provide efficient monitoring and a certain coverage level of the 2D area (or line) of interest, since the coverage can be total (when the 2D area is completely sensed) or partial (when, for example, it is done through several thick lines in the 2D area), [13,14]. The deployment scheme described in the previous sections can be used to provide total or partial coverage of crossborder actions, using sensor capable of detecting individual and animal motions. Indeed, total coverage of the network first and second layers in the 2D area can be realized using RDAM as operating according to Figure 2. A partial coverage can be achieved by guaranteeing the surveillance of several lines parallel to the border line. In both cases, the altitude and speed of the airplane, the wind speed, the sensing range, and the communication range of the nodes can be selected in way that the probability of total coverage can be as high as needed.
Rules to handle sensor faults in border surveillance handle mainly energy depletion, sensor location, and location of detected object. Rules include the following.
Energy Level Reporting Rule. Let 0 ≤ and 0 be the initial level of energy of a sensor; then the sensor should send a message every time its energy goes down to = ( / ) 0 , ≤ , indicating the time noticing this fact. Accordingly, the sink node receiving the messages adapts the lifetime of the battery and the remaining lifetime .
One should have > +1 ; otherwise, a fault is noticed.
Energy Threshold Crossing Rule. If the reported level of energy is lower than a threshold 0 , then the sink deduces that the sensor is entering a fault state.
Replacement Rule. If the reported level of energy is lower than a given threshold 1 , then the sink deduces that the sensor is getting to a faulty state soon. It then should trigger a sensor replacement procedure.
Object Location Rule. Knowing the limited speed of the detected object and that the positions of the deployed sensors are known with reduced errors, then important variations of observed object positions reveal faulty positioning function on a sensor.
Wildfire Sensing.
Monitoring the location and speed of advance of the fire front wave is a critical task to fight against wildfire and help in optimizing the allocation of firefighting resources while maintaining safety of the firefighters. A WSNbased fire monitoring system is a 2-area deployed fire alarm system that is able to remotely report the location of its components and the presence of a fire in their vicinity. Indeed, a wildfire can be monitored using multiple sensors that are able to detect smoke, carbon monoxide, methyl chloride, rapid temperature increases, windy speed, and other physical phenomena related to the occurrence and propagation of a fire. The use of sensors reduces the likelihood of false alarms without excessive complexity to the WSN. The data gathered is typically transmitted by radio in real time to firefighters equipped with radio receivers or to a sink, called the fire command center. The sensor can be dropped from air or be manually placed by firefighters over a predefined 2D area.
Monitoring wildfire presents the problem of wide covered areas requiring the transmission of a large amount of information through the network with the risk of significant energy consumption and hence limiting the lifetime of the network. Particularly, energy is crucial for the wildfire sensing because of the complexity of maintenance of the sensors and the replacement of empty batteries due to the difficulty of access to these sensors, in general. Another problem needs to be tackled by these systems; it is the fading effect. Due to the presence of vegetation, this leads to important problems such as the shadowing phenomenon.
An efficient wildfire monitoring should propose an optimized design capable of providing energy conservation, consideration of the quality of transmission, and spatial localization techniques for choosing the routing protocol. A solution based on DRAM can be built using the aforementioned 3-layer architecture, where we assume that detection is not made on image processing to detect energy related faults [15].
Fault detection can be done using the aforementioned rules for the faults related to energy. A library of rules for reporting faults can be built on the available models developed for the evolution, propagation, of smoke and temperature. These rules use contradictions or repeated errors observed on the reported information by neighboring sensors. Examples of rules include the following rules dealing with temperature.
Irregular Variation of Temperature Rule. A sensor, , detecting that the temperature has exceeded a threshold starts sending messages every seconds to report on the temperature and its neighbors are requested to report on the temperature in their vicinity. If the reported values are not coherent then sensor is experiencing a fault. For this, coherence cases can be distinguished.
Regular Variation of Temperature Rule.
A sensor, , detecting that the temperature is increasing and has exceeded a value lower than the sensor breaking temperature ( < ) starts sending messages every time the temperature exceeds = ( / )( − ), for a prespecified . Using these messages, the sink node(s) is able to estimate the remaining time to reach .
Deployment-Based Repairing of Sensor Failures
In this section, we develop techniques using our deployment scheme to plan the replacement of faulty sensors or sensors on their way to a faulty state. In addition, we will discuss techniques allowing reactive actions to reduce the loss of coverage (sensing or communication) due to the occurrence of faults.
Proactive Sensor Replacement.
Replacement of sensors for a given monitoring application based on DRAM is built using a 3-phase process in charge of (a) the detection of faulty sensors; (b) the prediction of the time of occurrence of faults; and (c) the computation of new deployment patterns and instant of sensor dropping. The detection of faults can be achieved through a library of rules, similar to those discussed previously, taking into account the nature of the activity to monitor and the models governing the evolution of fault related parameters. The prediction of fault occurrence is performed based on two things, the collection of messages helping in analyzing the temporal evolution to fault and a theoretical model governing evolution of related parameters, if any. The generation of the first message is often triggered when a threshold is reached, while the following messages are sent, by the concerned sensor or by its neighbors, on a time-based or event-based manner.
Upon receipt of the messages, the sink node can configure the related model, if any, and adapt to the reality of the situation to deduce the time to live before the occurrence of the fault. The actions in this step have been discussed in the previous section in the case of energy loss and temperature evolution.
On the other hand, the computation of new deployment patterns and instant of sensor dropping can be operated as follows: let ( , ⃗ , ⃗ , , ) be a deployment pattern that has been used to deploy sensor and let ( ) ≤ be the reference landing locations of the sensors, as computed in Section 2. Assume now that among the sensors are faulty and need to be replaced and let 1 , . . . , be the reference position of the faulty sensors. Then, a replacement deployment pattern, denoted by ( , → , → , , Σ), should be used to plan the replacement. The components of this pattern are defined and computed as follows: (i) is the first dropping position of the replacing sensors; (ii) → is the speed of the airplane during the dropping period; (iii) → is wind intensity during deployment, assuming that it varies a little; (iv) Σ = { 1 , . . . , } is a set of time values separating the dropping times of the successive sensors.
Assume that the period of dropping is selected so that the altitude ℎ (in ) and the speed → can be chosen in a way allowing the following conditions.
(1) The reference lending position of sensor , , ≤ , is very close to .
10
International Journal of Distributed Sensor Networks (2) The actual landing of sensor 1, 1 , is a rectangle close to the rectangle delimiting the actual landing point to 1 .
(3) The actual landing of sensor , , 2 ≤ ≤ , is a rectangle close to the rectangle delimiting the actual landing point to .
(4) , ≤ , is equal to the distance separating and ( +1) divided by the speed of the airplane. (5) The dropping of the new sensors before a given time is generally linked to the predicted times of the sensors going to fail.
The feasibility of the above conditions is easy to address, since they involve very simple constraints.
Sensing Coverage Maintenance.
It is clear that when a sensor goes down or is eliminated from the monitoring WSN supporting a 2D monitoring application, then the coverage quality is reduced, since a subarea of the 2D area under monitoring might not be sensed properly or some sensors might be disconnected.
To address these issues one can perform the following tasks.
Proactive Replacement. This task assumes that instant of failure of the sensors to replace can be predicted for a horizon ℎ (in time units). Then, a replacement procedure can be triggered to complete replacement of a set of sensors going to fail before one of can become faulty. However, this task may trigger a large number of replacement procedures, when the area is large and time between successive faults is reduced (due to limited battery lifetime, for instance). In addition, it is clear that the time it takes to replace the failing sensing and the rate of failing sensors per unit of time may have an effect on the quality of lifetime improvement. To highlight this let us, first, discuss the definitions of lifetime.
Two common lifetime definitions can be found in the literature [16]. The first considers the time when the first sensor in the network fails (i.e., dies or is out of energy). The second lifetime definition considers the time at which a certain percentage of total nodes run out of energy. This lifetime definition is widely utilized in general purpose wireless sensor networks. These definitions apply well for sensors deployed in a region to monitor some physical phenomenon occurring anywhere in this area. One can easily be convinced that utilizing proactive replacement will increase significantly the lifetime wireless sensor network, since the first sensors to fail will be replaced before energy shortage.
Increase of Sensing Range Temporarily. This task is executed when abrupt faults occur and reduce significantly the probability of sensing coverage. This task implements a 2-step process. In the first step, the probability of coverage is recomputed and compared to a prespecified threshold. The second step takes place when the computed probability is lower than the threshold. In that case the sensors in the vicinity of the faulty node are commanded to increase + + + + + + + + temporarily their so that the new probability becomes higher than the threshold. To show how the coverage is recomputed we consider the expression demonstrated in Theorem 3, where we assume only one faulty sensor. We then locate the terms involving the faulty sensor. We recompute them taking into consideration the remaining involved sensors. Then, we reinsert the modified terms to get the final result. The recomputation can be seen through Figure 5, which shows that the area close to the faulty sensor should be deleted in the computation of , , assuming that the sensor landing close to is faulty.
This technique, however, adds complexity to the coverage control, introduces some irregularities to mathematical model controlling the deployment, and may impact the lifetime of the wireless sensor network since the unbalanced distribution of ranges often causes the problem of energy hole, which may induce energy exhaustion of the sensor nodes in the hole region faster than the nodes in other regions [17,18]. On the other hand, one can agree that the replacement strategy along with a good balance between the time needed to replace failing sensors and the horizon of prediction would compensate such a possible reduction.
Simulation
In this section we show the performance of our system by discussing the variation of the radio and sensing coverage probability, in a first step, and by discussing the impact of the replacement strategy, in a second step.
Radio and Sensing Range Modeling and Simulation.
With no loss of generality, we only consider a monitored domain reduced to a thick strip, as depicted by Figure 6. The strip is International Journal of Distributed Sensor Networks 11 overlapped by three zones (or squares) where three sensors can land after they are dropped from air. A sensor is assumed to land on discrete positions (separated by meters). The parameters , , and are assumed to take their values in the following, respectively: In addition, we assume that = 12, 5 m and the distance between two successive square centers is equal to 70 m. In particular, when = 4, then 196 discrete positions can distinguished in the strip. Each sensor can land on one of 56 possible positions.
The simulation is performed as follows: each sensor is dropped randomly on the discrete positions in the related square. The probabilities of sensing and radio coverage are then computed. The drop operation is repeated multiple times and the average probabilities are computed. The resulting values of the mean values are plotted by varying , , and . The width of the thick line on which the sensing coverage is obtained is also integrated to analyze its effects on the collected results. In addition, the collected results are compared to those provided by a full random dropping scheme of the three sensors in the strip. Figure 7 shows the variation of the probability of radio connectivity for different values of , for a fixed value of the discretization step ( = 2m). One can notice in this figure that Front-Sense scheme performs better than the full random scheme when becomes higher than − and that the full scheme performs better when is smaller. The average increase observed in this figure is higher than 16%. This feature can be explained by the fact that when increases, more space is covered by radio range of the sensor and more opportunity is given to connectivity, when the sensor is assigned to a square.
In addition, this shows that when is higher than 2 , the connectivity is almost guaranteed. Moreover, when is lower than − Front-Sense scheme performs lower than the traditional random strategy, since it confines the sensor in smaller set of discrete positions. Figure 8 depicts the variation of the probability of radio coverage, with respect to the varying communication range , for different values of the intersquare distance − 2 . One can deduce from this figure that Front-Sense scheme performance decreases when − 2 increases and that it increases when increases. It approaches 1 when is higher than + 2 . This feature can be explained by the fact that when increases, more space can be sensed in the three squares. In addition, when the distance between blocks increases, the number of positions where the sensors cannot deploy on becomes important, the distance between the positions of two neighboring sensors increases, and the connectivity is reduced. Figure 9 depicts the variation of the probability of sensing coverage for different values of . Different values of the thick strip width are considered. In particular, one can notice that the probability of sensing coverage gets higher when the strip is reduced to a line ( = 0). It naturally increases with the growth of , since more space in the strip can be covered by the disk located on the sensors. It decreases when the width increases, since one can prove that the points on the main axis of the strip are the most covered by the sensors. In addition, the probability decrease can be more significant once the intersquare distance is high.
Let us notice, finally, that the effect of the discretization step variations on the probability values in the simulation is not significant when is sufficiently small.
Impact on Network
Lifetime. Let us now evaluate the effect of the sensors' replacement strategy, proposed in Section 6.2, on the network lifetime and assume that the domain to monitor is a thick strip, containing two lines of sensor drawn along the length of strip. We assume that the lines are 3 Km long and that 30 sensors are deployed on each line uniformly (so that they form squares of side 100 m). Every second square of sensors is assumed to contain in its center a DRN to which the sensors of the square report. Thus, one can see that the points of the two lines are fully covered by the sensors.
While the two definitions of lifetime, discussed in Section 6.2, apply to WSN-based monitoring systems in general, we believe that these definitions do not apply to WSN-based border surveillance systems where the objective of surveillance is not only to locate the individuals crossing the border but also to track them until crossing completion. For this, we provide a third definition that considers the time of failure of the first set of sensors allowing the crossing of an intruder without being detected. Applied to our simulation model, this definition considers the time where the first pair of sensors occurring on different lines and facing each other fail (or get out of energy). Figure 10 depicts the variation of the network lifetime.
To simulate the impact of the replacement strategy on sensors' lifetime, we assumed the following two issues.
(1) Lifetime modeling: when the sensor battery is fully charged, the sensor can send a maximum of 1000 packets to report detected events. Each sensor cannot send more than one packet per unit of time.
(2) Activity measuring: when not sending a message, the sensor performs normal functioning and consumes little energy. Normal activity during one time unit is assumed to be equal to 1/100 of the required energy to send a message reporting on crossing event.
During simulation two parameters have been varied to assess the impact.
(3) The replacement time: this is the time needed to deploy a sensor showing that it is getting out of energy. We varied the values taken by this parameter in the interval [150, 500], assuming that 100 s corresponds to the horizon of energy shortage. This means that when the remaining energy of a sensor reaches 1/10 of its initial value, a request is sent to replace the sensor.
(4) The number of targets crossing the monitored area: we considered three rates of targets attempting to cross the monitored area (from one side to the other). They are, respectively, equal to 2, 4, and 6 attempts per 10 time units.
We conducted simulations to measure the network lifetime and the number of replaced sensors to assess the effect of replacement strategy. The results of these simulations are represented in Figure 10.
Let us first notice that if the time to replace a sensor is lower than the time of shortage prediction, the simulation results show that the sensors are all the time replaced. That is why the plotted results start for replacement duration higher than 100 s. Two main observations can be made from the figure.
(1) When the replacement time grows from 150 s to 500 s, the network lifetime decreases and the number of replaced sensors becomes smaller. In particular, the number of replaced sensors reaches 40% in the case where one attempt is performed every 10 time slots and the time to replace is 150 s. Indeed, when the time to replace increases, the probability of the sensor requesting replacement goes down before it is replaced gets higher. (2) When more crossing attempts are performed per unit of time, the network lifetime gets smaller, for given value of the replacement time. In fact, when more attempts are performed, more sensors will report and more requests for replacement will be generated and the probability that a request is not answered will get more important.
Let us finally notice that if the number of sensing lines in the monitored area increases, then one can be convinced International Journal of Distributed Sensor Networks 13 that the lifetime of the network will increase. This feature comes from the fact that more sensors (belonging to different lines) would go to energy shortage before an undetected path occurs.
Conclusion
This paper presents a sensor controlled random deployment scheme to monitor bounded 2-dimensional areas while providing mathematical formulations to control the sensor and radio coverage quality it allows. In addition, techniques to detect and repair sensor failures are added to provide system robustness for a large set of WSN-based applications and increase network lifetime. In particular, expressions are set up to define the probability of total coverage when the environment characteristics are varying while taking into consideration real deployment parameters. The cases of two applications, the border surveillance and the wildfire sensing, are considered in some details to show that the approach is generic and that strategies can be conducted and assessed. | 12,315.8 | 2014-04-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Assessing adaptive and plastic responses in growth and functional traits in a 10‐year‐old common garden experiment with pedunculate oak (Quercus robur L.) suggests that directional selection can drive climatic adaptation
Abstract Understanding how tree species will respond to a future climate requires reliable and quantitative estimates of intra‐specific variation under current climate conditions. We studied three 10‐year‐old common garden experiments established across a rainfall and drought gradient planted with nearly 10,000 pedunculate oak (Quercus robur L.) trees from ten provenances with known family structure. We aimed at disentangling adaptive and plastic responses for growth (height and diameter at breast height) as well as for leaf and wood functional traits related to adaptation to dry environments. We used restricted maximum likelihood approaches to assess additive genetic variation expressed as narrow‐sense heritability (h2), quantitative trait differentiation among provenances (QST), and genotype‐by‐environment interactions (GxE). We found strong and significant patterns of local adaptation in growth in all three common gardens, suggesting that transfer of seed material should not exceed a climatic distance of approximately 1°C under current climatic conditions, while transfer along precipitation gradients seems to be less stringent. Moreover, heritability reached 0.64 for tree height and 0.67 for dbh at the dry margin of the testing spectrum, suggesting significant additive genetic variation of potential use for future selection and tree breeding. GxE interactions in growth were significant and explained less phenotypic variation than origin of seed source (4% versus 10%). Functional trait variation among provenances was partly related to drought regimes at provenances origins but had moderate explanatory power for growth. We conclude that directional selection, either naturally or through breeding, is the most likely and feasible outcome for pedunculate oak to adapt to warmer and drier climate conditions in the future.
| INTRODUC TI ON
Intra-specific trait variation (ITV) is an important feature in evolutionary biology as it is the result of several evolutionary forces that have worked on phenotypic variation in the past and provides the raw material for ongoing adaptation of species to various selective forces (Alberto et al., 2013;Benito Garzón, Alía, Robson, & Zavala, 2011;Bolnick et al., 2011). ITV comprises several sources of evolutionary drivers, including long-term selection, historic gene flow, and random genetic drift, which have left their particular imprints in phenotypes and genotypes (Albert, Grassein, Schurr, Vieilledent, & Violle, 2011). Moreover, given that individuals and populations are also characterized by the ability to change their phenotype depending on the environment they are exposed to, plastic responses and, in particular, genetic variation in plasticity (GxE) constitutes another important source of ITV. The latter is of notable importance for sessile organisms such as trees, since their natural migration velocity is certainly too slow to track their ecological optimum when environmental conditions change rapidly as expected under climate change (Aitken, Yeaman, Holliday, Wang, & Curtis-McLane, 2008;Bussotti, Pollastrini, Holland, & Brueggemann, 2015;Ghalambor, McKay, Carroll, & Reznick, 2007;Nicotra et al., 2010;Via & Lande, 1985). Disentangling adaptive and plastic responses in trees is of particular importance for climate adaptation and adaptive forest management, as well as for defining conservation goals for rear-edge tree populations (i.e., populations at the trailing edge of a distribution) under climate change, because both will have different implications for future ecosystem management (e.g., Aitken & Bemmels, 2016;Fady et al., 2016). The presence of adaptive variation can mean that trait variation is heritable and can therefore be passed on from one generation to the next, but also that populations probably experienced spatially varying selection in the past and therefore show divergence in their mean trait values in space. High heritability may suggest that breeding programs for more resilient genotypes are desirable (Harfouche et al., 2012), whereas strong quantitative trait differentiation among populations (e.g., Q ST ) implies that climatically preadapted genotypes exist and may be utilized in assisted gene flow and assisted migration schemes (Aitken & Bemmels, 2016). Different approaches have been used to investigate adaptive or plastic responses in plants such as studying trait variation across landscapes (e.g., Porth et al., 2015) and establishing common garden experiments, where ecotypes or provenances of the same species grow under equal environmental conditions (e.g., Sáenz-Romero et al., 2017). When replicated across several contrasting environments, common garden experiments can assess adaptive and plastic responses at the same time, assuming that a known family structure exists among trees within provenances (Matesanz & Valladares, 2014). Here, we analyzed data from three common garden experiments in which nearly 10,000 trees with known pedigree and provenance were planted across a rainfall gradient. Trees were analyzed for growth (height and diameter at breast height 10 years after planting) as well as for a number of functional traits with known importance for drought adaptation to assess the relative contributions of the various evolutionary drivers outlined above. We studied pedunculate oak (Quercus robur L.), a widespread wind-pollinated temperate forest tree in Europe that can reach ages of up to 800 years and that has considerable importance for the forest industry as well as for forest ecosystem functions in Europe (Ducousso & Bordacs, 2004). Pedunculate oak is a largely outcrossing tree species that has survived the last glacial maximum within three big refugia in the Balkan peninsula, southern Italy, and Iberia (Petit et al., 2002) and occurs largely sympatric with its closely related congener sessile oak (Quercus petraea) resulting in contact zones where inter-specific gene flow is realized (Petit, Bodénès, Ducousso, Roussel, & Kremer, 2004). Recent studies showed that Q. petraea exhibits significant imprints of local adaptation across the range of its distribution, that is, highest fitness was achieved where the variation between growth and provenance climate was small, and that the climate at seed origin explains a significant part of the phenotypic variation (Sáenz-Romero et al., 2017). Here, we test whether such a pattern holds true for its closely related congener on a smaller geographic scale by integrating functional traits with known importance for drought adaptation. Additionally, our study goes beyond the provenance level and takes into account putative additive variance and plasticity attributable to the effects of families (i.e., mother trees).
This permits to disentangle three sources of variation, that is, provenance-adaptive, single-tree-adaptive, and GxE, all of which have different implications for future management in a changing climate. For example, current national seed transfer guidelines of forest reproductive material in Europe still recommend the use of local seed sources following a "local is best" paradigm (e.g., Konnert et al., 2015), even though seed sources from warmer and probably drier regions might help to mitigate consequences of ongoing warming and progressively drier vegetation periods in the near future.
We hypothesize that populations of pedunculate oak exhibit adaptation to the local climate so that growth would decrease from the local maximum with increasing climatic distance from the seed source (Savolainen, Pyhäjärvi, & Knürr, 2007). Additionally, we hypothesize that heritability in growth traits is significant within and across provenances of pedunculate oak and can be potentially utilized in tree breeding. Finally, we tested whether functional traits in leaves and wood that are known to be involved in drought adaptation K E Y W O R D S adaptive plasticity, functional traits, genotype-by-environment interactions, heritability, local adaptation, tree growth can be used to explain growth differences among or within populations and may be used as candidates for selecting more resilient trees in the wild or in large-scale progeny tests.
| Plant material
The trees studied are part of a national provenance test series, in which several provenances of pedunculate oak are tested across five common garden experiments. For the current study, a subset of three common gardens and ten provenances were selected to provide a suitable bioclimatic gradient for both provenances and testing sites (Table 1). Briefly, the three test sites (Wels, Weyerburg, and Weistrach) belong to three different ecozones according to the Austrian forest seed zone classification (Kilian, Müller, & Starlinger, 1994) and follow an annual rainfall and continentality gradient from 590 mm (Weyerburg, hereafter called the "dry site") to 770 mm (Wels, "moist site"), with summer drought periods increasing from moist to dry sites. Seeds from 22 mother trees were collected in each of the ten provenances and sown in 2006 in an experimental nursery in Vienna. Mother trees were collected from registered local seed stands in Austria, Slovenia, Croatia, and Czech Republic.
Plants were brought to the testing sites as 1-year-old seedlings in planting containers and were planted in a 2 × 1 m distance matrix with a total of 110 plants in each provenance cell. Each provenance was replicated three times in each common garden with each of the 22 mother trees being randomly represented five times in each cell.
To account for family-level variation, a mother tree identifier matrix was created for identifying families in each of the cells. In total, 9,900 trees were planted and grown over the observation period of 10 years in the three common gardens ( Figure S1). Testing sites were regularly visited in the first years to remove competing vegetation (e.g., grasses and blackberry) in order to keep the seedling survival rate homogenous among sites, but no thinning or any other silvicultural treatment was applied in the first ten years.
| Climate data
Provenances and testing sites were climatically characterized by using long-term climate variables that were derived from a 10 × 10 km downscaled EUROCORDEX climate dataset (Jacob et al., 2014). Briefly, climate data were spatially downscaled to a 1km 2 resolution by applying the method described in Hamann, Wang, Spittlehouse, and Murdock (2013) and which is available in the ClimateEU database (available at http://tinyu rl.com/Clima teEU). The downscaled climate data were validated with observation data from the E-obs dataset (Klok & Tank, 2009) with a correlation coefficient of 0.93 (Chakraborty, 2019). We used a subset of 13 climate variables (Table S1) to assign provenances to climatic clusters with similar long-term growing conditions. For this, the elbow criterion for selecting the most likely number of clusters by visually inspecting the screeplot after performing a principal component analysis was applied. Analysis was carried out in R (R Development Core Team, 2008), and functions prcomp and autoplot from the cluster package were used for visualization (Maechler, Rousseeuw, Struyf, Hubert, & Hornik, 2019). Ten large leaves per branch were rehydrated for 24 h between wet paper towels in a dark room at 4°C. After removing the petioles, leaves were carefully blotted dry, saturation weight was measured with an electronic balance to 0.1 mg, and leaves were scanned with a desktop scanner at 150 dpi resolution and dried for 72 hr at 60°C. Specific leaf area (SLA, g/cm 2 ) was calculated as leaf dry mass/leaf area and leaf dry matter content (LDMC, g/g) as dry weight/ saturation weight.
| Traits
Subsamples of all ten leaves per branch (9 mm disk) were collected for stable carbon isotope analysis, a proxy for photosynthetic water-use efficiency integrated over the growth and expansion of the leaf (Farquhar, Ehleringer, & Hubick, 1989). Subsamples were pooled, ground to a fine powder in a ball mill (TissueLyser 2, Qiagen, USA) and analyzed in an isotope ratio mass spectrometer (Delta V Advantage; Thermo Scientific, USA). The 13C:12C ratio and C content of the plant samples were measured by elemental analyser-isotopic ratio mass spectrometer (EA-IRMS) with a FlashEA 1,112 connected to an IRMS Delta V advantage via a Conflow IV (Thermo Fisher Scientific, Bremen, Germany). C content was calibrated using a certified acetanilide standard. Stable isotope referencing was done with working standards referenced against the international certified standards NBS 22, IAEA-CH-6, IAEA-600, IAEA-NO-3, and IAEA-N-1. The isotopic composition of C is reported in delta (δ) notation relative to the Vienna Pee Dee Belemnite (VPDB). The precision of the measurements is ≤0.2%.
One additional leaf per sampled tree was rehydrated for sto- Samples were then rinsed in water, bleached in 14% NaClO, rinsed in water, stained in 5% safranin red, rinsed in water, and destained in 50% ethanol (each step lasted 5 min). Veins were imaged using the same microscope as above (pixel size: 0.582 µm), capturing at least 8 mm 2 of leaf area through multiple stitched images. Vein density was measured in ImageJ by thresholding the stitched image to create a binary image of the veins, then generating a skeleton of the veins and analyzing the skeleton using BoneJ's Analyse Skeleton function (Doube et al., 2010). This semi-automated method generated comparable results compared to hand measurement (data not shown).
| Testing for general patterns of local adaptation
First, we fitted a general linear model between growth traits (dbh and height) and distance between trial site climate and provenance climate following the quadratic regression model equation where Y is dbh or height of the ith provenance in a common garden, β 0 is the intercept, β 1 and β 2 are regression coefficients, D is the transfer (1) Y i = β 0 +β 1 D +β 2 D 2 + e i distance between site climate and seed source climate, and e is the residual variance. For calculating D, we used the differences between provenance origin and trial site location for mean annual temperature (MAT) and mean annual precipitation (MAP), respectively (i.e., positive D values indicate that provenances were transferred into colder/drier environments, negative values indicate that provenances were transferred to warmer/wetter conditions). Both climatic variables (MAT and MAP) had been shown to explain a considerable amount of variation in tree growth in earlier studies (e.g., Chakraborty et al., 2016;Wang, Hamann, Yanchuk, O'neill, & Aitken, 2006). This was done separately for each of the three test sites.
| Trait divergence among provenances and Q ST
We estimated whether provenances or climatic clusters are genetically differentiated in their growth (height and dbh) by applying Q ST , a measure of quantitative genetic differentiation that estimates the proportion of genetic variation in a trait among populations relative to the total amount of variation (Leinonen, McCairns, O'Hara, & Merilä, 2013). Q ST is similar to the widely used F ST (Wright, 1949) but takes into account only quantitative trait information without allelic variation at specific loci as in the case of F ST . Q ST was calculated as where σ 2 Pop is the variance among provenances or clusters, respectively, and σ 2 a is the additive genetic variance of a trait obtained from the relatedness among half-siblings of the same mother tree. We used the method developed by Gilbert and Whitlock (2015), which is implemented in the Q ST F ST Comp package in R (github.com/kjgilbert/ QstFstComp), but calculated only Q ST and its 95% confidence intervals without comparing Q ST to F ST , since no neutral diploid markers were available in this study. Q ST was determined across all sites as well as separately for each site.
| Additive genetic component and heritability
We estimated additive genetic effects in growth traits by calculating the narrow-sense heritability (h 2 ) across and within provenances from a mixed model of the form: for across-provenance heritability within sites (h 2 site , n=3330) and for within-provenance heritability within sites (h 2 prov , n = 330) with β being a vector of fixed effects (intercept), and p, b, and a random vectors of provenance (climatic cluster), block, and additive genetic effects, respectively. X and Z are incidence matrices assigning fixed and random effects to phenotypic observations in vector Y. Provenance (or climatic cluster) and block effects follow x ~ N(0,σ p,b 2 ) with σ p , b 2 being the Provenance (Cluster), or block variance, respectively. Individualtree additive genetic effects follow a ~ N(0, σ a 2 A) where σ a 2 is the additive variance and A the relationship matrix derived from a half-sib family structure of open-pollinated mother trees. In this model, we assumed that none of the progenies were full-siblings, since previous studies have shown that the proportion of full-siblings in wind-pollinated trees sampled in forest stands is usually very small and has only little influence on the estimated additive genetic variance (e.g., Bacilieri, Ducousso, Petit, & Kremer, 1996;Kjaer, McKinney, Nielsen, Hansen, & Hansen, 2012). Variance components were calculated using an animal model approach (Henderson, 1984;Wilson et al., 2010). The narrow-sense heritability was calculated as where σ 2 a and σ 2 e are the additive and residual variances, respectively. We employed the R package BreedR (version 0.12-4; github.com/famuv ie/breedR) which uses a restricted maximum likelihood (REML)-based variance estimator procedure allowing to infer random genetic effects at individual level. We used the average information matrix (function ai), which simulates standard errors from the asymptotic Gaussian joint sampling distribution, to estimate mean and standard errors of variance components. We considered the heritability estimate to be significant, when the lower bound of the 95% confidence interval for heritability was greater than 0.
| Variation in phenotypic plasticity of growth traits (GxE)
To test for variation in phenotypic plasticity (i.e., genotype-by-environment interactions) and to estimate its contribution to overall phenotypic variation, we formulated a mixed model as follows for height and dbh: With Y ijklm being the phenotype of the mth tree, belonging to the lth block nested within the ith site (B l (S i )), belonging to the kth mother tree nested within provenance j (M k (P j ), originating from provenance j (P j ) and growing in the ith trial site (S i ). e ijklm is a random error term, and S i P j and S i M k (P j ) are the crossed genotype × environment interaction terms separated for provenance-by-site and family-by-site interactions, respectively. Variance components were expressed as ratios relative to the total phenotypic variation as a percentage of variance explained by the single equation terms, and for this purpose only, all terms were treated as random effects in the model following x ~ N(0,σ x 2 ) with × being the single predictors, respectively. In order to test whether variation in plasticity was uniform among families, we used BLUPs (best linear unbiased predictions) for the GxE family-interaction term as predicted by the model in equation (6) and calculated Y ijklm = β 0 +β 1 S i +β 2 P j +β 3 M k P j +β 4 B 1 S i +β 5 S i P j +β 6 S i M k P j + e ijklm the ecovalence (i.e., stability of families across environments) according to Wricke (1962) as follows: where X ij is the observed trait of family i in environment j, X ̅ i. is the mean trait across families, X ̅ .j is the mean trait across environments, and X ̅ .. is the grand mean. Ecovalence was expressed as ratio between family sum of squares (ss fam ) divided by the total sum of squares across all families, where higher values indicate more plastic genotypes. We used an arbitrary threshold of 0.05 to define extraordinarily plastic genotypes and assigned extraordinary families to provenances in order to see whether they occur more frequently in some environments compared to others. Since our design does not explicitly allow for testing whether plasticity is heritable and therefore adaptive, we used this information just as a broad surrogate.
| Intra-specific variation in functional traits
Since our dataset for functional traits was much smaller compared to growth traits (270 versus 9,990), we used an ANOVA (analysis of variance) approach and treated site, provenance, and provenance-by-site as main effects and family as random effect in a linear mixed-effect model using the lme4 package in R. Given the limited number of trees that could be measured for functional traits, we decided to capture trait variation at the provenance level rather than at the family level by sampling a larger number of mother trees within provenances and test sites but not to replicate mother trees within site. We used Pearson-moment correlation and correlated functional and growth traits at the individual-tree level (tree-wise functional trait versus growth) as well as at the provenance level (provenance mean functional trait versus single-tree growth) to test whether functional traits can be used to select more vigorous or resilient trees. Finally, we calculated the summer heat:moisture index (SHM), an index used to describe the long-term drought regime in seed zones (Wang, Hamann, Spittlehouse, & Murdock, 2012), to compare mean functional trait values to climate at seed origin of provenances to test for adaptive patterns in functional trait variation where MWMT is the mean temprature of the warmest month in °C, and MSP the mean summer precipitation (May to September) in mm.
Higher SHM values indicate drier climatic conditions. We used different climatic variables for assessing intra-specific differences in growth (MAT) and in functional traits (SHM) in order to account for the fact that genecological differences in tree growth in many earlier studies were best explained by the average temperature regime (e.g., Wang, O'Neill, & Aitken, 2010;Jobbágy & Jackson 2000;Loehle, 1998), whereas adaptive differences in functional traits with importance for drought adaptation were best explained by climate variables indicating probability of drought occurrence (e.g., Lamy et al., 2014;Rungwattana et al., 2018). We calculated a linear model between the functional trait value at provenance level and the climatic variables at seed origin and reported slopes and p-values separately for the three test sites. p-Values were corrected for multiple comparisons by applying a Benjamini-Hochberg adjustment procedure (Benjamini & Hochberg 1995).
axis being clearly related to temperature variables (85.6% of explained variation), whereas the second principal component axis corresponded to precipitation regime (mean annual precipitation and mean summer precipitation; 13.4% explained variation). Cluster 1 consists of the two provenances from the northeast and southeast of Austria with stronger continentality and more frequent summer drought compared to the rest. Cluster 2 contains provenances from northern Austria (1, 2, and 8) characterized by a stronger Atlantic influence with lower mean annual temperature and a lower probability of summer drought occurrence. The three provenances from Slovenia and Croatia together were assigned to Cluster 3 with warmer mean annual temperature (0.6°C-1.6°C above average).
Finally, provenances 14 (southern Austria) and 17 (Czech Republic) were assigned to single-provenance clusters 4 and 5, and the latter was characterized by colder mean growing conditions of about −1.8°C compared to the overall mean.
Survival rate after 10 years was 94% (9,306 living trees), and mortality did not significantly differ between sites nor between provenances.
As expected, mean height and dbh after 10 years were higher at the moist site (height: 5.6 m; dbh: 5.4 cm) and lower at the intermediate
| Local adaptation of provenances and Q ST
Nonlinear models with the temperature transfer distance as quadratic term were highly significant in all three test sites (Table 2) and explained between 5% (moist site) and 14% (intermediate site) of the overall variation. Clusters 1, 2, and 4 performed best with growth decreasing toward both colder and warmer provenance climates ( Figure 2). The local maximum for height and dbh coincided with a temperature distance of approximately 0°C at the moist site, but shifted toward colder provenance climates in the intermediate and dry sites (i.e., cold cluster 2 increased growth toward drier conditions compared to the warm cluster 3, Figure 2a,b). Differences in mean annual precipitation between provenance origin and trial site explained less variation compared to MAT, and a classical bell-shaped response curve could not be revealed in most cases (Figure 2c,d). Figure 3).
| Additive variance and narrow-sense heritability (h 2 )
Additive genetic variance was highly significant for height and dbh when calculated across provenances within sites (h 2 site ) and remained also significant within most provenances within sites (h 2 prov ) despite the much lower sample size (3,330 versus 330; Table 3
| Variation in plasticity of growth (GxE)
Both GxE terms (provenance-by-site and family-by-site) were significant for height and dbh, but explained only a minor proportion of the overall variance when compared to the remaining terms (Table 4). As such, GxE terms explained 4% of total height variation (2.75% attributable to provenance × site, 1.25% attributable to family × site) and 3.8% for dbh variation (2% attributable to provenance × site, 1.75% explained by family × site).
There were no differences depending on whether provenance or climatic cluster was used as covariate. In comparison, site alone explained approximately 22% of the phenotypic variation for height and 16% for dbh, while provenance explained 9% of the variation for height and 3% for dbh (Table 4). Ecovalence of families was in general low with values fluctuating between 2.8E-05-0.028 for height and 2.6E-05-0.041 for dbh, and no family was characterized as extraordinarily plastic (Figure 4).
| Variation in functional traits, correlation with seed source climate, and relation to growth traits
Functional traits varied significantly among sites (p < .001 for SLA, LDMC, Leaf vein density, and δC 13 ; p < .01 for vessel area fraction), but also significantly among provenances (p < .001 for vessel area fraction; p < .01 for leaf vein density and hydraulic conductivity; p < .05 for vessel area). Significant provenance-bysite interactions appeared only in δC 13 (p < .05). Relative variance proportions explained by the three predictors are presented in
| D ISCUSS I ON
Trees can potentially respond to environmental selection pressure in three different ways: by migrating to more suitable growing sites, by directional selection within populations with the preferential survival of outlier phenotypes, or by adjusting their phenotypes under novel environmental conditions through phenotypic plasticity. In this study, we disentangled these three potential pathways in order to evaluate which of the scenarios will be most likely for pedunculate oak, an important temperate tree species in Europe which was shown to be vulnerable under increasing drought in the near future (Levanič, Čater, & McDowell, 2011). All three outcomes (migration, selection, and plasticity) have statistical counterparts that were employed in our study: Differentiation among populations along an ecological transfer distance, that is, climate, as well as Q ST , can be seen as indicators of local adaptation to home temperature regimes (Kawecki & Ebert, 2004;Sáenz-Romero et al., 2017). Second, additive genetic variation and significant narrow-sense heritability imply that directional selection has the capacity to drive adaptation to novel climate conditions and may be utilized in breeding for more resilient genotypes (Harfouche et al., 2012). Finally, when some genotypes are more plastic than others, GxE can become an important evolutionary feature and potentially drive adaptation to novel environments assuming that the phenotypic change is not maladaptive and that plasticity itself has a heritable basis (Pigliucci, 2005).
| Local adaptation of provenances and quantitative trait differentiation
We used growth expressed as height and diameter at breast height after 10 years as a strong surrogate for fitness, which is a reasonable assumption given that larger trees compete more effectively for light and will more likely survive density-dependent competition during the adolescent growing stage (Aitken & Bemmels, 2016;Alberto et al., 2013). We observed a clear pattern of local adaptation of provenances for mean annual temperature at all three sites resulting in decreasing dbh and height growth with increasing temperature transfer distance. Consequently, local seed stands (clusters 1 and 2) are still better adapted under current climatic conditions compared to "warmer" or "colder" provenances. In contrast, local adaptation of provenances was less significant for distance between moisture regimes expressed as mean annual precipitation. Replacing mean annual precipitation by mean summer precipitation or summer heat:moisture index resulted in the same pattern (data not shown). A reasonable explanation for this finding is that some provenances obviously can benefit from warmer and dry trial sites. Based on these findings, it seems likely that, at regional scale, temperature is a more important evolutionary driver of adaptation in pedunculate oak than moisture. This makes sense, given that late frost events in spring are likely to occur at all the three trial sites and that provenances from the southern cluster originate from regions with comparably mild winters (see mean coldest month temperatures in Table 1). This could explain the strong observed signal of local adaptation for mean annual temperature, since both climatic variables are highly correlated in our dataset (r = 0.91). Hence, actual seed transfer guidelines for forest reproductive material in Europe seem to be appropriate when recommending the use of local against foreign seeds. Ignoring such guidelines could lead to a loss in mean height after 10 years of approximately 2 meters at dry sites when the worst and best provenances in Figure 2 are compared. Our results corroborate a study on closely related sessile oak (Quercus petraea), which found analog patterns of maladaptation with increasing climatic distance from the provenances' source climate (Sáenz-Romero et al., 2017).
Surprisingly, differentiation among provenances found in our study was similar or even higher compared to Sáenz-Romero et al. (2017)
F I G U R E 4
Genotype-by-environment interactions for height growth (a,b) and dbh (c,d). Interactions on the Y-axis are given as BLUPs of interactions. Histograms in c) and d) show uniformity in plasticity among families expressed as ecovalence (fratio) counts other hand, seed transfer from the colder part of the distribution (e.g., climatic cluster 5) seems to be less problematic under current climate conditions, since the local maximum of the response curve shifted stronger toward negative D values from moist to dry sites ( Figure 2).
While the general and intuitive expectation is that seed material should probably be transferred from warmer to colder regions in order to track the ecological optimum when temperature is expected to rise in the future, our data add an important caveat. One possible reason is that some traits that confer adaptation to colder environments may also be beneficial in dry environments such as a higher resistance against freezing-induced embolism (Olson et al., 2018).
Generally, Q ST was in accordance with the results above since Q ST was highest at the intermediate site and lower at the moist and dry site (Figure 3 and Table 3; all Q ST results are significantly >0). Our data did not permit to compare Q ST to an estimate of historic gene flow among populations (such as F ST ) in order to control for effects of neutral genetic drift, as suggested by Gilbert & Whitlock (2015).
However, in combination with the response curve in Figure 2
| Heritability and genotype-by-environment interactions in growth traits
Heritability (h 2 ) is a measure that quantifies whether or not populations are able to adapt to environmental pressures via directional selection (Falconer & MacKay, 1989). The higher the heritability the higher is the populations' ability to change its mean phenotype toward the new optimum under natural selection (e.g., Kelly, 2011).
Additionally, high heritability permits to track the optimum under weaker selection pressure, which reduces the probability of genetic bottlenecks and can therefore avoid loss of genetic diversity due to genetic drift (Lacy, 1987). This is an important aspect given that large-scale tree mortality after drought events has already become more frequent and will further increase in the near future (Allen et al., 2010). We found high and significant heritability in growth reaching 0.64 for height at the dry trial site, which constitutes a promising basis for breeding programs from large collections of progeny tests. Interestingly, the heritability for height growth was substantially higher at the dry site compared to moist and intermediate sites, suggesting higher prediction accuracy when selecting candidate trees for dry environments under future adaptive forest management. This could in fact be a starting point for future studies aiming at identifying molecular variation at DNA level associated with higher drought tolerance in pedunculate oak.
Genotype-by-environment interactions were significant for growth but explained only a relatively small proportion of the phenotypic variation (4%). Plasticity was largely uniform among families and provenances, and ecovalence was generally low
| Functional trait variation, correlation with seed source climate, and relations between functional traits and growth
Functional traits analyzed in this study are related to drought adaptation of plants, and we therefore tested relationships between functional trait variation, growth, and seed source climate.
Unraveling strong adaptive signals of functional trait variation among provenances or strong correlations between functional traits and growth could assist selection of more resilient provenances or genotypes. We found a few strong associations between functional traits and dryness at seed origin, but with varying strength among test sites. Unexpectedly, provenances were poorly differentiated at the dry site for all functional traits, where no significant associations with seed source climate were revealed ( Figure 6). This seems to be in broad agreement with findings from Our analysis of intra-specific variation in this important tree species suggests that adaptive variation (h 2 and Q ST ) in growth is stronger than plastic responses (GxE). Although plastic responses have been intensively discussed as a potential evolutionary strategy for trees to avoid mismatches between biological requirements and environmental change (e.g., Corcuera, Cochard, Gil-Pelegrin, & Notivol, 2011), it is still unclear in the majority of investigated cases whether observed variation in plasticity is adaptive at all or may be simply maladaptive (Matesanz & Valladares, 2014). Moreover, narrow-sense heritability was significant and high for height and dbh at the dry trial site which may resemble best future climatic conditions in the temperate part of the Quercus robur distribution. Consequently, the evidence suggests that directional selection within populations will likely determine the future trajectory of Quercus robur under climate change. This has several implications for future adaptive forest management, since the high heritability in growth observed in the dry testing site (see Figure 3) can lead to breeding success and high prediction accuracy when testing candidate trees as potential gene donors capable of tolerating drier conditions. Based on our findings, transfer of genotypes from southern regions such as Croatia to eastern Austria could potentially lead to maladaptation in growth which is most likely caused by lower frost tolerance.
Therefore, we believe that the high uncertainty of assisted gene flow combined with the risks associated with these schemes (e.g., Grady, Kolb, Ikeda, & Whitham, 2015) calls for directional selection through tree breeding at moderate geographical scales.
ACK N OWLED G EM ENTS
We thank all field workers who helped planting and measuring the trees, in particular Franz Henninger and the staff of the BFW nursery in Tulln. We also thank the federal administrations of lower Austria, upper Austria, and Burgenland; the Landwirtschaftskammer Österreich for financial support; and Thomas Thalmayr (BFW) for technical support and preparation of figures. GTR was supported by the Austrian Science Fund (FWF), project number M2245
CO N FLI C T O F I NTE R E S T
None declared.
DATA AVA I L A B I L I T Y S TAT E M E N T
The raw data will be available under the Zenodo.org Digital Repository soon under https://zenodo.org. | 8,071 | 2020-06-18T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Students’ Perception of Digital Literacy Competence as Learning Sources
Accepted 28 Februari 2020 This study was carried out to define the students’ perception of digital literacy competencies as learning sources held by the students of English Education Study Program of STKIP Muhammadiyah Bangka Belitung as active users of internet pointed on the internet searching, hypertext navigation, content evaluation, and knowledge assembly by theory of Paul Gilster (1997). This research uses a descriptive qualitative method. Subjects in this research were 9 students determined using a purposive sampling technique, with the subject criteria is the students who are actively using the internet more than 4 hours a day to browse the internet as educational sources. The results of this research indicate that all of the respondents are not literate yet because they do not have all the indicators in digital literacy. But, only one of nine students answered all the indicators on digital literacy, the respondent has stated as well competencies in internet searching, hypertext navigation, content evaluation, and knowledge assembly. So, from the indicators of digital literacy competencies, it can be implied that only one of nine students who have a good perception of digital literacy as learning sources.
INTRODUCTION
In era 4.0, digital literacy has come up with a huge impact. Digital literacy is the ability to understand and use information from a variety of online sources. Digital literacy skills are used to deal with the internet's explosion of knowledge. The reality nowadays, internet users are growing up, many parties can easily access and deliver information freely to their users. The internet used by users has been varied and used not only to build a relationship, reach communication through social networking sites but also to browse information for literate knowledge. In the meantime, learning sources comes in a variety of platforms with their respective characters. Also, internet media platform is transformed as learning sources, especially in searching scientific issues as references. Even the digital era is blowing up, the use of the internet today has not been accompanied by the awareness of each user to think critically. Indonesia is stated as one of the biggest internet users in the world. Based on Asosiasi Penyelenggara Jasa Internet Indonesia (APJII) in 2017, Indonesian internet users are 143,26 million or 54,68%. And this percentage will increase over time in line with digital development. Because the digital era is huge up, it needs critical thinking and creativity to access and share the information. In this era, every person has to understand digital literacy as an important thing. Digital literacy must be understood as important as the ability to read, write, and calculate. As living in the digital era (one-touch and one-click), people have the chance to access the information fast, get interaction with others, but not all the contents of information deliver the positive content. Digital character is valuefree so that every person has the chance as a role in it. Because of the wide impact of digital sources, the public or the citizen must be a good user and good netizen. Digital literacy is the ability to understand and use information from a variety of online sources. Digital literacy skills are used to deal with the internet's explosion of knowledge. As stated by Davis & Shaw (2011) in Chabibie (2017) that digital literacy is the relationship ability through hypertextual information, in the meaning of reading on a non-sequential computerbased system or digital platform. Therefore, analyzing skills is an important factor. Gilster (2007) in Chabibie (2017), digital literacy means that the ability in reading, understand, and analyzing the variety of digital sources. It is important that people have the ability to reading and analyzing online information in order to get valid information or news. The ability to find new information that is accounted for will be more important around this quickening digital technology.
So that the reading ability of Indonesian people, especially the young generation need to be directed into understanding intelligence in reading comprehension to understand the digital information. For that, digital literacy must be supported as a learning mechanism that is structured in the curriculum.
Digital literacy is acquainted by Paul Gilster (1997), Gilster stated that every people has to fit the ability out to understand and use the information from various digital sources. And also he adds that digital literacy is the ability to using digital equipment in daily life. Hague (2010) said that digital literacy is the ability to create and share in different modes and form; to create, collaborate, and communicate effectively, as well ass to understand how and when using digital technology properly. While UNESCO sees digital literacy as modern life skills that need to be mastered. In line with Martin (2006) digital literacy is awareness, attitude, and the ability of individual in using the digital tools and facilities to identify, access, manage, integrate, evaluate, analyze, and synthesize the digital resource, build the new information, create the media expression, and communicate with others, in the context of certain life situations, to allow the social act constructively.
Digital literacy covers people understanding of digital content. People must be aware of every content on the internet is not equal in sharing information, they are not similar in quality of information. According to Basuki (2013) when someone accessing and browsing the internet more often, they will have the ability to differentiate whether the information has good or bad quality. From these statements, digital literacy covers the technical ability in using tools and ICT, as well as covers the people's knowledge and skills in understanding the contents, so the goal is able to design the new knowledge. Therefore, it can be stated that digital literacy is someone's competencies in operating digital media in finding, using, managing, creating, evaluating, and transferring the information properly, wisely, and responsibly.
According to Gilster (1997), someone is called literate in digital if they have the ability in four aspects of digital components; first, internet searching is the ability to use the internet and do various activities in it. This competency covers two components they are, the ability to find the information on the internet using the search engine, and the ability to do various activities. Second, hypertext navigation is the ability to read and understand the navigation of hypertext in the web browser. This competency covers four components, they are knowledgeable about hypertext and hyperlink along with how it works, knowledge about the differences between reading textbooks and browsing sources through the internet, knowledge about the web works, and ability to understand the web page characteristics. Third, content evaluation is the ability to think critically and give the assessment to what is found in online sources, as well as the ability to identify the validity and completeness information that is referred by a hypertext link. This competency covers five components, there are the ability to differentiate between information layout and content that is the users perception in understanding the web page layout visited, the ability to analyze the background of information that is people awareness to search for further information and the creator, the ability to evaluate web address through understanding the various domain for each country or institution, the ability to analyze the web page, and the knowledge about FAQ in newsgroup or group discussion. Fourth, knowledge assembly is the ability to arrange the knowledge, build a set of information from various sources along with the ability to collect and evaluate the fact or opinion properly without prejudice. This competency covers four components, they are the ability to search information through internet, the ability to design the personal newsfeed or news update notification or discuss several topics by joining or subscribing the newsgroup, or mailing list, the ability to do crosscheck or recheck about the information that is found, the ability to use all kinds of media to prove the truth information, and the ability to arrange information sources that is found in internet to the real life that is not connected in network.
Based on this theory, the researcher sets the theory based on Paul Gilster, there are four aspects that must be followed; they are internet searching, hypertext navigation, content evaluation, and knowledge assembly. These digital literacy competencies are conducted as the unit analysis in the aspect of students' perception of digital literacy. From this description, the purpose of this study is to define the students' perception of digital literacy competencies as learning sources held by the students of English Education Study Program of STKIP Muhammadiyah Bangka Belitung as active users of the internet.
RESEARCH METHODOLOGY
This study was qualitative research conducted for English Education Study Program of STKIP Muhammadiyah Bangka Belitung students. Qualitative research is a kind of research that the findings are not related to statistic procedural or other calculation and as a purpose to reveal something holistic-contextual through collecting data from a natural background with the researcher as the subject instrument (Sugiarto, 2015).
This research was used a descriptive study method, this method is to describe or analyze a matter of research but not as stated to design a wide conclusion (Sugiyono, 2012). The subjects of this research were 9 students of English Education Study Program of STKIP Muhammadiyah Bangka Belitung, Moleong (2010) describes research subjects as information, means that people in the background of the research is used to give information about the situation and condition of research background. This research used purposive technique sampling, in Crossman (2019) a purposive sample is a non-probability sample that is selected based on characteristics of a population and the objective of the study.
These nine subjects were characterized based on certain criteria, they were internet active users, operating the internet more than 4 hours a day. As stated by Andrew, a researcher at Oxford University, in Sativa (2017) from the calculation of the research, the ideal time for students doing online browsing is 257 minutes or 4 hours 17 minutes in a day. With this duration, Andrew adds that students not only have technology operating ability but also they can do socialization to others. While the object of this research is digital literacy competencies in learning sources. To collect the data in this research, the researcher used to interview and documentation method. The interview method used an in-depth interview with the unstructured interview. According to Sugiyono (2012), the unstructured interview is a free interview where the researchers do not use the systematical questions. The questions are only for guiding the researcher to do the depth interview. The researcher did the interview to get the depth information and built the situation informally, like did the daily conversation. So, the informant felt comfortable doing the interview.
Furthermore, documentation was also used in collecting the data. The researcher used the recorder to record the data interview. In analyzing the data, the researcher used qualitative data analysis, there were data reduction, display data, and drawing the conclusion. Data reduction was used to analyze the data when the researcher collects the data, it was for limiting the data that was not used. While display data was used to categorized the data, related to the group data. So the analyzed data become synchronized. And then drawing the conclusion, the conclusion was the final step of analyzing the data, but the researcher had confirmed it and revised it to the conclusion to find the final data. The trustworthiness in this study used triangulation sources. According to Patton in Moleong (2014) defines that triangulation sources are comparing and checking the degree of trust of information through different times and instruments in qualitative research. The triangulation sources of this study invited information and technology experts.
FINDINGS AND DISCUSSIONS
This research discussion describes the students' perception of digital literacy competence as learning sources. The competencies were analyzed from four aspects, they are internet searching, hypertextual navigation, content evaluation, and knowledge assembly. The data that was described is analyzed based on the Gilster theory.
In the first discussion, digital literacy competence reviewed from the internet searching aspect. In this aspect, the students were assessed to some indicators; they were the activities of the students when browsing the website, the kind of website most visited, the information found when browsing the website, the knowledge of the web search components, the students' ability on how to search the information. The students' perception of digital literacy reviewed from the internet search engine was in middle categorized. From nine students, there were one student who could not understand all of the aspects of the internet searching aspect. The rest students were having a good perception of the internet searching aspect. Although all students are active internet users, they still got difficulties in understanding the website's appearance. Because the appearances are too much. And when browsing information, they only focus on the information they need. According to Gilster (1997), internet searching is someone able to use the internet and do many activities there. There are some activities that we can do, there are using and managing the email account periodically, joining a newsgroup or mailing list, doing online business, doing an online transaction, doing work, searching the learning sources, reading online news, listening music, watching movies or videos. From this description, students still get difficulties in doing many online activities.
In the second discussion, the digital literacy competence reviewed from hypertext navigation. Stated in Gilster (1997) hypertext navigation is the ability to read and understand dynamically about hypertext navigation. Moreover, Gilster described that understanding hypertext navigation is not only relates to its hypertext but also the knowledge of information available on the internet is different from the information in the textbook.
Hypertext navigation competence assessed some indicators, there was students' knowledge about hypertext and hyperlink, students' knowledge about hypertext characteristics, students' understanding about website layout, students' understanding about web page characteristics, students' information on the internet and in the textbook. Students' perception of digital literacy as learning sources was in low perception. From nine students, there were six students who could not explain the indicators well, so only three students who understood about the indicators or the aspects of hypertext navigation. From the description, it can be concluded that the perception of students' of digital literacy in the aspect of hypertext navigation is low perception.
The third discussion, the digital literacy competence reviewed from the content evaluation. Gilster (1997) explained that content evaluation is someone able to think critically and give an assessment to the online information as well as identify the validity and the completeness information that suggested from the hypertext link. Content evaluation competencies have assessed some indicators, there is the ability to differentiate the layout and the content of information, the ability to analyze the correctness of the information, the ability to analyze the web page. Students' perception of digital literacy as learning sources was in middle perception. From nine students, there were two students who could not explain the aspects well. So the seven students the rest were having good perception in content evaluation. They had an awareness to evaluate the online information well. All the students are active internet users, they also did an evaluation on the content evaluation, they did not only access one website to search information, but they browsed others to link to make sure that the information is valid and correct. They also asked other people, friends or family about the correctness of the information.
The fourth discussion, the digital literacy competence reviewed from knowledge assembly. Gilster (1997) stated that arranging the knowledge assembly is the ability to arrange the knowledge, build information from other sources, and evaluate the facts and opinions properly without prejudice. Gilster added the competencies needed not only critical thinking but also the ability to learn about arrange the knowledge, build the information from different sources. And in the end, the users have to create the final conclusion to design new knowledge. Knowledge assembly competencies have assessed some indicators, there is the ability to finish the task through browse the information in the search engine. The ability to finish the task by joining the discussion group, the ability to analyze the background of the information, the ability to use other sources in finding the information, the ability to communicate and discuss to other people in order to solve the problem. Students' perception of digital literacy as learning sources was in middle perception. From nine students, there were two students who could not explain the indicators well. So the seven students were having good perception in knowledge assembly. The seven students used the internet as a learning source and opened other links to synchronize each information. When they found the new information, they also asked other people to confirm or discuss it. It did work that the internet helped them much as learning sources. From all the discussion, it can be concluded that all students have not literate yet of digital as learning sources, the indicators of digital literacy competencies have not been answered properly, only one of nine students who have a good perception of digital literacy as learning sources. This research only to investigate the perception of the students on digital literacy as learning sources, so it has not been found about the influence of their critical thinking. Based on this, it is suggested to the next researcher to find other problems related to digital literacy and students' understanding of online information. | 4,041.8 | 2020-03-01T00:00:00.000 | [
"Education",
"Computer Science"
] |
Spatially correlated classical and quantum noise in driven qubits
Correlated noise across multiple qubits poses a significant challenge for achieving scalable and fault-tolerant quantum processors. Despite recent experimental efforts to quantify this noise in various qubit architectures, a comprehensive understanding of its role in qubit dynamics remains elusive. Here, we present an analytical study of the dynamics of driven qubits under spatially correlated noise, including both Markovian and non-Markovian noise. Surprisingly, we find that by operating the qubit system at low temperatures, where correlated quantum noise plays an important role, significant long-lived entanglement between qubits can be generated. Importantly, this generation process can be controlled on-demand by turning the qubit driving on and off. On the other hand, we demonstrate that by operating the system at a higher temperature, the crosstalk between qubits induced by the correlated noise is unexpectedly suppressed. We finally reveal the impact of spatio-temporally correlated 1/f noise on the decoherence rate, and how its temporal correlations restore lost entanglement. Our findings provide critical insights into not only suppressing crosstalk between qubits caused by correlated noise but also in effectively leveraging such noise as a beneficial resource for controlled entanglement generation.
I. INTRODUCTION
Quantum computers hold great promise for solving computational problems that are intractable for classical ones, due to their ability to exploit the quantum coherence of qubits [1,2].However, quantum coherence is extremely fragile and noise poses a major challenge to quantum information processing [3,4].A comprehensive understanding of the effects of noise is the first step towards the development of effective noise mitigation strategies, and therefore, is crucial to leverage the full potential of large-scale quantum processors [5,6].
However, the presence of spatially correlated noise can limit the applicability of proposed protocols in multi-qubit settings.For instance, major quantum errorcorrecting codes rely on independently detecting and correcting errors on individual qubits [37][38][39][40].Correlated noise across multiple qubits impedes the effectiveness of these codes, leading to a higher probability of errors remaining undetected and reducing the performance of quantum systems [52][53][54][55][56].It is therefore crucial to better quantify and understand correlated noise.This has stimulated various theoretical proposals [57][58][59][60] and experimental works in the measurement of correlated noise, for example, in architectures based on spin qubits [61,62] and superconducting qubits [63].However, despite the progress in quantifying spatial noise correlations, a comprehensive understanding of how they affect the performance of multiqubit systems is still lacking.
On the other hand, while spatially correlated noise can have detrimental effects on quantum systems, it raises the question of whether the correlations stored in the noise, which are absent in the case of a single qubit, can be harnessed to process quantum information [64,65].For instance, correlated noise offers the intriguing possibility to imprint the correlations onto a two-qubit system, effectively converting the correlation of noise into entanglement between the qubits.To develop effective strategies either for mitigating correlated noise (suppressing the "ugly" aspect) or for leveraging the correlations it stores (exploiting the "good" aspect), a comprehensive understanding of the effects of correlated noise in multiqubit settings is required.
In this work, we present a systematic theoretical investigation of the impact of spatially correlated noise on the dynamics of driven qubits with a focus on their entanglement, considering both temporally correlated and uncorrelated noise.In particular, we find that operating the qubits at higher temperatures can help mitigate the "ugly" aspect of correlated noise and suppress the crosstalks between qubits.This unexpected reduction in crosstalk between qubits at warmer temperatures has been recently observed in an experimental study on spin qubits [66].On the other hand, to exploit the "good" aspect of correlated noise, such as the generation of substantial long-lived entanglement, one needs to drive the qubits and operate the system at low temperatures (relative to the qubit energy).We highlight that this entanglement generation process can be on-demand controlled by turning on and off the driving.
The present paper is organized as follows.In Sec.II, we establish the foundation for our study.First, we introduce the model Hamiltonian in Sec.II A. Next, we discuss the concept of local and spatially correlated noise spectral densities in Sec.II B, distinguishing their classical and quantum components.Finally, we present a set of broadly applicable master equations for the driven dynamics of two-qubit systems in the presence of spatially correlated generic noise in Sec.II C, which sets the stage for the following discussion.
In Sec.III, we investigate spatially correlated 1/f noise in pure dephasing dynamics without coherent drives.We pay particular attention to how the classical and quantum components participate in the two-qubit dynamics and whether they can be exploited to generate entanglement.We demonstrate that the classical correlations in the noise affect the dynamics through correlated pure dephasing, which modifies the dephasing rate but does not induce any coherence.In contrast, we reveal that correlated quantum noise impacts the two-qubit dynamics through both a noise-induced coherent interaction between qubits and correlated pure-dephasing processes.Interestingly, the former allows for the conversion of noise correlation to entanglement, while the latter does not.
In Sec.IV, we present an analytical study of the two-qubit dynamics under the influence of spatiallycorrelated Markovian transverse noise with coherent drives.Our analysis reveals that the quantum noise correlations induce a coherent symmetry exchange interaction, a Dzyaloshinskii-Moriya interaction between the two qubits, and a correlated decoherence process.We observe an intriguing interplay between these ingredients, resulting in distinct dynamical phases that can be achieved with different parameter values.Surprisingly, in contrast to pure dephasing, we find that both coherent interactions and correlated decoherence lead to a generation of significant entanglement, which can be potentially leveraged to implement two-qubit gates and other quantum information processing tasks.
Finally, in Sec.V, we conduct an analytical investigation into the influence of correlated 1/f noise on the dynamics of driven qubits.Our study shows that the non-Markovianity of the 1/f noise results in effective time-dependent decoherence rate that exhibits temporary negative values for some time intervals [67].We focus on the classical and quantum correlated 1/f noise in Sec.V A and V B, respectively.We find that the classical spatially-correlated 1/f noise is still unable to generate any entanglement.However, the non-trivial temporal correlations of the noise can restore the coherence lost in the environment.On the other hand, the non-Markovian nature of the quantum correlated 1/f noise leads to a temporary decrease of the entanglement generated by the quantum noise.
A. Hamiltonian
In this work, we analyse the dynamics of two qubits that are driven and are situated in the same environment, as depicted in Fig. 1.The qubits are subjected to both local and spatially-correlated (non-local) noise, which can have either a classical or quantum nature and can be either temporally correlated (such as 1/f ) or uncorrelated ("Markovian").We describe the combined system with the following Hamiltonian: where H S is the Hamiltonian of the two qubits that are characterized by qubit-frequency splittings ∆ i (i = 1, 2), and H drive (t) describes the time-dependent driving of the two qubits, H E is the Hamiltonian of the environment, and H SE describes the coupling between the two qubits and the environment.We assume the two qubits are driven coherently at frequency ω di with the drive amplitude ℏΩ i .We then consider the following Hamiltonian: where σ x,z i are the Pauli matrices for qubit i and ℏ is the Planck's constant.We describe the environment with where b k and b † k are operators describing quasiparticles in the environment leading to the decoherence of the qubits, such as phonons in semiconductors [68][69][70] or magnons in hybrid systems [65,[71][72][73][74][75][76][77][78].While we leave the spectrum ω k unspecified which can be either linear, for example, for acoustic phonons or quadratic for magnons, we specialize to single axis qubit-environment couplings: implying that pure-dephasing dynamics dominates the decoherence process in the absence of coherent drives.
Here, E i are operators acting on the environment Hilbert space, r i is the positions of the ith qubit, and g k is the coupling strength.
When the system is subjected to coherent drives, we effectively rotate the quantization axis in a frame rotating at the driving frequencies, where the qubits can exchange not only information with the environment but also exchange energy by emitting and absorbing quasiparticles.To illustrate this, we perform the unitary transformation R(t) = exp(iω d1 σ z 1 t/2) ⊗ exp(iω d2 σ z 2 t/2).The Hamiltonian in the rotating frame then is given by H = iℏ∂ t RR † + RHR † .We note that the unitary operator R(t) commutes with H SE and H E , leaving them invariant in the rotating frame.After applying the rotating-wave approximation by neglecting counterrotating terms that oscillate fast at frequency 2ω di , the total Hamiltonian reads, where the qubit Hamiltonian in the new rotating frame is H S = i=1,2 ℏ δ i σ z i + Ω i σ x i /2 with the detuning δ i = ∆ i − ω di .When the qubits are driven at resonance ω di = ∆ i , we arrive at: Here, we have rotated the axis in the spin space, , such that the qubit quantization axis is aligned with the z axis, and we label this new basis with Pauli matrices σi .We also assume here that the driving strength is equal for both qubits, denoting it as Ω ≡ Ω 1 = Ω 2 .We remark that, in this scenario, the relaxation dynamics dominates the decoherence process, which has been exploited for noisespectroscopy applications to extract noise spectra near frequency Ω [63].We focus our analysis in this work on the effects of correlated noise in two fundamental scenarios: one in the absence of coherent drives where puredephasing noise dominates, and the other in the presence of resonant drive where the transverse noise is prominent.
B. Distinguishing classical from quantum noise in local and spatially correlated noise
We described the qubit Hamiltonian under external coherent drives in the preceding section.Here, we discuss the noise experienced by the qubits due to their coupling to the environment, focusing on both local and spatially correlated noise while distinguishing their classical and quantum natures.We define the usual two-point noise correlation function in time domain [8]: where E(t) ≡ e iH E t/ℏ Ee −iH E t/ℏ and ⟨O⟩ ≡ tr(ρ B O) with the thermal state ρ B = e −βH E / tr[exp(−βH E )] and β = 1/k B T , where T is the temperature and k B is the FIG. 1.A schematic for two qubits in an environment experiencing both local and spatially correlated noise.The noise is quantified by the cross-noise power spectral densities Sij(ω), with its positive and negative frequency parts measuring the ability of qubits to emit and absorb energy, respectively.The asymmetric nature of the noise spectral density, in the presence of quantum noise, is linked to the asymmetry between the absorption and emission processes.The quantum correlated noise can be harnessed to generate entanglement.
Boltzmann constant.The noise power spectral density is given by the Fourier transformation of the correlation function: Here, S ii (ω) with i = {1, 2} is the auto power spectral density standing for the local noise, whereas S ij (ω) with i ̸ = j is the cross power spectral density representing spatially correlated noise.We note that S ij (ω) = S * ji (ω), indicating that the local noise spectral density is a real-valued function while the correlated noise can be complex-valued.This is also clear from the explicit expression of the noise spectral density: (9) where n B (ω) = 1/(e βℏω −1) is the Bose-Einstein distribution and the spatial vector r j −r i connects the positions of qubit j and i.The cross power spectral density S 12 (ω) is real when the spectrum of the environment is symmetric in momentum ω k = ω −k , whereas it is complex for general asymmetric ω k , for example in inversion-asymmetric environments.
It is worth noting that the noise power spectral density (9) part S ij (ω > 0) as a measure of the ability of qubits to emit energy, and the negative-frequency part S ij (ω < 0) as a measure of the ability of qubits to absorb energy.To differentiate between quantum and classical noise, we introduce the symmetrized and antisymmetrized noise spectral densities [8,79]: which are linked to each other through S C ij (ω) = coth(βℏω/2)S Q ij (ω), as dictated by the fluctuationdissipation theorem [80].This distinction enables a better understanding of how different types of noise affect the behavior of multiqubit systems.
When operating qubits at high temperatures, such as in the case of spin qubits that can be operated at a few Kelvin [81][82][83][84], the classical limit k B T ≫ ℏω applies, and classical fluctuations dominate over quantum noise.In this regime, the operators E i (t) can be treated classically [E i (t), E j (0)] = 0, resulting in a symmetric noise spectral density with a vanishing antisymmetric part, S Q = 0.In contrast, in the quantum regime where k B T ≲ ℏω, the quantum noise is comparable to classical noise, with S Q ≈ S C , and therefore cannot be neglected.
We point out that there is a constraint for the spatially correlated noise, [8], implying that the nonlocal noise is inherently bounded by the local one.This condition is closely related to the thermodynamic stability of the environment [65].To illustrate this condition explicitly, we consider a concrete example, where the environment has a linear spectrum ω k = c s |k|.This case describes for example acoustic phonons with sound velocity c s .The spatially correlated noise is re-lated to the local noise in a 2D architecture through the following equation: S 12 (ω) = J 0 (ωd/c s )S ii (ω), (11) where |J 0 (x)| ≤ 1 is the Bessel functions of the first kind and d = |r 1 − r 2 | is the distance between two qubits, as shown in Fig. 2 (a).At large distances, the correlated noise decays as J 0 (ωd/c s ) ∼ 1/ √ d.We will assume the two qubits to be identical and thus they experience the same local noise S 11 = S 22 throughout our discussion.The constraint |S 12 | ≤ S ii also holds in other dimensions, as shown in Fig. 2 (b) where the ratio S 12 /S ii is shown as a function of the separation between the two qubits in different dimensions.Exact relations are provided in Appendix A.
We remark that, when the two qubits are sitting within a few hundred nanometers, the spatially correlated noise is comparable to the local noise for qubit frequencies in the gigahertz range, as illustrated in Fig. 2 (a), and assuming the sound velocity to be c s ∼ 5 km/s in silicon.The correlated noise exhibits oscillatory behavior and gradually decays to zero as the qubit separation d increases.As shown in Fig. 2 (b) where we take the frequency to be ω/2π = 1 GHz, the spatially correlated noise is always comparable with the local noise for d < 1 µm in different dimensions.
C. Time convolutionless master equation
We now investigate the role of spatially correlated classical and quantum noise in the dynamics of the two-qubit system by deriving a master equation for the reduced density matrix ρ(t), obtained by tracing out the environment from the total density matrix ρ tot (t).To this end, we adopt a standard time-convolutionless (TCL) master equation approach [85] and assume that the qubitenvironment interaction is weak enough to truncate the TCL generator at the second order.Leaving the detailed derivation to the Appendix B, here we present the TCL master equations for the two-qubit system subject to correlated noise, without and with a resonant drive, respectively.In particular, we separate the quantum and classical noise, enabling us to clearly identify their respective contributions to the qubit dynamics.Our results allow us to explicitly calculate and explore the effects of correlated noise.
Pure-dephasing noise.In the absence of coherent driving, the qubit dynamics is purely determined by dephasing.As detailed in Appendix B 2, the TCL master equation for the two-qubit system, in the interaction picture, takes the form of where the correlated-noise-induced coherent interaction between qubits is Ising-like: The superoperators L z ij (t) are defined by with the anticommutator {A, B} ≡ AB + BA.Here, γ z ii (t) stands for the standard local dephasing which is time-dependent in general, whereas γ z 12 (t) represents correlated dephasing originated from the spatially correlated noise.The time-dependent coherent coupling parameter is given by J z (t) = t 0 ds G R 12 (s) + G R 21 (s) /2ℏ 2 , quantifying the retarded interaction mediated by the environment, where G R 12 (t) = −iΘ(t)⟨[E 1 (t), E 2 ]⟩ is the standard retarded Green's function.To clarify its relation to the correlated noise, it is insightful to recast it into the following form: with a filter function F c (ω, t) = [cos(ωt) − 1]/ω.This coherent Ising interaction is solely determined by the correlated quantum noise.One can also infer this conclusion from the fact that it is dictated by the retarded Green's function which vanishes when the noise operators E i commute, e.g. if they are classical variables.Similarly, to investigate the effects of correlated classical and quantum noise on the dephasing, we write the dephasing rate as with the filter function F s (ω, t) = sin ωt/ω, which is peaked at zero frequency and approaches a delta function at large time F s (ω, t → ∞) = πδ(ω).We observe that the local dephasing rate γ z ii is determined solely by the classical noise S C ii (where we recall that the local noise spectral densities S C,Q ii are always real), whereas the correlated dephasing γ z 12 has contributions from both classical and quantum spatially correlated noise.In contrast, we stress again that the coherent coupling is not affected by classical noise.Furthermore, it is worth noting that correlated quantum noise only influences the dephasing process when the environment has an asymmetric spectrum such that Im[S Q 12 ] ̸ = 0. Otherwise, γ z 12 (t) is exclusively determined by the correlated classical noise S C 12 (ω).Pure-transverse noise.In the presence of resonant transverse drives, the combined system is governed by the Hamiltonian (5) where the qubits experience pure transverse noise.The TCL master equation of the twoqubit system is shown to be the following form where the coherent interaction is and the superoperators are given by Here γ ↓ ij and γ ↑ ij describe time-dependent (local and correlated) decay and absorption rates, respectively.Detailed derivations are provided in Appendix B 3. Similar to the pure dephasing case, the coupling parameter J (t) is also fully determined by the quantum noise, and can be expressed in terms of the retarded Green's functions or correlated noise spectral densities as with qubit energy splitting Ω.We stress that, in contrast to the Ising coupling J z (t) which is always real, J (t) is complex in general.
The local and correlated decay processes are induced by both classical and quantum noise, , with classical and quantum contributions being ) Similarly, the local and correlated absorption rates are given by γ Notably, unlike the local pure dephasing rate that is solely determined by classical noise, γ ↑,↓ ii depends on both classical and quantum noise.Moreover, the correlated quantum noise is always present in the correlated decay and absorption rates, regardless of the symmetry of the spectrum ω k .We emphasize that these processes exhibit high sensitivity to the noise spectra within a frequency window of approximately 1/t centered on the qubit splitting ±Ω.Therefore, for sufficiently long time evolution where t ≫ 1/Ω, we can consider exclusively the contribution of the spectra at ω = ±Ω, and approximate the rates as from where we observe that the asymmetry between the absorption and emission is caused by quantum noise.We also obtain the standard detailed balance condition γ ↓ ij = e βℏΩ γ ↑ ji , by invoking the fluctuation-dissipation theorem.The TCL master equations presented here for the twoqubit system show clearly the dependence of qubit dynamics on the quantum and classical components of local and spatially correlated noise.Our formalism is general, and it does not require any microscopic understanding of the spatial and temporal correlations within the environment, as long as the noise is weak enough to justify the truncation of the TCL generator at the second order.It can be employed to describe challenging cases such as long-ranged classical or quantum non-Markovian noise.Furthermore, this approach can be extended straightforwardly to multiple qubits.By separating the contributions of classical and quantum noise to the qubit dynamics, this scheme provides a foundation for our investigation of the impact of generic noise on multiqubit dynamics.
III. PURE DEPHASING PROCESS WITH CORRELATED 1/f NOISE
In this section, we utilize the formalism presented above to investigate the impact of spatially correlated classical and quantum noise on the dephasing of two qubits.Specifically, we focus on a noise spectral function with a 1/f frequency dependence, which is common in various quantum computing architectures [23], including superconducting qubits and semiconducting devices.We stress that our approach is straightforwardly extended to other noise spectra.Let us consider a local classical noise spectral density: where σ is the standard derivation of the noise and ω l stands for the low frequency cutoff that is set by the measurement time.The timescales that we investigate are much shorter than this time, t ≪ ω −1 l .As we aim to study the effect of the correlated noise, we consider two qubits positioned within the range of micrometers, with correlated noise being comparable with the local one, S C,Q 12 (ω) ≈ e iθ S C,Q ii (ω), where θ is the phase of the correlated noise spectral density that characterizes its complex nature [86].We point that in our study, we assume that the distance between the two qubits is greater than the typical confinement lengths (10-50 nm for spin qubits).This ensures that the direct exchange interaction is suppressed, allowing us to focus specifically on the effects induced by correlated noise.
To examine how the classical and quantum components of the spatially correlated noise determines the two-qubit dynamics, we consider the quantum regime, where both quantum and classical noises are present and S Q ≈ S C .In this scenario, the coherent coupling and the local dephasing rate γ z ≡ γ z ii are given by as detailed in Appendix C, and the correlated dephasing rate is given by γ z 12 = e iθ γ z whose real and imaginary parts are rooted in classical and quantum correlated It is convenient to work in the basis {|a⟩} = {|↑↑⟩ , |↑↓⟩ , |↓↑⟩ , |↓↓⟩}, where the density matrix elements, denoted as ρ = G ab |a⟩ ⟨b|, are all decoupled from each other.While the diagonal elements G aa remain constant in the pure-dephasing dynamics, the offdiagonal components depend non-trivially on the correlated noise.By solving the TCL master equation analytically (see Appendix C for details), we find that the classical component of the correlated noise can only reduce or enhance the dephasing rate caused by the local classical noise, without increasing the coherence in the two-qubit system.For instance, G 23 and G 14 are given by suggesting a Gaussian decay of the coherence with a logarithmic correction.Here, γ is Euler's constant.
We are now ready to investigate how the correlated noise affects the dynamics of two-qubit entanglement.There are different measures of entanglement.For example, the singlet fidelity of the corresponding Werner state [87] of an arbitrary mixed state provides a lower bound for the entanglement of formation [88,89], as detailed in Appendix D 1.In this work, we adopt the twoqubit concurrence as a measure of entanglement [90], which is also summarized in Appendix D 1.In Fig. 3 (a), we illustrate the entanglement decay of the twoqubit system with the initial states being Bell states , respectively.The red curve represents the entanglement decay when only local noise is present, where both states decay with the same rates.When the correlated classical noise is present (we set the quantum noise to zero and θ = π/3 ), both states still decay but with different rates, corresponding to the blue and green curve in Fig. 3 (a).
The quantum component of the correlated noise affects the two-qubit dynamics through both the coherent Ising interaction, which is governed by the real part of S Q 12 , and the correlated dephasing, which is linked to the imaginary part of S Q 12 .Interestingly, we find that only the real part of the quantum correlated noise leads to an increase in the entanglement between the two qubits.To illustrate this effect, we consider an initial state |++⟩, where |+⟩ is defined by σ x |+⟩ = |+⟩.We examine the entanglement dynamics as a function of time and phase θ.When the spectral density is real (θ = 0, π), we observe a sizable increase in entanglement, while it remains at zero when the spectral density is purely imaginary (θ = π/2, 3π/2), as shown in Figure 3 (b).
Our findings indicate that to harness the correlations encoded in the pure-dephasing noise, qubits must be operated at low temperatures to ensure the presence of quantum correlated noise and in an inversion symmetric environment to obtain a real quantum noise spectral density.Conversely, if one wishes to prevent undesired entanglement between the qubits, operating the qubits at higher temperatures to favor classical correlated noise or breaking the symmetry of the environment at low temperatures to suppress entanglement generation can be effective strategies.We notice that a very recent experiment [66] has observed an unexpected significant reduction in crosstalk between spin qubits at higher temperatures in semiconductors.
IV. DRIVEN SPIN QUBITS SUBJECT TO CORRELATED MARKOVIAN NOISE
In this section, we investigate the dynamics of two qubits when they are resonantly driven.This case is governed by Hamiltonian (5), and the reduced dynamics is described by the master equation (17).In particular, we focus on the temporally uncorrelated (Markovian) noise, because this model accurately describes noise with generic power spectral density after long times.As detailed in the Appendix D 5, pure classical noise does not generate entanglement, hence we examine the case of low temperature where both classical and quantum noise are present.With the spatially correlated Markovian noise, the coherent interaction between the two qubits becomes time-independent, and is described by the Hamiltonian The coupling strength is complex-valued in general and denoted as J ≡ J s +iD.The contributions J s and Dzyaloshinskii-Moriya (DM) interaction D are symmetric and antisymmetric with respect to exchange of the two qubits, respectively, where the DM interaction only arises when the inversion symmetry of the environment is broken.
The decay and absorption rates become timeindependent when noise is assumed to be Markovian (or when the evolution time is long for generic noise, t ≫ 1/Ω) and are given by Eq. (22).At temperatures lower than the Rabi frequency, k B T ≤ ℏΩ, (for instance, when Ω/2π ∼ 2 GHz and the temperature is below 100 mK), the rates are further reduced to In this situation, the superoperators are given by the standard Lindbladians [85]: where the local decay rate γ ↓ ≡ γ ↓ ii and the collective decay rate γ ↓ 12 > 0 are determined by the local and spatially correlated noise, respectively.We have absorbed the phase of γ ↓ 12 into the definition of σ ± i .The completely positive evolution dictates that the correlated decay is weaker than the local decay γ ↓ 12 ≤ γ ↓ , which is also guaranteed by the thermodynamic stability of the environment [65].
Similar to the coherent interaction, it is convenient to symmetrize and antisymmetrize the superoperators, yielding where We note that the triplet state |T ⟩ ≡ (|↑↓⟩ + |↓↑⟩)/ √ 2 is superradiant decaying at rate Γ S , while the singlet state, |S⟩ ≡ (|↑↓⟩ − |↓↑⟩)/ √ 2 is subradiant decaying at rate Γ A .We also remark that these two states are decoupled from each other in Lρ, and are also eigenstates of the symmetric interaction characterized by J s .However, the DM interaction, which is parity-odd, exchanges these two states (ẑ In the following subsection, we analytically study the interplay between the symmetric interaction, the DM interaction, and the local and correlated decay processes.For the sake of concreteness, we assume the initial state is a trivial product state |↑↓⟩.We denote the density matrix as ρ = G t |T ⟩ ⟨T | + G s |S⟩ ⟨S| + G ts |T ⟩ ⟨S| + H.c. + ∆ρ, where ∆ρ stands for other elements of the density matrix.In this scenario, the concurrence of the two-qubit system can be shown to take the simple form of
A. The role of symmetric exchange interaction
In this subsection, we analyze one basic scenario in which the environment possesses inversion symmetry, resulting in a real correlated noise spectral density S 12 (ω).In this case, the DM interaction is absent.The dynamics of the singlet state G s and triplet state G t are decoupled due to the symmetry, while the real and imaginary parts of G ts are coupled to each other due to the symmetric exchange coupling J s , as illustrated in Fig. 4 (a).The complete dynamics is analytically solved in the Appendix D 2.
The impact of correlated quantum noise on the twoqubit dynamics is twofold.Firstly, the noise enters through the collective decay γ ↓ 12 , which results in different decay rates for the population of the singlet and triplet states, as demonstrated in the upper panel of Fig.
facilitates the increase of entanglement.Considering the initial state |↑↓⟩, these equations can be solved and reveal the following entanglement dynamics encoded in the concurrence: We observe that while the local noise has a detrimental effect on entanglement, the correlated quantum noiseinduced correlated decay and coherent coupling both have a beneficial effect.Interestingly, in the absence of coherent coupling J s = 0, a sizable amount of entanglement can be generated in the pure dissipative evolution due to the correlated decay, corresponding to the blue curve in Fig. 4 (c).It also provides a lower bound on the entanglement when J s is finite, as evidenced by the green and red curves in Fig. 4(c), which display entanglement oscillations with a frequency proportional to J s .We remark that, at the long-time evolution limit, the entanglement scales as . This persistence can be attributed to the slow decay of the singlet state, which is long-lived when the correlated noise is comparable to the local noise, γ ↓ 12 → γ ↓ .
B. The role of Dzyaloshinskii-Moriya interaction
In this subsection, we study another basic scenario where the DM interaction is present while the symmetric interaction vanishes J s = 0.The dynamics of the population of the singlet state G s and of the triplet state G t are now coupled to each other due to parity-breaking interaction D. The probability flows from the triplet state to singlet state when Re G ts > 0, and in the opposite direction when Re G ts < 0, while the dynamics of Re G ts is governed by the relative population of these states and can be expressed as Re ∂ t G ts = −γ ↓ Re G ts +D(G t −G s ), as shown in Fig. 4 (d).The element Im G ts is decoupled from other elements and remains zero when the initial state is |↑↓⟩.We solve the coupled dynamics analytically in Appendix D 3.
The two-qubit dynamics is affected in two ways by the presence of correlated quantum noise.Firstly, similar to the pure symmetric exchange case, the correlated noiseinduced collective decay γ ↓ 12 gives rise to different decay rates of G t and G s , which can lead to the generation of entanglement.On the other hand, the correlated noiseinduced DM interaction D causes an oscillation between the singlet and triplet state, which can interfere with the effect of γ ↓ 12 in a nontrivial manner.Remarkbaly, one can construct a rather simple equation of motion for C R from the coupled complex dynamics to quantify the entanglement evolution: We observe that the quantum correlated noise-induced collective decay and DM interaction compete with each other, while the local noise γ ↓ still acts as a "friction" force as before.When the coherent interaction dominates D > γ ↓ 12 /2, the system oscillates between the singlet and triplet states, resembling an underdamped oscillator, as illustrated in the upper panel of Fig. 4 (e).In the regime where D < γ ↓ 12 /2, the system can be described by an overdamped oscillator.Whenever the probability flows to the triplet state, it quickly decays due to the strong dissipation γ ↓ + γ ↓ 12 , preventing the probability from returning to the singlet state, and therefore, it does not exhibit any oscillatory behavior, as illustrated in the lower panel of Fig. 4 (e).With the initial state |↑↓⟩, we solve the entanglement dynamics, which reveals three distinct dynamic regimes: respond to the singlet and triplet states, respectively.The entanglement decays on a characteristic timescale of 1/γ ↓ .At the critical point where 2|D| = γ ↓ 12 , the oscillation behavior ceases, and the entanglement follows a scaling of ∝ te −γ ↓ t , as depicted by the red curve in Fig. 4 (f).The loss of oscillation can be clearly observed in Fig. 5, where the dashed orange line represents the critical point; entanglement exhibits oscillations above this point, while there is no oscillation below it.In the overdamped regime, the entanglement exhibits a scaling behavior of ∝ exp[−(γ ↓ − κ)t], with an extended lifetime of 1/(γ ↓ − κ), while the maximal entanglement it can reach is reduced in the dynamics, as shown by the blue curve in Fig. 4 (f).
Our findings suggest that despite the issues that noise can produce in quantum information processing, if properly harnessed, correlated quantum noise provides a useful resource to generate a significant long-lived entanglement, opening up to a several opportunities for optimizing quantum computation.We stress that the process of generating entanglement can be controlled on-demand by switching the driving on and off.Compared to the puredephasing dynamics, where the correlated dephasing cannot build up coherence despite being rooted in the correlated quantum noise, in this section we find that, interestingly, the correlated decay induced by the transverse noise can generate sizable entanglement by causing the singlet and triplet states to decay at different rates.In particular, while both rooted in correlated quantum noise and beneficial for entanglement generation, the competition between the correlated decay and coherent coupling leads to distinct dynamical regimes.From the two fun-damental cases presented here, one can extrapolate to dynamics when both symmetric exchange and DM interaction are present, and we leave the detailed discussion of this situation to Appendix D 4.
V.
DRIVEN SPIN QUBITS SUBJECT TO CORRELATED 1/f NOISE Building upon the insights obtained from the previous study on Markovian noise, in this section, we investigate the impact of spatially and temporally correlated 1/f noise, which is non-Markovian in nature.We assume that the two driven qubits are located within a few hundred nanometers of each other, and thus the spatially correlated noise is comparable to the local noise S 12 ≈ S ii in the relevant frequency range.We present an analytical investigation into two situations, in the first one, only classical 1/f noise is present, while in the second one, we include also quantum 1/f noise, comparable to the classical noise.This situation can occur at low temperatures, when k B T ≤ ℏΩ, where the ability of qubits to emit and absorb energy is different as discussed in Sec.II B. For simplicity, we assume that the spectral density of the correlated noise is real, and we focus on its temporal correlations.The detailed derivations of the results in this section are sketched in Appendix E.
A. Classical 1/f noise
In the presence of purely classical 1/f noise, the twoqubit dynamics is governed by the TCL master equation (17) with vanishing coherent coupling J (t) = 0 and equal local and correlated absorption and decay rates, denoted as γ(t) ≡ γ ↓ ij = γ ↑ ij , with following form: (36) Here, Si(x) ≡ x 0 dτ sin(τ )/τ is the sine integral function.One surprising feature of the rate γ(t) is that it can take negative values during finite time intervals, denoted by the purple regions in Fig. 6 (a).This behavior is an indication of non-Markovian memory effects, reflecting the exchange of information between the two qubits and the environment [67].Nevertheless, the time integral of γ(t), denoted as must remain non-negative due to the complete positivity requirement of the system dynamics [52,91].This is illustrated in the inset of Fig. 6 (a).Based on the insights gained from the previous sections, we anticipate that entanglement generation would not be possible with purely classical correlated noise.This is indeed the case even when the noise is temporally correlated, as shown in the inset of Fig. 6 (b), where the entanglement remains to be zero with a trivial initial product state |↑↓⟩.A new feature arising from the non-Markovian nature is the occurrence of oscillations in the decay of entanglement, when the initial state is entangled.For concreteness, we initialize the system to be the Bell state The dynamics of the entanglement is then given by which is depicted in Fig. 6 (b).To gain some insights, let us consider a specific quantum trajectory.During time intervals where the rate γ(t) > 0, the system can undergo quantum jumps, which can lead to the loss of coherence and a transition from an entangled state such as |ψ 0 ⟩ to a trivial state like |↓↓⟩ with a finite probability.Conversely, during the negative rate γ(t) < 0 later on, the quantum jump process can be interpreted as a jump in the reverse direction [67,[92][93][94], |ψ 0 ⟩ ← |↓↓⟩, with a finite probability of restoring the lost superposition due to the non-Markovian memory effect.As a result, the entanglement exhibits a temporary increase during the decoherence process, as illustrated in Fig. 6 (b).
B. Quantum 1/f noise
We now investigate the impact of quantum correlated noise in the quantum regime where S Q ≈ S C .The dynamics of the two-qubit system is governed by the same master equation (17), with the decay and absorption rates defined as Considering that the filter function F s (ω +Ω, t) is peaked at ω = −Ω which lies outside the range of integration, we approximate the absorption rate γ ↑ ij to be zero to enable complete analytical solution of the dynamics.In the strong correlated noise regime considered in this section, we assume that the local and correlated decay rates are equal and we denote them as γ ↓ (t) ≡ γ ↓ ij (t), which is evaluated to be: The upper panel of Fig. 7 (a) shows that the rate γ ↓ (t) can be negative temporarily, while its time integral defined as must be positive due to the complete positivity of the dynamics, as illustrated in the inset.In the presence of quantum correlated noise, the coherent coupling J (t) in the Hamiltonian ( 18) is nonvanishing and given by: which oscillates with a frequency of Ω and takes values comparable to the decay rate as shown in the lower panel of Fig. 7 (a).To demonstrate the effect of quantum noise, we first initialize the system into the Bell state |ψ 0 ⟩ = (|↑↓⟩ + i |↓↑⟩)/ √ 2 and investigate how the entanglement decays in the presence of both classical and quantum 1/f noise.We show the entanglement in this case is given by with the phase Φ(t) defined as Φ(t) ≡ t 0 ds J (s) = 2πσ 2 (cos Ωt − 1)/ℏ 2 Ω 2 .It is shown as the blue curve in The blue shaded area demonstrates the impact of quantum noise, with the blue and purple curves representing cases with and without quantum noise, respectively.The inset displays the final steady-state entanglement as a function of temperature, which is zero in the classical limit and 1/2 in the quantum regime.(c) Entanglement evolution as a function of time for an initial state |↑↓⟩.The effect of quantum noise is illustrated by the blue shaded region (the entanglement remains at zero when only classical noise is present).The inset reveals a final entanglement value of 1/2 with ℏ/σ = 3 ns within a time ∼ 10 ns.The plotted results are obtained using the following parameters: ℏ/σ = 100 ns, ω l /2π = 1 MHz, and Ω/2π = 1 GHz.Fig. 7 (b), with the purely classical 1/f noise case also shown for contrast as the purple curve.The temporary increase in the entanglement during the decoherence process is also observed in the presence of quantum correlated noise.In the case of strong correlated quantum 1/f noise, decoherence occurs at a much slower rate, and the net effect of quantum noise is reflected in the shaded blue region in Fig. 7 (b).The entanglement is long-lived, and the final entanglement approaches 1/2 due to the longlived singlet state with a decay rate of γ ↓ ii − γ ↓ 12 , which is almost zero when the correlated noise is comparable to the local noise.For arbitrary temperature, the residual entanglement is shown to be: which is shown in the inset of Fig. 7 (b).Here we have invoked the fluctuation-dissipation theorem S C ij (ω) = coth(βℏω/2)S Q ij (ω).It can be clearly observed that in the classical limit where k B T ≫ ℏΩ, S Q = 0, the final entanglement is zero as expected.Furthermore, we also conclude that there is always a finite, long-lasting entanglement present when the correlated quantum noise (comparable to local quantum noise) is finite.
To investigate the entanglement generation by spatially correlated quantum 1/f noise, we consider a simple initial state |↑↓⟩.The entanglement then is given by We note that there are temporary decrease in the growing entanglement, which finally reaches 1/2, as shown in Fig. 7 (c) and its inset.This can also be attributed to the non-Markovian memory effect.Considering a quantum trajectory, when the decay rate is positive γ ↓ (t) > 0, the system can undergo quantum jumps, taking the product state to an entangled state in the presence of correlated quantum noise.Later, when the decay rate is negative γ ↓ (t) < 0, the state jumps back to a product state from the entangled state, which leads to the temporary dip in the entanglement growth.Our findings demonstrate the potential of utilizing the correlated quantum 1/f noise, which is ubiquitous in solid-state quantum computing platforms, to generate significant entanglement or delay the decoherence process in two-qubit systems.We also illustrate the effects of the non-Markovianity of the noise in the dynamics of two driven qubits.
VI. CONCLUSION
In this paper, we have presented a comprehensive analytical study of the two-qubit dynamics subject to both local and non-local spatially correlated noise.Our analysis is based on a time-local TCL master equation that is applicable for generic noise spectra, including both Markovian and non-Markovian noise.We explored how the classical and quantum correlations stored in the noise dictate the qubit dynamics.Our results reveal that, at high temperatures compared to the relevant qubit energy when only classical correlations are present, the correlated noise only modifies the decoherence rate without leading to any entanglement between qubits.One can therefore operate the qubits at warmer temperatures to effectively suppress undesired crosstalk between qubits caused by correlated noise [66].
In the case of low temperatures, when both classical and quantum correlations are present in the noise, the correlated quantum noise introduces various new effects.These include the coherent Ising interaction and correlated dephasing in the case of purely dephasing noise, as well as the coherent symmetric exchange, DM interaction, and correlated relaxation in the case of transverse noise.We show that these dissipative interactions can be turned on and off on-demand by resonantly driving the qubits.We have illustrated the effects of these interaction by solving the two-qubit dynamics analytically.Specifically, our analysis has demonstrated that only the noiseinduced Ising interaction, not the correlated dephasing, can lead to finite entanglement generation.However, for the case of transverse noise, we found that both coherent interactions and correlated relaxation give rise to significant long-lived entanglement.Their competitions generally lead to different dynamical regimes of the two-qubit system.Therefore, by driving the qubits and operating the system at lower temperatures, one can exploit the quantum correlations stored in the noise for various quantum information applications.We finally also studied the non-Markovian memory effect by investigating the correlated 1/f noise, where the decay rate can be negative temporarily.
Our work provides a comprehensive understanding of how the classical and quantum correlated noise affects the qubit dynamics.Our analysis enables the development of effective strategies for utilizing the noise correlations in quantum information processing or for mitigating their potentially harmful effects, paving the way towards the design of robust and scalable quantum technologies.
Future work will explore the effect of long-ranged noise, beyond nearest-neighbor, in multiple qubit systems.This case is critical for future experiments aiming to scale up quantum processors, especially because recent measurements highlighted the presence of long-range noise in spin qubits in quantum dots [62].Investigating how the correlated noise affects standard quantum operations, such as the fidelity of two-qubit gates is therefore critical [52].Additionally, it is important to understand how the correlated noise interferes with existing strategies developed to suppress the impact of single-qubit noise, such as quantum error correction codes, dynamical decoupling, and sweet spots.Our work provides a solid foundation for future research in these directions and will serve as a starting point to address these challenges towards largescale quantum computers.
In Sec.II B in the main text, we introduce the local and spatially correlated noise spectral densities S ij (ω), defined in Eq. (9).To be specific, we present the relation between the local and correlated noise in two dimensional architectures in the main text.In this section, we provide a detailed illustration of their relations in various dimensions, assuming a linear spectrum of the environment as ω k = c s |k|.This allows us to write the noise spectral density as: where k = ω/c s and we have assumed the coupling g k only depends on the magnitude of k.The first term (positive-frequency component) represents the emission of energy into the environment, while the second term (negative-frequency component) stands for the absorption of energy from the environment.It is evident that the summation over the momentum k yields distinct outcomes in various dimensions.After some algebraic manipulation, we obtain the following conclusions: 1D : S 12 (ω) = cos(kd)S ii (ω), 2D : S 12 (ω) = J 0 (kd)S ii (ω), 3D : S 12 (ω) = sin(kd) kd S ii (ω).
(A2)
Here, J 0 (kd) represents the Bessel function of the first kind, which decays algebraically at large distances as J 0 (kd) ∝ cos(kd − π/4)/ √ kd.From this, it is evident that the correlated noise is always bounded by the local noise, that is, |S 12 | ≤ S ii .Finally, we remark that we have assumed that the quasiparticle does not decay when traveling between the two spins.
Appendix B: Time convolutionless master equations
In Sec.II C of the main text, we have presented the TCL master equations for the two-qubit system, which allow us to investigate the effects of correlated classical and quantum noise.In this section, we provide detailed derivations for the results presented in the main text.Firstly, in Appendix B 1, we recap the timeconvolutionless projection operator method.Based on this method, we derive the master equation for puredephasing noise in Appendix B 2, and the master equation for pure-transverse noise in Appendix B 3.
The time-convolutionless projection operator method
For the sake of self-consistency, here we briefly review the TCL master equation that we employ in the main text.Let us consider a system S of our interest coupled to an environment E that we will not keep track of.The dynamics of the combined system is governed by a microscopic Hamiltonian of the form: where H S and H E dictate the time evolution of the system S and the environment E, respectively, and H SE describes the coupling between them.In the platform considered in the main text, H S is the two-qubit system that is of interest and H E is the environment that gives rise to local and correlated noise, leading to decoherence of qubits via the coupling H SE .It is convenient to work in the interaction picture, where the density matrix of the combined system ρ tot (t) obeys the following equation of motion: with the Liouville superoperator defined as We aim to derive the equation of motion for the reduced density matrix ρ of the system S. To this end, we introduce a projection superoperator P that projects any density matrix ρ tot onto the system part of the Hilbert space: , where the trace is taken over the environment and ρ B is the initial state of the environment which we take to be the thermal state in the main text.We note that our goal is to obtain a closed equation for Pρ tot , which would give us the equation for ρ naturally.Accordingly, a complementary superoperator Q can be defined by Q ≡ I − P, with the identity operator I, which projects on the irrelevant part of the density matrix.By applying the projection operators P and Q to the Liouville-von Neumann equation (B2), we arrive at The idea now is to solve for Qρ tot (t) formally and substitute it into the equation for Pρ tot , which results in a closed equation for Pρ tot .At this point, there are typically two ways to proceed, both of which give rise to exact but conceptually different equations of motion for the reduced density matrix.These methods are detailed in Ref. [85].One approach leads to the wellknown Nakajima-Zwanzig equation, which is a time-nonlocal equation containing a memory kernel.In contrast, the second method yields a time-convolutionless master equation with the following form: where K(t) is a time-local generator, known as the TCL generator, and we have assumed the initial state of the combined system takes the form of ρ tot (0) = ρ(0) ⊗ ρ B thus Qρ tot (0) = 0.As customary, we assume tr E [H SE (t)ρ E ] = 0 (namely, the noise operator E i in the main text has vanishing mean in state ρ B ) and also assume that coupling between the system S and the environment is sufficiently weak.Thus we can truncate the TCL generator K(t) at second order, yielding: Therefore, the TCL master equation for the reduced density matrix ρ in the interaction picture takes the form of This equation serves as the starting point of our following derivations.
Master equation for pure-dephasing noise
In this subsection, we drive the TCL master equations for the two-qubit system in the absence of the coherent drive.In this scenario, the qubit-environment coupling is given by where E i are the noise operators acting on the Hilbert space of the environment.With this interaction, we can derive the master equation by utilizing the TCL equation (B6):
B8) where we have introduced the two-point correlation function of the noise operators tr
Here we have used the fact that the initial state of the environment is the thermal state which is stationary.We note that terms with i = j correspond to the standard local dephasing dynamics rooted in the local noise, whereas the terms with i ̸ = j describe the collective dynamics originated from the correlated noise.We can cast the equation above into the following form: (B9) where the dissipator is defined as usual: We remark that there is also a term proportional to σ z i , which describes the induced Lamb shift and renormalizes the qubit energy spliting.It is rooted in the local noise and we thus have neglected it since our main concern is the correlated noise.Then we can conveniently write the master equation above into the following form: Below, we focus on the coherent part (which leads to unitary evolution) and the dissipative part (which gives rise to non-unitary dynamics), respectively.Coherent interaction.The environment-induced coherent Ising interaction takes the form of H z (t) = J z (t)σ z 1 σ z 2 with the coupling being where G R 12 (s) ≡ −iΘ(s)⟨[E 1 (s), E 2 ]⟩ represents the standard retarded Green's function, which indicates the retarded interaction between the two qubits mediated by the environment.One can easily check that this coupling function is real-valued according to its definition above.We aim to examine the effect of the correlated quantum and classical noise.It is helpful to write the expression in a more suggestive form by utilizing S ij (t) = dω e −iωt S ij (ω)/2π, with the filter function F c (ω, t) = [cos ωt − 1]/ω.We now further decompose the noise power spectral densities into classical and quantum components [see Eq. ( 10)], arriving at: which is the Eq. ( 15) in the main text.It is clear that the correlated classical noise spectral densities cancel out, suggesting that this coherent Ising interaction is solely determined by the correlated quantum noise.An alternative way to understand this is by referring to Eq. (B11).The coherent coupling arises solely from the commutators of the noise operators E i .This implies that the coupling J z vanishes when only classical noise is present, as E i can be treated as classical variables, and the commutators then vanish.
Dissipative evolution.The dissipative part is given by (B14) It is clear that γ z ii (t) is the local dephasing noise determined by the auto noise correlator S ii , whereas γ z 12 (t) stands for the correlated dephasing governed by the cross noise correlator S 12 .In terms of the noise spectral density, the pure-dephasing parameters can be written as with the filter function F s (ω, t) = sin ωt/ω.When we further decompose the noise spectral densities into quantum and classical components, we arrive at: This is the Eq. ( 16) in the main text by using the fact that S * ij (ω) = S ji (ω).First, we observe that the local dephasing rate γ z ii is solely determined by the local classical noise S C ii (ω); the quantum component does not enter the local dephasing.In contrast, for the correlated dephasing process, the rate γ z vanishes unless the spectrum of the quasiparticle in the environment ω k is asymmetric.Otherwise, in the case of symmetric environment, both local and correlated dephasing processes are dictated by the classical noise, and the quantum noise only leads to the coherent Ising interaction between qubits.
Master equation for pure-transverse noise
In this subsection, we derive the TCL master equations for the two qubit system in the presence of coherent drives at resonance.In this scenario, the coupling between the system and the environment is given by in the interaction representation with σ± ≡ (σ x ±iσ y )/2.With this interaction, we can derive the master equation: (B18) We can then regroup the terms on the right hand side and rewrite the equation into the following compact form: (B19) This is the Eq. ( 17) in the main text.We discuss below the time-dependent coherent interaction H xy (t) and the dissipative part, respectively.Coherent interaction.The coherent coupling induced by the environment is given by 2 with time-dependent coupling strength being: (B20) We see that, again, the coupling can be expressed in terms of the retarded Green's function, suggesting that the coherent interaction is rooted in the correlated quantum noise, similar to the coherent Ising interaction we discussed before.This point becomes clear in the discussion below.Let us first express the coherent interaction in terms of the noise spectral density S 12 (ω): (B21) We now further decompose the spectral density S 12 (ω) into positive and negative part which can be expressed in terms of the quantum and classical noise.We finally arrive at B22) which is Eq.(20) in the main text.It is clear that this coherent coupling takes a similar form as the Ising coupling J z (t), and at Ω → 0, we obtain J (t) = J z (t).However, we should point out one crucial difference between them: the Ising coupling is always real whereas the coherent coupling J (t) is complex in general consisting of a real part that describes the symmetry exchange between the two qubits and a imaginary part that represents the Dzyaloshinskii-Moriya (antisymmetric exchange) interaction.
Dissipative evolution.The dissipator in the master equation (B19) is given by the following Lindbladians: with rates being (B24) Here terms with i = j stand for the local emission and absorption processes governed by the local noise spectral density S ii (ω), whereas the terms with i ̸ = j represent the correlated emission and absorption processes rooted in the cross noise spectral density S 12 (ω).In the long time dynamics Ωt ≫ 1, the filter function F s (ω ± Ω, t) approaches a delta function πδ(ω ± Ω).In this case, the decay and absorption rates become time-independent and we can approximate them with γ ↓ ij = S ij (Ω)/ℏ 2 and γ ↑ ij = S ij (−Ω)/ℏ 2 .They are related by the Boltzmann factor, γ ↓ ij = e βℏΩ γ ↑ ji .This is the detailed balance condition.Specifically, at low temperatures, the decaying process dominates and we can approximate γ ↑ ij ≈ 0. To further illustrate the effects of the classical and quantum noise, we express the local and correlated decay rates in terms of the classical and quantum noise spectral densities, yielding: (B25) This is the equation (21) in the main text.The local and correlated absorption rate γ ↑ ij (t) is given by the same expression above but with Ω → −Ω.It is now clear that, in contrast to the pure dephasing dynamics, here the quantum noise leads to both local and correlated decoherence.We can similarly approximate the filter function with a delta function when t ≫ 1/Ω.We then arrive at γ , which is the equation (22) in the main text.From these expressions, we conclude that the asymmetry between the decay and absorption processes is rooted in the quantum noise.In the absence of any quantum noise, decay and absorption occur with equal strength, corresponding to the infinite temperature limit.
At this point, it is beneficial to recapitulate the dimensions of some critical parameters and functions that are and the pure-dephasing parameter γ z defined by Eq. (B16): (C4) These two equations give us Eq. ( 24) in the main text.Here, we have introduced a low frequency cutoff ω l which is set by the experimental measurement time and the cosine integral function defined by: Ci where γ is Euler's constant.Since the dynamics that we are interested in occurs within a time much shorter compared to the measurement time, i.e. ω l t ≪ 1, we can approximate Ci(x) = γ + ln x and we thus have When we solve the master equation, it is convenient to introduce two timedependent functions, All elements can be expressed in terms of these two functions and are given by the following expressions: (C7) With the explicit expression of the reduced density matrix ρ(t), one can evaluate the entanglement of the twoqubit system as a function of time with arbitrary initial state, for example the two plots we show in Fig. 3 in the main text.
Appendix D: Markovian limit
In Sec.IV of the main text, we discuss the dynamics of two qubits subjected to coherent drives with Markovian noise.In this section, we provide detailed derivations of some results used in the main text.In Sec D 1, we introduce the concurrence as a measure of the entanglement of the two qubits in the symmetrized and antisymmetrized basis.We then present an analytical solution for the master equation in the absence of the DM interaction in Sec.D 2. In Sec.D 3, we discuss the two qubit dynamics in the absence of the symmetric exchange interaction.In Sec.D 4, we present a study of the two qubit For a pure bipartite state ρ AB = |ψ AB ⟩ ⟨ψ AB |, we usually adopt the von Neumann entropy as the entanglement measure: S(|ψ AB ⟩) ≡ − tr ρ A ln ρ A = − tr ρ B ln ρ B .For a general mixed state ρ AB , this von-Neumann entropy is no longer a good measure since the classical mixture in ρ AB will have a nonzero contribution.We will adopt entanglement of formation as our entanglement measure.
The entanglement of formation is defined as where the minimum is taken over all possible decompositions of ρ AB = i p i ψ i AB ψ i AB and S( ψ i AB ) is the von Neumann entropy of the pure state ψ i AB .Physically, E F (ρ AB ) is the minimum amount of pure state entanglement needed to create the mixed state.This is extremely difficult to evaluate in general since we need to try all the decompositions.Quite remarkably an explicit expression of E F (ρ AB ) is given when both A and B are two-state systems (qubits).This exact formula is based on the often used two-qubit concurrence, which is defined as [90] C where λ i 's are, in decreasing order, the square roots of the eigenvalues of the matrix ρ(σ y ⊗σ y )ρ * (σ y ⊗σ y ), where ρ * is the complex conjugate of ρ.The entanglement of formation is then given by [90] E is monotonically increasing and ranges from 0 to 1 as C(ρ) goes from 0 to 1, so that one can take the concurrence as a measure of entanglement in its own right.When we write the density matrix as: with ω r = 4D 2 − γ ↓2 12 .It is then straightforward to obtain the analytic expressions for all the density matrix elements.Focusing on the dynamics of the entanglement between the qubits, one can easily write down its expression according to C[ρ(t)] = |G s − G t | once we have the explicit expression for the density matrix.Distinct two-qubit entanglement dynamics can be achieved with different values of the DM interaction D. We also remark that the interplay between the DM interaction and the dissipative process can also lead to intriguing physics in classical dynamics [95,96].
On the other hand, it is helpful to write down the equation of motion for the entanglement directly and study how the quantum coherence in the system evolves.To this end, we introduce C R ≡ G s − G t and we derive a differential equation for it.We note that it is linked to Ẋ via: C R = − Ẋe −γ ↓ t /D, from which we conclude that: This is the equation (34) in the main text, from which one can solve for the entanglement dynamics directly.
Symmetric and DM interactions
We illustrate the effects of the symmetric exchange and the DM interaction, respectively, by solving the master equation for the two qubits analytically.This allows us to extrapolate the scenario when both are present.In this case, all four important elements are coupled to each other, as shown in Fig. 9.The populations of the two states, G t and G s , are coupled to each other by the DM interaction (assumed to be positive for concreteness), which breaks the parity symmetry.Population flows from the triplet state to the singlet state when Re G ts is positive, and in the opposite direction when it is negative.The dynamics of Re G ts depend on the relative populations of the singlet and triplet states and are further coupled to Im G ts through the symmetric exchange coupling J s .In Fig. 10, we present the entanglement dynamics in two different scenarios.In the first case, we fix the symmetric exchange coupling J s to be comparable to the local decay rate and gradually vary the strength of the DM interaction D. Fig. 10 (a) demonstrates that as D increases, the maximal entanglement that can be achieved in the time evolution increases.This is what we expect since more probability flows to the singlet state before significant decoherence occurs, as illustrated in the plots for G s (t) and G t (t) in Fig. 10 (c) and (d), respectively.The green curves in (c) and (d) indicate that G s reaches its first peak whereas G t reaches its first valley after a certain time.For the red curves in (c) and (d), G s reaches the peak at a later time and the peak is lower than the green curve.Additionally, the entanglement oscillation frequency increases as we increase D, which is understandable as the coherent interaction generally leads to oscillation in the system (i.e., shuffle the information back and forth), as observed in Fig. 10 (b), (c), and (d), where all elements oscillate faster.Finally, we observe that the generated entanglement has a shorter lifetime, which is easy to understand since the DM interaction can bring the singlet state (leading to the residual entanglement) that decays slower to the triplet state that decays much faster.This is also evident in Fig. 10(c), where G s decays much faster as the DM interaction increases.
In the second scenario, we assume that the DM interaction D is comparable to the local decay rate.Surprisingly, we observe that a increase in the symmetric exchange interaction does not lead to a significant increase in the maximum entanglement, as illustrated in Fig. 10 (e).This is because the symmetric exchange interaction directly couples Re G ts to Im G ts , whereas the maximum entanglement depends on the peak that G s can reach, which is mainly determined by the value of D. However, increasing J s does lead to a faster oscillation in all elements, including the entanglement between the two qubits, as we can see in Fig. 10 (f), (g) and (h).
5.
Pure classical noise In Section IV of the main text, we mention that pure classical noise does not lead to any interesting dynamics.To illustrate this point, we first note that in the absence of any quantum noise, both symmetric exchange and DM interactions are absent, and the decay and absorption rates are equal, since their asymmetry is rooted in the quantum noise.We introduce the notations γ ≡ γ ↑ = γ ↓ for local decay and absorption, and γ12 ≡ γ Starting from these equations, it is straightforward to verify that the entanglement remains zero if the system is initialized to a product state.To demonstrate the impact of correlated classical noise on the decoherence process, we consider a specific initial state, the Bell state |ψ 0 ⟩ = (|↑↓⟩+i |↓↑⟩)/ √ 2. Figure 11 displays the entanglement decay for different strengths of correlated classical noise.Although the presence of classical noise modifies the decoherence process slightly, it does not introduce any new features.Therefore, in the main text, we concentrate on the quantum regime where both correlated classical and quantum noise coexist.In this section, we provide the detailed derivations for some results presented in Sec.V in the main text.We first consider the case of purely classical 1/f noise and then consider the scenario where the quantum 1/f noise is comparable to the classical one.We assume the correlated noise is comparable to the local noise and is real valued S 12 (ω) = S ii , as we assumed in the main text.
Correlated classical 1/f noise.In the presence of purely classical 1/f noise, the coherent coupling is absent.The decay and absorption rates take the following form from Eq. (B25): (E2) which is the equation (36) in the main text.One surprising feature is that the above decoherence rate can be temporarily negative.Here we have introduce the low frequency cutoff ω l for the 1/f noise.For the purely classical 1/f noise, the system is governed by the same set of equations (D31) with γ = γ12 = γ(t).First, we can obtain the expression for x(t) and y(t) easily as they are decoupled from other elements: x(t) = x(0)e −2Γ(t) , and y(t) = y(0)e −2Γ(t) , (E3) where we have introduced Γ(t) = t 0 ds γ(s).We now are interested in how the entanglement decays with the decoherence rate γ(t).To be specific, we assume the initial state is a Bell state |ψ 0 ⟩ = (|↑↓⟩ + i |↓↑⟩)/ √ 2, or equivalently, G t (0) = G s (0) = y(0) = 1/2 (other elements vanish).By using the symmetry between the absorption and decay processes, we conclude that G 11 (t) = G 44 (t).In the case of large correlated noise (comparable to local noise), G s (t) = 1/2 remains to be a constant.From the fact tr ρ = 1, we have the relation G t (t) = 1/2 − 2G 11 .We can deduce the equation for When the initial state is a trivial product state, for example |↑↓⟩ (namely, G t (0) = G s (0) = y(0) = 1/2), one can also show that the entanglement remains to be zero with the pure classical 1/f noise.Correlated quantum 1/f noise.In the presence of correlated quantum noise S Q ≈ S C , the coherent coupling J is finite (we assume the spectral density is real), which is evaluated to be: sin Ωt, (E6) where we have taken the principle value of the integral.This is Eq. ( 42) in the main text.As we discussed in the main text, we approximate the absorption rate with zero and the decay rate is evaluated to be Eq.(40).In this case, the two-qubit dynamics is governed by the same set of equations in Eq. (D18) but with J s = J (t), γ ↓ = γ ↓ and D = 0. We again consider two initial states: one is the Bell state |ψ 0 ⟩ and the other one is the product state |↑↓⟩ .In both cases, we can show that G s (t) = 1/2 and G t (t) = e −2Γ ↓ (t) /2 with Γ ↓ (t) = t 0 dsγ ↓ (s).For the dynamics of Re G ts and Im G ts , we introduce X(t) ≡ xe Γ ↓ (t) and Y (t) ≡ ye Γ ↓ (t) .One can show that they are described by the following equations: d dt from which we can obtain: [X(t), Y (t)] T = U (t)[X(0), Y (0)] T with the rotation matrix: where Φ(t) = t 0 ds J (t).When the initial state is |↑↓⟩, we have initial condition X(0) = 1/2 and Y (0) = 0 and obtain: In this case, the entanglement is given by Eq. ( 45) in the main text.When the initial state is the Bell state |ψ 0 ⟩, we have X(0) = 0 and Y (0) = 1/2, which gives us This leads to the entanglement in Eq. (43).So far, we have focused on the quantum regime, and it was shown in the main text that the final entanglement in this case is 1/2.However, we also wish to examine the final entanglement as a function of temperature, or equivalently, the ratio between the quantum noise and classical noise, by invoking the following relation To this end, we set ρ = 0 and t → ∞ [then the dynamics is governed by Eq. (D3) with all coefficients being constant in this limit].We still assume the correlated noise is comparable to the local noise (otherwise, one can show the entanglement will eventually decay to zero).Then is generally asymmetric in frequency, |S ij (ω)| ̸ = |S ji (−ω)|.Its positive-and negative-frequency components are linked through the Boltzmann factor, S ij (ω) = e βℏω S ji (−ω), which reflects the quantum nature of the noise, as indicated by the non-zero commutator [E 1 (t), E 2 ] ̸ = 0.One can interpret the positive-frequency
FIG. 2 .
FIG. 2. Correlated noise.(a) Density plot of the ratio between the spatially correlated noise S12 and the local noise Sii as a function of frequency ω and the distance d between two qubits in two dimensions.The plot shows that for qubit frequencies in the few GHz regime, the correlated noise is as strong as the local one when two qubits sit within micrometers, and it oscillates and decays to zero as the distance increases.(b) The ratio S12/Sii as a function of qubit distance in different dimensions, where we have taken ω/2π = 1 GHz.The correlated noise exhibits similar behavior in different dimensions, and obeys the constraint S12 ≤ Sii.In both plots, we used ω k = cs|k| and cs = 5 km/s.
FIG. 3 .
FIG. 3. Dephasing of two qubits subjected to spatially correlated noise.(a) Entanglement decay as a function of time for different initial states.The red curve corresponds to the case where only local noise is present, showing that both scenarios [with initial states being Bell states |Ψ+⟩ = (|↑↓⟩ + |↓↑⟩)/ √ 2 and |Φ+⟩ = (|↑↑⟩ + |↓↓⟩)/ √ 2 ] decay at the same rates.The blue and green curves show the entanglement decay with initial states |Ψ+⟩ and |Φ+⟩, respectively, in the presence of correlated classical noise.We set the quantum noise to zero and choose a phase of θ = π/3 for the correlated noise power spectral density S12(ω).(b) Density plot of the entanglement as a function of time and the angle θ with initial state |++⟩ (|+⟩ is defined by σ x |+⟩ = |+⟩).The correlated quantum noise is real and enters the qubits dynamics through the coherent coupling when θ = 0, π, and is purely imaginary and enters the dynamics through the correlated dephasing when θ = π/2, 3π/2.The correlated quantum noise does not generate entanglement through the correlated dephasing, but it generates it via the noise-induced Ising coherent coupling J z .Parameters used in the plot: ℏ/σ = 500 ns and ω l /2π = 1 MHz.
FIG. 4 .
FIG. 4. Entanglement dynamics of resonantly driven qubits with initial state |↑↓⟩.(a)-(c) Qubit dynamics in the absence of DM interaction D. (a) Schematic for the coupled dynamics of relevant density matrix elements.In the absence of the DM interaction D = 0, the dynamics of Gs and Gt are decoupled from each other, decaying independently at rates of γ ↓ − γ ↓ 12 and γ ↓ + γ ↓ 12 , respectively.The real and imaginary parts of Gts are coupled to each other through the symmetric coupling Js, while they decay at the same rate γ ↓ .(b) The upper panel illustrates the decay of the superradiant state |T ⟩ and the subradiant state |S⟩, which is independent of the parameter Js.The lower panel shows the oscillations of the real and imaginary parts of Gts with frequency 2Js, where we use parameter Js = 5γ ↓ .(c) Entanglement quantified by the concurrence C[ρ(t)] between two qubits as a function of time with varying parameter Js.The oscillation frequency of the entanglement is ∝ Js.The entanglement is bounded below by |Gs − Gt| at any time.At large time t ≫ 1/γ ↓ , the oscillation is insignificant and the dynamics is dominated by the local and correlated noise.(d)-(f) Qubit dynamics in the absence of symmetric interaction.(d)The dynamics of Gt and Gs are coupled to each other via Re Gts, in the presence of the DM interaction, while Im Gts is decoupled from other elements.Assuming D > 0 without loss of generality, the probability in |S⟩ would flow to Gt when Re Gts < 0, whereas the probability flows in the opposite direction when Re Gts > 0. The changing rate of Re Gts is determined by the difference between Gt and Gs, Re ∂tGts = −γ ↓ Re Gts + D(Gt − Gs).(e) The upper panel shows the oscillations of Gt and Gs when D = 5γ ↓ .The lower panel shows the their time evolution when the DM coupling is small D = 0.45γ ↓ , where we do not see the oscillation behavior as the dynamics is overdamped.(f) Entanglement between two qubits as a function of time with varying strength of DM coupling D. The green curve shows the oscillation of entanglement, where the DM coupling is large and the dynamics is underdamped.The red curve is the critical point, where the entanglement stops oscillating and behaves as ∝ te −γ ↓ t .The blue curve is when the DM coupling is small, where the dynamics is overdamped.Parameters used in all figures: γ ↓ = 1 µs −1 and γ ↓ 12 = 0.9γ ↓ .
FIG. 5 .
FIG. 5. Entanglement dynamics of two driven qubits under DM interaction with initial state |↑↓⟩.The plot shows the time evolution of entanglement as a function of DM interaction strength D. The dashed orange line at D = 0.45γ ↓ separates the underdamped and overdamped regimes.When D is above the dashed line, the entanglement exhibits oscillations with increasing frequency as D increases.In contrast, below the dashed line, the entanglement decays on a longer timescale and does not exhibit oscillations.Parameters we use in the figure are γ ↓ = 1 µs −1 and γ ↓ 12 = 0.9γ ↓ .
FIG. 6 .
FIG. 6. Entanglement dynamics of two driven qubits under pure classical 1/f noise.(a) Temporal evolution of the decay rate γ(t) depicted as a function of time, where the occurrence of negative values is identified within the marked purple intervals.However, the integral of the decay rate Γ(t), as highlighted in the inset of (a), must remain nonnegative to ensure the complete positivity of the dynamics.(b) Entanglement between two qubits as a function of time with the initial state being a Bell state |ψ0⟩ = (|↑↓⟩ + i |↓↑⟩)/ √ 2, and the inset shows the entanglement with initial state |↑↓⟩.Parameters that are used in the plots: ℏ/σ = 100 ns, ω l /2π = 1 MHz, and Ω/2π = 1 GHz.
FIG. 7 .
FIG. 7. Analysis of driven qubit dynamics in the presence of quantum noise.(a) The upper panel depicts the time-dependent decay rate γ ↓ , which can take negative values for certain time intervals (indicated by purple shading).Its time integral Γ ↓ is positive at all times to ensure the complete positivity of the dynamics.The lower panel shows the coherent coupling J between two qubits as a function of time.(b) Time-dependent entanglement for a maximally entangled initial state |ψ0⟩ = (|↑↓⟩+i |↓↑⟩)/ √ 2.The blue shaded area demonstrates the impact of quantum noise, with the blue and purple curves representing cases with and without quantum noise, respectively.The inset displays the final steady-state entanglement as a function of temperature, which is zero in the classical limit and 1/2 in the quantum regime.(c) Entanglement evolution as a function of time for an initial state |↑↓⟩.The effect of quantum noise is illustrated by the blue shaded region (the entanglement remains at zero when only classical noise is present).The inset reveals a final entanglement value of 1/2 with ℏ/σ = 3 ns within a time ∼ 10 ns.The plotted results are obtained using the following parameters: ℏ/σ = 100 ns, ω l /2π = 1 MHz, and Ω/2π = 1 GHz.
FIG. 10.Entanglement dynamics of resonantly driven qubits with initial state |↑↓⟩ in the presence of both symmetric exchange and DM interaction.(a)-(d) Qubits dynamics with the fixed symmetric exchange coupling Js = γ ↓ and varying DM interaction D. (a) The entanglement measured by the concurrence C[ρ(t)] between two qubits is plotted as a function of time for varying strengths of D. As D increases, the maximum entanglement also increases and the oscillation frequency of entanglement becomes faster.(b) The imaginary part of Gts is shown as a function of time, which exhibits faster oscillations as D increases.Furthermore, the magnitude of Im Gts is suppressed with an increase in D. (c) The time-dependent function Gs is plotted, which displays a larger value at short times for higher values of D, but has a smaller value overall with increasing D. (d) The time-dependent function Gt is plotted, which exhibits both faster oscillations and larger magnitudes with increasing D. (e)-(h) Qubits dynamics with the fixed DM interaction D = γ ↓ , and varying symmetric exchange coupling Js.(e)The entanglement between two qubits is plotted as a function of time, and it exhibits faster oscillations for larger values of Js.Moreover, as Js increases, the magnitude of entanglement increases, but the amount of increase is smaller than in (a).(f) The imaginary part of Gts is shown as a function of time, which exhibits faster oscillations and larger amplitude as Js is increased.(g) The time-dependent function Gs is plotted and displays faster oscillations for larger Js.Additionally, the magnitude of Gs is suppressed at shorter times but achieves a larger value at longer times as Js is increased.(h) The time-dependent function Gt is plotted, which exhibits faster oscillations and a suppressed magnitude as Js is increased.Parameters used in all figures: γ ↓ = 1 µs −1 and γ ↓ 12 = 0.9γ ↓ .
FIG. 11 . 2 .
FIG. 11.Dynamics of entanglement between two qubits as a function of time in the presence of pure classical Markovian noise.The system is initialized in the Bell state |ψ0⟩ = (|↑↓⟩+i |↓↑⟩)/ √ 2. The curves correspond to the entanglement decay under different spatially correlated noise conditions.It is observed that the presence of correlated classical noise results in a slight modification of the decoherence rate, but does not introduce any new features.Parameters used in the plot: γ = 1 µs −1 .
Appendix E: Correlated classical and quantum 1/f noise
(31)C I ≡ 2 Im G ts ,(31)which indicates that the entanglement of the qubits has two independent contributions (see Appendix D 1 for detailed derivations).To gain some physical understanding of this expression, we observe that the first contribution arises from the asymmetry in the populations of the triplet state |T ⟩ and singlet state |S⟩, which is proportional to |↑↓⟩ ⟨↓↑|+|↓↑⟩ ⟨↑↓|.The second contribution comes from a finite imaginary part of G ts , which is proportional to |↑↓⟩ ⟨↓↑|−|↓↑⟩ ⟨↑↓|.Both indeed characterize the coherence (superposition) of |↑↓⟩ and |↓↑⟩. | 19,139.4 | 2023-08-06T00:00:00.000 | [
"Physics"
] |
Functional Significance of the Central Helix in Calmodulin*
The 3-A crystal structure of calmodulin indicates that it has a polarized tertiary arrangement in which calcium binding domains I and I1 are separated from domains I11 and IV by a long central helix consisting of residues 66-92. To investigate the functional significance of the central helix, mutated calmodulins were engineered with alterations in this region. Using oligonucleotide-primed site-directed mutagenesis, Thr-79 was converted to Pro-79 to generate CaMPM. CaMPM was further mutated by insertion of Pro-Ser- Thr-Asp between Asp-78 and Pro-79 to yield CaMIM. Calmodulin, CaMPM, and CaMIM were indistinguish- able in their ability to activate calcineurin and Ca2+-ATPase. All mutated calmodulins would also maxi- mally activate cGMP-phosphodiesterase and myosin light chain kinase, however, the concentrations of CaMPM and CaMIM necessary for half-maximal activation (ICaet) were 2- and 9-fold greater, respectively, than CaM23. Conversion of the 2 Pro residues in CaMIM to amino acids that predict retention of helical secondary structure did not restore normal calmodulin activity. To investigate the nature of the interaction between mutated calmodulins and target enzymes, syn- thetic peptides modeled after the calmodulin binding region of smooth and skeletal muscle myosin light chain kinase were prepared and used as inhibitors of calmodulin-dependent cGMP-phosphodiesterase. The data suggest that the
calcium binding proteins which includes troponin C, parvalbumin, and calbindins. Some of these calcium binding proteins, such as CaM and troponin C, serve as tranducers of calcium signals. Both of these proteins modulate the function or activity of target proteins via calcium-dependent alterations in protein-protein interactions. At the molecular level, information required for transmission of the calcium signal is encoded by the spatial arrangement and dynamic properties of complementary recognition domains in the calcium-binding protein and its target protein.
The analogous mechanism of action of CaM and troponin C may have provided the selective pressure to maintain the very similar tertiary structure shared by these homologous proteins (1)(2)(3). Each protein consists of four calcium-binding domains that conform to the helix-loop-helix or EF-hand motif originally observed in parvalbumin (4). In both proteins, calcium binding domains I and I1 are separated from domains I11 and IV by a long a-helix located in the central region of the proteins. Two other calcium binding proteins with known crystal structures, parvalbumin and the 7.5-kDa form of calbindin, also contain EF-hand calcium binding domains but do not have the elongated dumbell shape of CaM and troponin C (4, 5). This suggests a basic difference in the mechanisms of action of these two types of calcium binding proteins. Indeed, parvalbumin and the 7.5-kDa calbindin have not been demonstrated to have activator activity. The similar tertiary motif of CaM and troponin C may be typical of a class of calcium "switch" proteins in which the specificity of the switch is defined by variations in protein recognition domains within this general structural framework. For example, the amino-terminal helix in troponin C, which is absent in CaM, may contribute to functional divergence.
The structure of CaM and how it encodes functional information is of particular interest since CaM regulates a variety of proteins and enzymes. A series of studies using biochemical (6-9) and protein engineering (10-13) techniques have shown that CaM contains multiple functional domains that selectively interact with target enzymes. In this report we have mutated the central helix of CaM to investigate its role as a structural element and a protein-protein interactive site and show that its length and not composition appears more important for the activation of selected enzymes. The data are also consistent with our previous classification of CaM-dependent enzymes based on their interactions with bacterially synthesized CaM-like proteins (10, 11).
MATERIALS AND METHODS
Plasmid Construction-Mutation of CaM was accomplished using oligonucleotide-primed site-directed mutagenesis of the CaM expression plasmid pCaM23 (10) as outlined in Fig. 1. In step 1 a 274-base pair EcoRIIPstI fragment from pCaM23 was subcloned into the bacteriophage M13mp18 to obtain a single stranded DNA template molecule, phTemplate-1. In step 2, primer 1 was used to convert Ser-79 to Pro and also generate a BamHI for screening and as a site for 11242 insertion mutation. Site-directed mutagenesis was performed essentially as described by Zoller and Smith (14), 0.5 pmol of single stranded DNA template and 10 pmol of the mutagenic oligonucleotide were heated to 55 "C for 10 min and cooled slowly to room temperature in 10 p1 of annealing buffer containing 20 mM Tris, pH 7.5, 10 mM MgC12, 50 mM NaC1,l mM DTT. The annealed DNA was then diluted with 10 pl of 20 mM Tris, pH 7.5, 10 mM MgC12, 10 mM DTT, 1 mM ATP, 0.8 mM dNTPs, 1 unit of Klenow fragment DNA, 10 units of T4 DNA ligase. The reaction was incubated at 14-15 "C for 5 h, after which 1 r l was used to transform JM103 cells by the procedure of Hanahan (15). To screen for the desired mutation, step 3, the transformed population of bacteria was diluted to 5 ml with L-broth and incubated overnight at 37 "C. Bacteria from 1.5 ml of culture were collected by centrifugation and used to isolate replicative form DNA by the alkaline minilysate procedure (16). An aliquot of the mixed population of phage DNA was digested with BamHI, run on a 1% low gelling agarose gel (Bio-Rad), and stained with ethidium bromide, 0.5 pg/ml. Linearized DNA, which migrated faster than circular phage DNA, was identified and excised from the gel. The gel piece was diluted 5-fold with 10 mM Tris, pH 7.5, 1 mM EDTA, and heated at 70 "C with occasional vigorous mixing until melted. DNA in 2 r l of the melted agarose solution was circularized by ligation and used to transform JM103. Individual plaques were picked for the isolation of double and single-stranded DNA. After confirming the presence of predicted restriction endonuclease sites in double-stranded DNA the corresponding single stranded DNA was used as template for DNA sequencing by the dideoxy chain termination method (17). Recombinant phage containing the desired point mutation was called ph-CaMPM. In step 5, a 142-base pair AccIIPstI fragment from ph-CaMPM was subcloned into the corresponding sites of pCaM23 to yield phtermediate-1. To complete the amino acid coding region of CaMPM a 700-base pair PstI fragment from pCaM23 was subcloned into the unique PstI of phtermediate-1 to yield pCaMPM.
A CaM insert mutation was generated using cassette mutagenesis of pCaMPM as outlined in steps 5 and 6 of Fig. 1. pCaMPM, which has two BamHI sites, was partially digested with BamHI, dephosphorylated with bacterial alkaline phosphatase, and ligated with a BamHI fragment from pUC4K (Pharmacia LKB Biotechnology Inc.) that contains a kanamycin resistance marker within an inverted cloning cassette. Ligation products were used to transform JM109, and kanamycin-resistant colonies were picked for restriction endonuclease analysis of their plasmid DNA. The desired plasmid, phtermediate-2, was digested with SalI, re-ligated, and used to transform JM103 cells. The resulting plasmid called pCaMIM is identical to pCaMPM with the exception of an additional 12 nucleotides that encode 4 amino acids between Asp-78 and Pro-79 of CaMPM.
Steps 8-12 of Fig. 1 outline procedures that convert the 2 Pro residues in CaMIM to helix-forming amino acids. An EcoRI/PstI fragment from pCaMIM was first subcloned into M13mp18 to yield template DNA, phTemplate-2. Mutation of phTemplate-2 was performed in step 9 as described above using primer 2. In step 10, replicative form DNA was isolated from a mixed population of transformed bacteria, digested with BamHI, and used directly to transform JM103. Since replicative form DNA that have mutations in both Pro codons will be resistant to BarnHI digestion, they will remain circular and have a much higher efficiency of transformation. Phage ph-CaMIM-TQ, phCaMIM-KQ, and phCaMIM-QQ were identified first by restriction endonuclease analysis and then by DNA sequencing. In step 11 EcoRIIPstI fragments from the replicative form DNA of these three phage were subcloned into pCaM23 that had been digested with EcoRI and partially digested with PstI. In step 12 the CaMcoding region from phtermediates-3, -4 and -5 were isolated after digestion with EcoRI and partial digestion with PstI. These fragments were subcloned into pCaMPL which had been digested with PstI and partially digested with EcoRI. pCaMPL is a derivative of pCaM23N (18) in which the tac promoter has been replaced with a heat-sensitive P L promoter (19).
Protein Isolation and Enzyme Assays-Bacterially synthesized CaMs were isolated by phenyl-Sepharose chromatography as described previously (10, 18). As an additional purification step the isolated proteins were bound to a Waters DEAE-5PW high performance liquid chromatography column and eluted with a NaCl gradient in a buffer of 50 mM Tris, pH 7.5, 0.2 mM EDTA. For those experiments where a change of buffer was required, the proteins were either desalted into the appropriate buffer by gel filtration using Sephadex G-25, or subjected to 6-8 successive rounds of concentration and dilution using Amicon Centricon P-10 microconcentraters.
For cGMP-phosphodiesterase assays, CaM23 concentration was determined by amino acid analysis, and the concentration of CaM mutants was determined by the method of Bradford (20) using CaM23 as a standard. The concentration of CaM binding peptides was determined by absorption at 278 nm using molar extinction coefficients of 5556 and 5554 for skeletal and smooth muscle isoforms of myosin light-chain kinase, respectively. CaM-deficient phosphodiesterase was purified from bovine brain by a modification of the method of Sharma et al. (21) up to the CaM-Sepharose 4B chromatography step. The eluant from the CaM-Sepharose 4B column was dialyzed overnight against 20 mM Tris-HCl, pH 7.5, 1 mM magnesium acetate, 10 mM 2-mercaptoethanol, 100 mM NaCl. Bovine serum albumin was added to a final concentration of 1 mg/ml, and the enzyme preparation was frozen in liquid nitrogen until use. The purified phosphodiesterase was stimulated 10-fold by saturating concentrations of CaM.
Phosphodiesterase was assayed by a modification of the procedure of Thompson et al. (22). The reaction was performed in a volume of 0.25 ml containing 50 PM phosphodiesterase and varying concentrations of CaM in a buffer of 40 mM Tris-HCI, pH 8.0,5 mM magnesium acetate, 1 mM CaC12,0.03 mM cGMP, 0.15 pCi of [8-3H]cGMP, 1 mM DTT, 640 pg/ml bovine serum albumin. All reactions were carried out in polypropylene tubes. The reaction was initiated by addition of cGMP, incubated at 30 'C for 40-60 min, and terminated by boiling for 3 min followed by the addition of 25 p1 of 1 mg/ml snake venom (Crotalus atrox, Sigma) and an additional incubation at 30 "C for 10 min. After addition of 25 pl of 10 mM guanosine, unreacted cGMP was adsorbed by the addition of 0.5 ml of a 50% (v/v) AG 2-X8 resin in 30% ethanol. The slurry was separated by centrifugation (850 X g for 10 rnin), and 200 p1 of the supernatant was counted in 10% Beckman BioSolve-Spectrofluor. Percent activation was calculated as described previously (23), and Kapp was calculated from a Hill plot of the data.
ICso values for the CaM-binding peptides were determined by a competitive enzyme assay using phosphodiesterase. The reaction mixture contained appropriate concentrations of the binding peptide, CaM (either 1.7 nM CaM23; 3.4 nM CaMPM, or 16.5 nM CaMIM), 20% glycerol, and the phosphodiesterase reaction mixture as described above.
Chicken gizzard myosin light chains and myosin light chain kinase were isolated as described previously (24,25). Myosin light chains were chromatographed on phenyl-Sepharose CL-4B to reduce contaminating CaM (26). The assay was performed in a 0.1-ml volume containing 50 mM HEPES, pH 7.6, 100 mM KCl, 10 mM MgC12, 1 mM CaC12,l mM DTT, 0.2 mM ATP (6-8 X lo6 cpm of [y3'P]ATP), 0.05 mg/ml bovine serum albumin, 0.02 mM myosin light chains, 8 X lo-' mM myosin light chain kinase, and the indicated amount of activator. The mixture was incubated at 30 "C for 20 min, the reaction was terminated and incorporation of 32P into myosin light chains was determined as described previously (27).
Calcineurin was isolated and assayed as described previously (6,28). The assay was conducted at 28 "C and contained 20 mM Tris, pH 8.0, 100 mM NaCl, 6 mM MgCl,, 0.5 mM CaCl,, 0.5 mM DTT, 5 mM p-nitrophenyl phosphate, 0.1 mg/ml bovine serum albumin, lo-' M calcineurin, and the indicated amount of activator. Maximal stimulation of enzyme activity in the presence of saturating amounts of activator was 18-fold over basal activity in the presence of EGTA and 8-fold over the activity in the presence of Ca2+ but no activator. Erythrocyte ATPase was isolated and assayed as described by Niggli et al. (29).
Spectral Measurements-Tyrosine fluorescence measurements were made using an Aminco SPF-BOO Ratio Spectrofluorimeter. Proteins were desalted into a buffer of 50 mM Tris, pH 7.5, 150 mM NaC1, 0.2 mM EGTA and adjusted to a protein concentation of 0.2 mg/ml. Calcium standard solutions of 100, 20, 5, and 2 mM CaC12 were prepared from a 100 mM CaCl, standard (Orion). Calcium from the standards was added sequentially in 2-pl aliquots to 2 ml of protein solution. Addition of the four standards was selected to achieve a uniform increase in tyrosine fluorescence. After completion of the titration, the volume change due to calcium addition was less than 3%. Free calcium concentrations were computed based on the total calcium added and the calcium dissociation constants of EGTA (30).
CD measurements were taken on a JASCO-500 Spectropolarimeter. Proteins were dialyzed against 50 mM HEPES, pH 7.5, 0.1 mM EGTA and adjusted to an A278 of 0.11. Samples were made 100 mM in KC1 and scanned from 260 to 185 nm in the presence or absence of 1 mM CaCI2.
RESULTS
The amino acid changes introduced into CaM by the procedures outlined in Fig. 1 are summarized in Fig. 2, panel A. Bacterially synthesized CaM23 has the sequence of vertebrate CaM and is expressed from a plasmid containing the chicken calmodulin cDNA. Despite the absence of an acetylated amino terminus and trimethylation of Lys-115, CaM23 has been shown to be physically and functionally identical to naturally occurring calmodulin by all criteria tested thus far (10, 11) and is used as a control in this study. CaMPM contains a single point mutation in which Thr-79 is changed to a Pro residue. CaMIM is a derivative of CaMPM and has Pro-Ser-Thr-Asp inserted between Asp-78 and Pro-79. CaMIM-T (Thr), CaMIM-TQ (Thr and Gln), and CaMIM-KQ (Lys and Gln) are all derivatives of CaMIM in which one or both Pro residues are converted to the indicated amino acids. Multiple substitution of the Pro residues was accomplished using a mixture of oligonucleotide primers. The oligonucleotide was designed to replace the second Pro with Thr, Gln or Lys; however, only Gln was obtained. This probably reflects a Although necessarily imprecise, secondary structure in the mutant proteins was predicted by the method of Gamier et al. (31) and is summarized in Fig. 2, panel B. This calculation assigns a numerical value for the probability that a given amino acid will be in either a random coil (C), turn (T), 0sheet (S), or a-helix ( H ) secondary structure. The program accurately predicts the non-helical regions of CaM23 which constitute the four Ca2+ binding loops and the non-helical region that connects domains I and 11. The central region in CaM23 is assigned a helical configuration; however, the numerical value for Thr-79 indicates that this helix is not strongly favored. Analysis of the central region of CaMIM indicates that Lys-77 through Asp-80, including the 4-amino acid insertion, probably assume a non-helical conformation in the protein. Conversion of the Pro residues to either polar or charged amino acids predicts a more mild disruption of the a-helix with CaMIM-KQ predicted to retain an a-helical secondary structure across the insertion mutation. Similar to
FIG. 3. Electrophoretic mobility of calmodulin.
Purified proteins were resolved on 15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis gels (29.2% acrylamide to 0.8% bisacrylamide) using buffers described by Laemmli (32). Proteins were solubilized in sodium dodecyl sulfate sample buffer containing either 5 mM EDTA (-Calcium) or 5 mM CaC12 (+Cafciurn). The gel and running buffers contained no added EDTA or CaC12. The relative molecular mass of the standard proteins (Std) are listed on the left in kilodaltons.
CaM23, helical conformation of the central region in CaMIM-KQ is not strongly favored.
The purity and electrophoretic mobility of the mutant CaMs is shown in Fig. 3. All proteins exhibited an equivalent Ca2+-dependent decrease in apparent molecular weight. Proteins with insert mutations migrated slightly slower than CaM23 and CaMPM in the presence and absence of Ca2+, consistent with an additional 4 amino acids. CaMIM-T, CaMIM-TQ, and CaMIM-KQ all co-migrated with CaMIM in the presence and absence of calcium (data not shown).
In an effort to detect potential differences in secondary structure, calcium-dependent changes in tyrosine fluorescence and CD spectra were compared for CaM23, CaMPM, and CaMIM. Fig. 4 shows that intrinsic tyrosine fluorescence is not affected by the amino acid changes in the central helix of CaM. This was not unexpected since the 2 tyrosine residues in CaM are located in domains I11 and IV at positions 99 and 138. Fig. 5, panels A and B, show the CD spectra of CaM23 and CaMPM to be indistinguishable in the presence and absence of calcium. Panels C and D show the magnitude of the mimima in the spectrum for CaMIM is about 10% greater relative to CaM23. This suggests that the sum of secondary structures between these two proteins is different but does not identify the nature of the difference. A recent paper by Hennessey et al. (33) suggests that, at high ionic strength, calcium-dependent changes in the secondary structure of CaM are due to a reorientation of helices rather than an increase in helical content. The differences in spectra between CaM23 and CaMIM may reflect general differences in the organization of secondary structures. Fig. 6 shows the activation of calcineurin and Ca2+-ATPase by CaM23, CaMPM, and CaMIM. Activation characteristics of both enzymes were unaffected by mutations in the central helix of CaM. In contrast, Fig. 7 shows that the activation characteristics of cGMP-phosphodiesterase and myosin light chain kinase are both influenced by alteration in the central helix of CaM. A summary of multiple experiments with cGMP-phosphodiesterase and myosin light chain kinase is shown in Table I. Although both enzymes are maximally stimulated by all bacterially synthesized CaMs, the concentrations of CaMPM and CaMIM necessary for half-maximal activation (Kaa) are approximately 2-and 9-fold greater, respectively, than CaM23. Although the apparent ICad values from two experiments with myosin light chain kinase differ, the relative values shown in brackets in Table I are very similar. Inter-assay variability is most likely due to effects of storage on enzyme and substrate.
The aberrant ability of CaMIM to activate cGMP phosphodiesterase and myosin light chain kinase could be due to a disruption of secondary structure or a lengthening of the central region in CaM by 4 amino acids. To approach this Activation constants for activation of cGMP-phosphodiesterase and myosin light chain kinase by CaM23, CaMPM, and CaMZM The following numbers represent the apparent K,,, (nanomolar) for enzyme activation by the indicated CaM. K,,, is defined as the amount of CaM required for half-maximal activation under standard assay conditions. The values for phosphodiesterase are the average f S.D. of n determinations (in parentheses). The values for myosin light chain kinase represent two separate experiments and the error values are derived from computer fits of the data. The numbers in brackets are the -fold increase in Kact for CaMPM and CaMIM relative to CaM23. 9. Inhibition of cGMP-phosphodiesterase activity by the CaM-binding peptide (SrnK) from smooth muscle myosin light chain kinase. The assay was performed as described under "Materials and Methods" but in the presence of varying concentrations of inhibitor peptide. Activator, enzyme, peptide, and calcium were added together prior to the addition of substrate. The amount of activator in each assay is shown in Table I1 and was adjusted to yield equivalent initial enzyme activities. The data shown is for inhibition by smooth muscle kinase. ICs0 values for both smooth and skeletal muscle kinase (SkK) are given in Table 11.
question, a series of mutants were generated in which 1 or both of the introduced Pro residues were changed to charged or polar amino acids (Figs. 1 and 2) in an attempt to retain helical secondary structure. Fig. 8 shows that cGMP-phosphodiesterase is activated identically by CaMIM, CaMIM-T, CaMIM-TQ, and CaMIM-KQ. Although secondary structure in the mutated CaMs may differ from computer predictions, the data demonstrate that the functional abnormalities of CaMIM are not due to the disruptive influence of Pro residues. The activation of target enzymes by CaM involves complex macromolecular interactions including the binding of CaM to the enzyme. To investigate the nature of the interaction between mutated calmodulins and myosin light chain kinase, peptides modeled after the CaM-binding sites in both smooth and skeletal muscle isoforms of myosin light chain kinase (34,35) were synthesized and used as inhibitors of activation of cGMP-phosphodiesterase. Dose-response curves for the inhibition of cGMP-phosphodiesterase by smooth muscle kinase are shown in Fig. 9 and IC5o values for both smooth and skeletal muscle kinase are summarized in Table 11. In all experiments the concentrations of CaM23, CaMPM, and CaMIM were adjusted to achieve equivalent initial enzyme activities. Although the ICso values in Table I1 differ for the three CaMs, the numbers are proportional to the concentration of activator present in the assay and the K,,, values shown in Table I. When corrected for varying CaM levels, the relative ICso values show no apparent difference. We would interpret these data to indicate that the synthetic peptides interact similarly with the three CaM proteins.
DISCUSSION
A long central helix has been observed in both CaM and fast skeletal muscle troponin C from structural analysis or the crystal strucutes (1)(2)(3). Although thermodynamic arguments do not favor the stability of an eight-turn a-helix that is exposed to solvent, experimental evidence not only support the existence of the central helix but also calcium-dependent conformational changes in this region. It is proposed that solvent-exposed helices in troponin C are stabilized by intrahelical electrostatic and salt-bridge interactions between the basic and acidic amino acid side chains (36). Vacuum-UV CD measurements in the presence of calcium predict a helical content for CaM that is in good agreement with the crystal structure (33), and they predict that binding of calcium by CaM a t physiologic ionic strength induces a reorganization of the helices rather than a change in helical content. Fluorescence anisotropy measurements on the dityrosine derivative of CaM in the presence of high calcium concentrations suggest that protein has a length equivalent to that predicted by the crystal structure (37) and that in the absence of calcium the protein appears more compact and exhibits segmental motion.
Dynamic changes in the central helical region of CaM is an attractive model that would explain not only spectral data but also calcium-dependent differential accessibility of this region to both proteases (38) and acetic anhydride (39). Dynamic changes could involve a collapse of the amino-and carboxyl-terminal lobes to shield the central helix as might be predicted from the crystal structure of troponin C and/or an increase in the conformational flexibility of this region. These are attractive models; however, it must be appreciated that structural studies of isolated CaM may not fully represent its biologically active conformation when complexed with its target enzymes. For example, binding of melittin and peptides derived from myosin light chain kinase induce conformational changes in both halves of CaM as determined by NMR analysis (40, 41) and binding of CaM to target proteins increases its affinity for Ca2+ (42). This presents the possibility that interactions of CaM with target enzymes may stabilize conformations that do not predominate in solution. Crystallization of CaM may approximate interactions with target enzymes.
Assuming that the central helix in CaM does exist either in the Ca2+-bound form of the isolated protein or when complexed with a target protein, at least two possible functions for this structure can be hypothesized. First, the helix may maintain a proper linear and/or rotational orientation between the amino-terminal and carboxyl-terminal lobes such that recognition sites in these regions can functionally interact with complementary sites in target proteins. Alternatively, the central helix may encode recognition sites that are rendered accessible to target enzymes by Ca2+. We have attempted to investigate these mechanisms by generating a series of mutated proteins in which the length and composition of the central helix in CaM is altered.
CaMIM, CaMIM-T, CaMIM-TQ, and CaMIM-KQ all have an insertion in the central helix of 4 amino acids but with varying degrees of predicted disruption of the helix. All four mutants exhibit identical functional abnormalities with respect to activation of phosphodiesterase. Therefore, functional characteristics of these activators are not due to the Pro residues. Aberrant activation of myosin light chain kinase by CaMIM does not appear to result from a disruption of a recognition site due to insertion of amino acids since CaMbinding peptides derived from both the smooth and skeletal kinases have similar affinities for CaM and CaMIM. Therefore, the length and not composition of the central helix appears more important for the activation of certain target enzymes. A similar conclusion was also reached for skeletal muscle troponin C by Reinach and Karlsson (43) who showed that conversion of Gly-92 to Pro did not alter the properties of troponin C in a reconstituted actomyosin ATPase assay. A requirement for a specified length of the central helix in troponin C has yet to be investigated by mutagenesis.
Using an analogous approach, Craig et al. (13) have reported that conversion of Glu 82-84 in the central helix to lysines effectively inhibits the ability of the mutated CaM to activate myosin light chain kinase and plant NAD kinase while the activation of phosphodiesterase is minimally affected. Although the Ca2+-dependent CD spectra of control and Lyssubstituted CaM are quite different, suggesting that this considerable change in local charge density has distal effects on protein secondary structure, this acidic cluster may represent a recognition site that is more important for activation of the former two enzymes. If so, this recognition site does not appear to overlap Thr-79 since CaM-binding peptides from smooth and skeletal muscle myosin light chain kinase show the same relative affinity for CaM23 and CaMIM. Together, these results suggest that the central helix contributes to the functional characteristics of CaM by both providing sites of protein recognition and maintaining either a proper orientation or linear relationship between the lobes in CaM, and that the requirement for these structural features vary for different CaM-dependent enzymes. | 6,175.4 | 1988-08-15T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Quantitatively proling acetylome of DNA repair proteins in early DNA damage
Background (cid:0) Lysine acetylation is a reversible regulated post-translational modication that can regulate the stability, localization, and function of proteins in multiple cellular processes. However, the regulative mechanism of acetylation on the repair proteins in the early DNA damage is not fully understood. Methods (cid:0) We performed a global proteome and acetylome of DNA repair proteins in DNA damage in 1 h after treated with epirubicin by using high anity enrichment and high-resolution liquid chromatography–tandem mass spectrometry approaches. Results: 190 Kac sites in 50 repair proteins were identied in cells treated with epirubicin as compared to the control. 42 acetylated lysine sites and 24 deacetylated lysine sites were observed in 21 and 16 repair proteins, respectively. 7 repair proteins simultaneously contained both acetylated and deacetylated lysine sites. 11 acetylation sites were located in the function domains of 7 repair proteins that might reveal mechanisms by which acetylations alter DDR protein function. In 17 repair proteins, the induced acetylation changes were for the rst time identied in the present study. Conclusion: The proteome and acetylome results indicated that fast acetylation or deacetylation on these repair proteins might play a critical role in the early DNA damage repair process.
proteins in response to ionizing radiation, and obtained 33,500 ubiquitination and 16,740 acetylation sites, respectively (9). Focus on the acetylation of nuclear proteins, a pro ling study identi ed 217 Kac sites and analyzed the dynamic change in response of DNA damage induced by irradiation (14). Pancancer analysis of TCGA data has discovered frequent mutations of acetylation and ubiquitination sites in cancer driver genes, suggesting the PTM at these sites as novel mechanisms of cancers (15). However, the global mapping of proteome and acetylome of repair proteins remains large unknown. Here, we investigated the acetylation-status change of repair proteins in the early response of DNA damage using high-solution mass spectrometry analysis.
Materials And Methods
Cell culture and treatment.
The experimental design process was portrayed in Figure 1a. Human embryonic kidney HEK293T cells (CRL-11268) were purchased from American Type Culture Collection and maintained at 37℃ and 5% CO2 in Dulbecco's Modi ed Eagle Medium supplemented with 10% (v/v) FBS, 100 U/mL penicillin, and 100 mg/mL streptomycin. The 293T cells in exponential state were treated with 0.8uM epirubicin (Sigma Aldrich) for 1h (EPI + group) as well as control group (EPI -group) without treatment in culture medium.
Three biological replicates were performed for each group.
Protein extraction and digestion
The cell samples were sonicated three times on ice using a high-intensity ultrasonic processor (Scientz) in lysis buffer (8 M urea, 1% protease inhibitor cocktail, 3 μM TSA and 50 mM NAM). After centrifugation with 12000g at 4 °C for 10 min, the supernatant was collected. Protein concentration was determined with a BCA kit according to the manufacturer's instructions (Pierce). After recover and alkylation, the protein samples were digested twice in digestion buffer containing trypsin (1:50 and 1:100 trypsin-toprotein) at room temperature for overnight and 4h, respectively.
A nity enrichment of lysine acetylation peptides
The peptides were dissolved in immunoprecipitation buffer solution (100 mM NaCl, 1 mM EDTA, 50 mM tris-hcl, 0.5% np-40, pH 8.0), and the supernatant was transferred to the pre-washed acetylated resin (antibody resin no. PTM-104, from Hangzhou jingjie PTM Bio), placed on a rotating shaker at 4 ℃, gently shaken and incubated overnight. After incubation, the resin was washed with IP buffer solution 4 times and deionized water twice. Finally, 0.1% tri uoroacetic acid eluent was used to elute the resin-bound peptide for three times. After draining in vacuum, the eluted peptides were cleaned using C18 ZipTips (Millipore) according to the manufacturer's instructions for subsequent LC-MS/MS analysis.
LC-MS/MS Analysis
The tryptic peptides were dissolved in solvent A (0.1% formic acid in water), directly loaded onto a homemade reversed-phase analytical column (15 cm length, 75 μm inside diameter, Sigma Aldrich). For proteomics analysis, peptides were separated with a gradient from 4% to 6% solvent B (0.1% formic acid in acetonitrile) in 2 min, 6% to 24% over 68 min, 24% to 32% in 14 min and climbing to 80% in 3 min then holding at 80% for the last 3 min, all at a constant ow rate of 300 nL/min on a nanoElute UHPLC system (Bruker Daltonics). For acetylomics analysis Peptides were separated with a gradient from 6% to 22% solvent B (0.1% formic acid in acetonitrile) over 43 min, 22% to 30% in 13min and climbing to 80% in 2 min then holding at 80% for the last 2 min, all at a constant ow rate of 400 nL/min on the nanoElute UHPLC system.
The peptides were subjected to capillary source followed by the timsTOF Pro mass spectrometry (Bruker Daltonics). The timsTOF Pro was operated in parallel accumulation serial fragmentation mode with an electrospray voltage 1.60 kV. Precursors and fragments were analyzed at the TOF detector with a MS/MS scan range from 100 to 1700 m/z. . Precursors with charge states 0 to 5 were selected for fragmentation, and 10 PASEF-MS/MS scans were acquired per cycle. The dynamic exclusion was set to 30 s.
Database Search
The resulting MS/MS data were processed using Maxquant search engine (version 1.6.6.0). Trypsin/P was speci ed as cleavage enzyme allowing up to 2 missing cleavages. The mass tolerance for precursor ions was set as 40 ppm in rst search and 40 ppm in main search, and the mass tolerance for fragment ions was set as 0.04 Da. FDR was adjusted to < 1%.
Protein annotation
Acetylation on lysine sites detected in the EPI + group while were not detected in all replicates of the EPIgroup were considered as acetylated lysine sites. Deacetylated sites were on the contrary. In case a protein ratio is not determined, normalization was done based on a logarithm-transformation algorithm as described (16). The cutoff for differently expressed proteins in EPI+ group compared to EPI-group was strictly set in 1.5-fold. Comparisons between variables were tested by paired t-test. P values < 0.05 were considered to be statistically signi cant.
Bioformatics analysis.
Gene Ontology (GO) annotation analysis (http://www.ebi.ac.uk/GOA) was derived from the UniProt-GOA database for functional classi cation of proteins. Kyoto Encyclopedia of Genes and Genomes (KEGG, http://www.genome.jp/kegg) database was used to annotate protein pathways.
To generate the PPI network, acetylated proteins were searched against the STRING database version 11.0 (https://string-db.org/ ) with interactive score ≥0.7 as high con dence. Subsequently, Cytoscape software version 3.7.2 (http://www.cytoscape.org/index.html ) was used for visualization of the PPI network, in which nodes represented genes and edges represent interactions between genes. Helm (version 1.0.3.7 https://helm.sh/ ) software was used to make the heatmap.
Motif analysis
MeMe suite version 5.1.1 (http://meme-suite.org/ ) was used to analyze the model of sequences constituted with amino acids in speci c positions of modi er-21-mers (10 amino acids upstream and down-stream of the site) in the sequences of proteins which contain acetylated or deacetylated lysine sites.
Results
Overview of proteome and acetylome in epirubicin-induced early DNA damage We pro led the proteome and acetylome of DNA repair proteins in HEK293T cells in 1 h after treated with epirubicin using LC-MS/MS (Figure 1a). In total, 6291 proteins were detected and 5526 proteins were quanti ed with a label-free strategy. Among these quanti ed proteins, 106 repair proteins associated with NER, BER, MMR, HR, and NHEJ pathways were identi ed ( Figure 1b, Table S1, S3).
The acetylated proteins and their modi cation sites were identi ed using a label-free strategy and antiacetyl antibody a nity enrichment followed by high-resolution LC-MS/MS. The length of most peptides was distributed between 7 and 20, which agreed with the property of tryptic peptides ( Figure 1d; Table S5). Among 6789 Kac sites in 2400 proteins identi ed, 4457 Kac sites in 1778 proteins were quanti ed, including 190 Kac sites in 50 repair proteins ( Figure 1b; Table S2, S4). The overall acetylated proteins contained different numbers of acetylation sites from 1 to 29. 1132 acetylated proteins (47.2%) contained only one acetylation site. The proportions of proteins with two, three, four or more modi cation sites were 18.4, 10.3, and 24.1%, respectively ( Figure 1c; Table S6). Of these 50 repair proteins, the acetylation changes were identifed in 30 proteins. we next analyzing lysine sites on repair proteins, 42 acetylated and 24 deacetylated lysine sites were observed in 21 and 16 repair proteins, respectively, whereas, both acetylated and deacetylated lysine sites were detected in 7 repair proteins (Figure 1b, 2a). In 17 repair proteins, the epirubicin induced acetylation changes were identi ed for the rst time in the present study. (Figure 5b). The number of Kac sites in repair proteins with acetylation modi cation ranged from 1 to 27 (Figure 1b; Table S7). Repair proteins with acetylated or deacetylated lysine sites were shown in Figure 2 b.
Sites that previously have identi ed to underwent acetylation modi cation in DNA damage and repair processes were also identi ed in our result, such as K120 and K164 in TP53 (17), K77 and K13 in PCNA (18). Of the 106 repair proteins analysed, 50 repair proteins were identifed to contain acetylation and deacetylation modi cation. Analysing the expression of 106 proteins, only RAD23A was signi cantly upregulated (1.76 fold) in cells treated with epirubicin ( Figure 2c). The proteome and acetylome results indicated that rapid acetylation or deacetylation of lysine in DNA repair proteins were responsible for manipulating their functions to coordinate the repair progress earlier than the alteration of expression levels in the early stage of DNA damage repair process.
KEGG pathway classi cation
The 50 repair proteins with acetylation were able to be classi ed into six major pathways including NER, BER, MMR, HR, NHEJ, DNA replication, and other pathways related to DNA repair process (Figure 3a, Table S8). The NER pathway ranked the rst place containing 21 in 34 repair proteins. Epirubicin can induce inter-chain cross-linking and DNA adduct, inhibit the activity of topoisomerase II, and release oxygen-free radicals resulting in DNA lesions and activating several repair pathways, which was consistent with our results.
Functional analysis of the repair proteins with acetylated or deacetylated lysine sites Of 50 repair proteins containing acetylation modi cations, we analysed the cellular component, molecular function, and biological process of 30 proteins with acetylated or deacetylated changes ( Figure 3b). These repair proteins were mainly distributed in the nuclear including nuclear lumen organelles (96.7%), nucleoplasm (93.3%), and chromosome (60%), respectively. The top three molecular functions of these proteins were DNA binding; catalytic activity acting on DNA and ATPase activity. The foremost biological process that these repair proteins were involved in DNA metabolic process and DNA repair, chromosome organization, and DNA recombination.
Protein to protein interaction between repair proteins
The protein interaction of acetylated repair proteins was conducted with PPI network analysis. The PPI sources were originated from the STRING database and visualized through the Cystoscope. Interaction information came from experiments and databases resources, and the minimum required interaction score was set to the high level (0.700) to ensure the reliability of the relationship. The relationship of the 50 repair proteins was illustrated in Figure 4a ( Table S10). The PPI network revealed the interacting partners of the 17 proteins with new identi ed acetylation lysine sites, suggesting possible molecular functions related to the effect of acetylations (Figure 4a, 4b).
Acetylation sites located in the functional domains of repair proteins.
The PTM on the domains of proteins can signi cantly regulate the protein functions. We subsequently explored the relationship between the acetylation sites and functional domain of 17 repair proteins. Of the 32 acetylation sites analysed, 9 acetylation sites were located in the functional domains of 7 repair proteins (Figure 5a). Rad23B simultaneously have 2 acetylated and 1 deacetylated lysine sites that respectively are AcK67, AcK36 and DeacK45 which were all located in UBL domain. AcK64 and DeacK380 on PRP19 were positioned in the U box and WD40-repeat regions. AcK171 and AcK313 on RECQL were both located in the 2 RecA-like domains. The other four proteins (RFC5, RFC3, XAB2 and RAD17) contained 1 acetylation site located .in their main functional domain.
Analysis of Acetylated Lysine Motifs
To identify the possible speci c motifs anking acetylation lysine site, the amino acid sequence from the −10 to the +10 positions surrounding the 1090 acetylated peptides and 1047 deacetylated peptides were analyzed using the MeMe suite. Motifs K[Ac]Y, K[Ac]N, and K[Ac]T ranked the top three of acetylated-Lys motifs, and among deacetylated-Lys motifs, the top three motifs were GK[Ac], K[Ac]S, and K[Ac]Y ( Figure 6a, 6b). The matching peptides accounted for all peptides respectively were 27.3% and 24.1%. Among the 30 repair protein with acetylation sites, motifs K[Ac]K, K[Ac]H, and K[Ac]F were most enriched (Figure 6c).
Discussion
In this study, we utilized label-free LC-MS/MS strategy to acquire proteome and acetylome dataset of early DNA damage in 293T cell lines treated with epirubicin. A total of 5526 quanti ed proteins and 6789 Kac sites in 2400 proteins were identi ed, among which 4457 Kac sites in 1778 proteins were quanti ed. Up to now, the maximal quantitative proteomic atlas of acetylation in DNA damage response was reported in 2015 by Elia, A.E. and his colleges (9). With the combination of SILAC and FACET-IP strategy, 16740 Kac sites in 3361 proteins were identi ed in Hela cells that treated with 40 J/m2 UV or 10 Gy IR for 1 hour via LC-MS/MS (9). Compared with the dataset of Elia, A.E.'s, we identi ed additional 3858 Kac sites in our acetylome result.
Among 106 quantitative repair proteins, a total of 190 Kac sites were identi ed in 50 of them, which were distributed in NER, BER, MMR, HR, NHEJ and other pathways closely related to DNA repair, possibly indicating that proteins in multiple repair pathways were regulated by acetylation and involved in restoring the lesions induced by epirubicin. Kac sites on GTF2H2C, RAD51C, and RAD17 were discovered for the rst time. 66 acetylated or -deacetylated lysine sites induced by epirubicin were observed in 30 proteins. 7 repair proteins simultaneously contained acetylated and deacetylated lysine sites. According to the Go analysis, the 30 repair proteins were mainly equipped with DNA binding ability and ATPase activity to concentrate on the chromosome organization, regulation of DNA metabolic process, and DNA recombination.
Of the 50 repair proteins with acetylation, the regulation mechanisms of acetylation on 17 repair proteins still need further studied to illustrate., the molecular functions of them were mainly distributed in chromatin, DNA, ATP, nucleic, protein binding, and ligase, ATPase, DNA clamp loader activity according to the GO annotation. Increasing studies have revealed that acetylation within the domain region were capable of regulating the function of proteins (19,20). For instance, acetylation of K1626 and K1628 in the Tudor-UDR domain of 53BP1 was dynamic regulated by CBP and KDAC2, which was associated with 53BP1 interaction with nucleosomes and the choice of DNA repair pathway (21). Hence, via analyzing the regions of these acetylation sites on the 50 repair protein, 9 acetylated or deacetylated Kac sites were observed to locate in the function domains of 7 repair proteins.
We have identi ed new acetylation changes in 17 repair protein including PRP19, RECQL, RFC5, e.g. and analysed the associations of acetylation sites with functional domain in these proteins. 7 proteins were observed to have acetylated or deacetylated lysine sites on their functional domains. RAD23B, as component of the XPC complex, is the rst factor for recognizing DNA lesions involved in global genome nucleotide excision repair (22). Three lysine sites (acetylated K67 and K36 and deacetylated K45) detected in our study were positioned in the UBL of RAD23B that is responsible for mediating the degradation of the ubiquitinated substrate in proteasome. Therefore, AcK67, AcK36, and DeacK45 in the early DDR are also highly possible to connect with protein degradation in DNA repair.
PRP19 is a ubiquitin ligase involved in DNA damage response. The U box and WD40-repeat regions of PRP19 are importance for recruiting E2 ubiquitin-conjugating enzyme and interacting E3 ubiquitin-protein ligase complex to catalyze the polyubiquitination of target proteins involved in DNA damage response (23) AcK64 and DeacK380 were located in the U box region and fourth WD40-repeat, respectively, suggesting these acetylation changes might regulate the ubiquitination on target protein of PRP19. RECQL. involved in DNA duplex helix in DNA repair (24). AcK171 and AcK313 were located in the RecAlike domains where harbored the ATP-dependent translocation activity and were sought to form a cleft to bind with nucleotide (25) Therefore, it is worthwhile to explore whether the acetylation in RECQL is able to affect the molecular conformational change or ATP-dependent translocation activity or other features. RFC5, RFC3, XAB2 and RAD17 only have one acetylated lysine site identi ed in this study. AcK66 in RFC5 located in the AAA+ ATPase domain. ATPase activity within RFC couple the chemical energy of ATP hydrolysis to the assemble of PCNA onto the RNA-primed DNA (26,27). Whether the acetylation on RFC5 is related to the DNA elongation still needs further elucidation. Mass spectrum result implicated that AcK590 located in the TPR motif 9-15 of XAB2. TPR motifs 11-12 have been validated that were essential for e cient HR (28). Hence, the acetylation of K590 is inferred to probably affect downstream combinations between XAB2 and other proteins during the repair process. K313 in RAD17 was the rst identi ed acetyl-site. In the early DNA damage repair process, Rad17 was considered to involve in triggering the DNA damage checkpoint when combined with RFC2-5 complex forming an RFC-like complex and also were capable of coupling the hydrolysis of ATP to load PCNA onto DNA (29). Whether the AcK313 in RAD17 is related to the RFC complex conformation and PCNA loading still awaits elucidation.
According to the results of proteome and acetylome, 66 acetylated or deacetylated sites were discovered in 30 repair proteins, whereas, the majority of the repair proteins have no signi cant changes in the expression level. These results appeared to prompt that the fast acetylation and deacetylation on the repair proteins were responsible for mediating signaling transduction and participating in repair pathway activation in response to DNA damage, which was in line with the result of a previous study that investigated the acetylation dynamics of human nuclear proteins during the ionizing radiation-induced DNA damage response (14). Increasingly studies have informed that the abnormal acetylation status of repair proteins was capable of modulating the repair e ciency, which was closely related to cancer risk, progression, and therapeutic response (3)(4)(5). Aberrant mutations of lysine on proteins also in uenced the precise acetylation on them and consequently affected their functions. Recently, various of KDAC inhibitors were applied to clinical praxis as effective anti-tumor drugs (30).Therefore, guring out the signaling transduction mediated by acetylation on repair proteins in the early DNA damage response network is vital to develop new medicine targeted protein acetylation.
Supplementary Files
This is a list of supplementary les associated with this preprint. Click to download. | 4,415.6 | 2020-07-24T00:00:00.000 | [
"Biology"
] |
Ethanolic Extract of Taheebo Attenuates Increase in Body Weight and Fatty Liver in Mice Fed a High-Fat Diet
We evaluated whether intake of an ethanolic extract of Taheebo (TBE) from Tabebuia avellanedae protects against body weight increase and fat accumulation in mice with high-fat diet (HFD)-induced obesity. Four-week old male C57BL/6 mice were fed a HFD (25% fat, w/w) for 11 weeks. The diet of control (HFD) mice was supplemented with vehicle (0.5% sodium carboxymethyl cellulose by gavage); the diet of experimental (TBE) mice was supplemented with TBE (150 mg/kg body weight/day by gavage). Mice administered TBE had significantly reduced body weight gain, fat accumulation in the liver, and fat pad weight, compared to HFD mice. Reduced hypertrophy of fat cells was also observed in TBE mice. Mice administered TBE also showed significantly lower serum levels of triglycerides, insulin, and leptin. Lipid profiles and levels of mRNAs and proteins related to lipid metabolism were determined in liver and white adipose tissue of the mice. Expression of mRNA and proteins related to lipogenesis were decreased in TBE-administered mice compared to mice fed HFD alone. These results suggest that TBE inhibits obesity and fat accumulation by regulation of gene expression related to lipid metabolism in HFD-induced obesity in mice.
Introduction
Obesity is defined as abnormal or extensive fat accumulation that negatively affects health [1] and is associated with numerous chronic diseases, including type 2 diabetes, hypertension, cardiovascular disease, and nonalcoholic fatty liver disease (NAFLD), as well as psychological and social problems [2][3][4]. Obesity results from chronic caloric intake in excess of energy expenditure, a common occurrence in the modern environment that facilitates consumption of hypercaloric diets, including high-fat diet (HFD) [5], and which ultimately accumulates fat, primarily in the liver. In particular, excessive fat accumulation in the liver caused by obesity in general dysregulates insulin action in the liver, leading to insulin resistance [6]. Recent prospective diet intervention studies indicate that 5%-10% weight loss improves liver histology and reduces hepatic triglycerides [7,8].
Through the process of adipogenesis, preadipocytes are converted to adipocytes, and fat accumulation is induced in the tissues. Peroxisome proliferator-activated receptor γ2 (PPARγ2) and CCAAT/enhancer binding protein α (C/EBPα) are the master transcriptional regulators of the adipogenic process [9]. Acetyl-CoA carboxylase (ACC) and fatty acid synthase (FAS) are known to be regulated by sterol regulatory element-binding protein-1c (SREBP-1c), which is a critical transcription factor that stimulates lipogenic enzymes involved in lipid synthesis [10]. Hepatic lipid accumulation is caused by the upregulation of de novo lipid synthesis, activation of lipid uptake, and suppression of lipid catabolism. Also, increased de novo lipogenesis in the liver accumulates excessive triglyceride in the liver [10]. Activation of adipocyte lipid binding protein (aP2) and FAS by SREBP-1c leads to lipid accumulation in the tissues. Thus, inactivation of these adipogenic regulators or inhibition of expression of lipogenic genes may contribute to suppress adipogenesis and lipogenesis, and ultimately prevent obesity and fatty liver. Many studies have been conducted in an effort to identify novel anti-obesity agents that have the ability to control adipogenesis and lipogenesis.
Taheebo (TBE), obtained from the purple inner bark of the Bignoniaceae tree Tabebuia avellanedae Lorentz ex Griseb, which is found in tropical rain forests in northeastern Brazil, has been used as a traditional medicine for various diseases for more than 1500 years [11]. Recently, various fractions and extracts of T. avellanedae bark have been prepared and reported to have anti-inflammatory, antibacterial, and antifungal, as well as anticancer effects [12][13][14]. Although the pharmacological activity of the T. avellanedae bark has been investigated worldwide, its anti-obesity properties have not been studied to date. Therefore, in this study, we investigated the effect of an ethanolic extract of TBE on obesity and fatty liver in the HFD-induced obese mouse model. Figure 1A shows the effect of TBE on body weight of the mice fed an HFD for 11 weeks. Although there was no significant difference in food intake, a significant decrease in mean body weight was observed in the TBE group compared to the HFD group after 9 weeks of feeding. After 11 weeks of feeding the experimental diet, final mean body weight of mice in the TBE group was significantly lower than in the HFD group (36.29 ± 1.28 g vs. 32.35 ± 0.97 g) and body weight gain in the TBE-fed group was less than in the HFD-fed group by 19.98%. Moreover, the weights of epididymal and subcutaneous fat in the TBE group were significantly decreased compared to those of the HFD group ( Figure 1B, p < 0.05), and hematoxylin and eosin (H&E) staining of epididymal fat tissue revealed that TBE supplementation reduced the size of adipocytes in HFD-fed mice ( Figure 1C, p < 0.05), indicating that the reduction in body weight associated with TBE consumption might be the result of reduced adipose tissue weight.
Effect of TBE on Body Weight and Fat Tissue Weight and Serum Biochemical Parameters
We also analyzed serum markers of lipid metabolism. Overall, indicator levels were ameliorated by TBE consumption and, in particular, the levels of triglyceride, insulin, and leptin were significantly decreased compared to those of the HFD group (Table 1, p < 0.05).
Effect of TBE on the Expression of Adipogenic Genes in the White Adipose Tissue (WAT)
Consistent with the reduced fat tissue weight and adipocyte size with TBE supplementation, the expression of mRNA and protein related to adipogenesis was suppressed by TBE supplementation. 3+6C/EBPα, FABP4, PPARγ and SREBP1c mRNA were each decreased to 1.7-, 1.25-, 1.56-, and 1.53-fold levels. Adipogenic proteins including PPARγ, C/EBPα, and fatty acid binding protein 4 (FABP4), were also suppressed by TBE supplementation, indicating that the anti-obesity effect of TBE might be the result of inhibition of adipogenesis (Figure 2A,B).
Effect of TBE on Lipid Content and Expression of Lipogenic Genes in the Live
To examine whether TBE affected hepatic lipid accumulation, we investigated the histopathological changes in the liver. Histopathological analyses following H&E staining, and morphological analysis revealed smaller lipid droplets in the liver sections and reduced liver size of TBE group compared to the HFD group, suggesting that TBE supplementation effectively attenuated hepatic lipid accumulation ( Figure 3A). These findings are consistent with lipid in the livers. Hepatic total lipid and triglyceride contents were significantly lower in the TBE group than in the HFD group ( Figure 3B). To further explore the effect of TBE on hepatic lipid accumulation, we evaluated expression of genes related to lipid metabolism (Table 2 and Figure 4). Comparison of hepatic mRNA in the HFD and TBE groups indicated that TBE supplementation significantly reduced lipogenic mRNAs, such as FAS, acetyl CoA-synthetase (ACS), and stearoyl-CoA desaturase-1 (SCD1), providing further evidence of the antilipogenic effect of TBE. Consistent with reduced expression of lipogenic mRNAs, the expression of lipogenic proteins, such as FAS, ACS, SCD1, ACSL, and cluster of differentiation 36 (CD36) were also downregulated in the TBE group compared to the HFD group. These results indicate that TBE supplementation attenuates lipid accumulation in the liver through the regulation of lipogenesis. 1.00 ± 0.057 0.6109 ± 0.0742 ** Data are mean ± SE (n = 10 per group). * p < 0.05, ** p < 0.01 compared to the HFD group. Relative mRNA levels were calculated after normalization of values to that of β-actin and presented as a ratio compared with HFD.
Discussion
TBE obtained from T. avellanedae is used as a traditional medicinal for various diseases. Previous studies have reported that TBE exerts various pharmacological actions, including anti-inflammatory, antibacterial, and antifungal, as well as anticancer effects [12][13][14]. However, no study has investigated the effect of TBE in diet-induced obesity. Therefore, we examined whether TBE extract has an anti-obesity effect and potential as a natural anti-obesity supplement.
In the present study, we evaluated the anti-obesity effect of TBE in HFD-induced obese mice. During 11 weeks of feeding a HFD, a smaller increase in body weight was observed in the TBE group than in the HFD group. In particular, epididymal and subcutaneous fat mass was reduced significantly by TBE consumption. Consistent with these results, C/EBPα and PPARγ mRNA and protein levels were also downregulated in the WAT in the TBE group. In the process of adipogenesis, C/EBPα and PPARγ are key transcriptional factors and mediate the transcription of terminal adipocyte differentiation marker genes [15]. Taken together, our findings suggest that TBE attenuates HFD-induced increase in body weight by inhibition of adipogenesis.
It has been suggested that leptin contributes to hepatic steatosis by increasing fatty acid concentration in the liver and promoting insulin resistance [16]. In this study, TBE supplementation was found to significantly reduce serum leptin and insulin levels in mice fed a HFD. Moreover, TBE supplementation also markedly lowered hepatic total lipid and triglyceride levels in mice fed a HFD. This finding was supported by histologic analysis of liver tissue, and indicates that TBE supplementation reduced lipid accumulation in the liver. To understand the underlying molecular mechanisms, we assessed the expression of hepatic genes regulating lipid metabolism. The liver is a major site of lipogenesis, where most lipogenic genes, including FAS and SCD1, are highly expressed [17]. Moreover, numerous studies have reported that hepatic expression of FAS and SCD1 was increased in obese mice [18]. FAS, which is a central lipogenic gene, provides nonesterified fatty acid substrate for triglyceride synthesis and increases fatty acid uptake with CD36 [19]. CD36, a target gene of PPARγ, may promote fatty liver by accelerating fatty acid transport to the liver [20]. Suppressed fatty acid uptake into the liver, which was mediated in part by decreased expression of CD36, may help suppress fat deposition in the liver. SCD1 is a regulatory enzyme in lipogenesis, catalyzing the rate-limiting step in the overall de novo synthesis of monounsaturated fatty acid (MUFA), mainly oleate and palmitoleate from stearoyl-and palmitoyl-CoA [17]. MUFAs synthesized by SCD1 are the major substrates for the synthesis of various lipids, such as phospholipids, triglycerides, and cholesterol esters [21]. In our study, TBE supplementation markedly suppressed lipogenic gene (FAS, ACC, ACSL, ACS, SCD1 and CD36) expression. Taken together, our findings suggest that TBE reduces fatty liver by regulating genes related to lipid metabolism.
According to the recent literature, flavonoids, cyclopentene dialdehydes, benzoic acid, quinones, furanonaphthoquinones, and naphthoquinones have been identified as a major compounds in T. avellanedae [11,12,22]. Among the naphthoquinones, β-lapachone appears to have clinical importance [11,12], and Hwang reported that β-lapachone ameliorates metabolic abnormalities [23]. Whereas we explained several mechanisms underlying the anti-obesity effect of TBE, further investigation of the active compounds responsible for this effect is needed. Collectively, our study provides evidence that TBE supplementation exerts an anti-obesity effect and inhibits fatty liver by regulating expression of genes related to lipid metabolism.
Preparation of TBE
To prepare TBE, dried inner bark of T. avellanedae was extracted three times with 70% ethanol at room temperature [11]. Extracts were pooled, filtered, and concentrated under reduced pressure at 40 °C, with a final yield of 12%. The extract was suspended in 0.5% sodium carboxymethyl cellulose (CMC-Na) immediately prior to the start of the experiments.
Animal Models
Four-week old male C57BL/6 mice were purchased from Orient Bio Inc. (Gyeonggi, Korea). The animals were maintained at a temperature and humidity of 21-25 °C and 50%-60%, respectively, and kept on a 12-h light/12-h dark cycle with free access to food and water. After one week of adaptation, all mice were fed a HFD containing (by weight) 20% casein as protein, 25% fat, and 44.5% carbohydrate supplemented with 0.5% cholesterol ad libitum for 11 weeks. Control (HFD) mice were orally administered vehicle (0.5% CMC-Na in distilled water) and experimental (TBE) mice were orally administered TBE extract (150 mg TBE/kg body weight). HFD were based on the American institute of nutrition-76 (AIN-76) diet formula. All procedures involving animals were conducted in accordance with the Guidelines for Institutional Animal Care and Use Committee of the Korea Food Research Institute (KFRI-IACUC, KFRI-M-13021).
Tissue Analysis
At the end of the study period, mice were fasted for 12 h and sacrificed under anesthesia. Blood was collected from the abdominal aorta and centrifuged at 1500× g for 15 min to separate the serum. Triglycerides, total cholesterol, and HDL cholesterol levels in serum were measured using commercial enzyme kits (Shinyang Chemical Co., Seoul, Korea). Liver tissue and fat tissues were also excised and weighed. Hepatic lipids were extracted according to the method described by Folch et al. (1957). Hepatic triglyceride content was measured using commercial enzyme kits (Shinyang Chemical Co.).
Histopathological Evaluation
Liver and epididymal fat tissues were fixed in 10% neutral-buffered formalin, embedded in paraffin, and 5-μm sections were prepared. Liver and epididymal fat sections were stained with H&E. Pathological changes were investigated using an Olympus (Tokyo, Japan) BX50 light microscope to confirm lipid droplet and adipocyte size.
qRT-PCR Analysis
Total RNA from the livers and epididymal fat tissues were extracted using NucleoSpin RNA II (Macherey-Nagel, Düren, Germany) according to the manufacturer's instructions, and reverse transcription to generate cDNA was performed using the iScript TM cDNA Synthesis Kit (BioRad, Hercules, CA, USA) according to the manufacturer's instructions. qPCR was performed with 1 μg of cDNA, 10 μL of SYBR Green Master Mix (Toyobo, Tokyo, Japan) and forward/reverse primers in a StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). The cycling conditions were as follows: 50 °C for 2 min and 95 °C for 5 min, followed by 40 cycles of 95 °C for 5 s, 55-60 °C for 10 s, and 72 °C for 15 s. The melting curve was also analyzed to ensure that only a single product was amplified and the condition were as follows: 95 °C for 15 s, 60 °C for 1 min, and 95 °C for 15 s. The sequences of the primers used in this study are shown in Table 3. Relative mRNA expression levels were calculated after normalization of values to that of β-actin. Table 3. Primer sequences for qRT-PCR.
Primer
Direction Sequence
Western Blotting
Proteins were extracted from liver and epididymal fat tissues and quantified, followed by separation by 10% SDS-PAGE and transfer to polyvinylidene fluoride membranes (Millipore, Billerica, MA, USA). The membranes were blocked in 5% skim milk in Tris-buffered saline containing 0.05% Tween-20 (TBST) for 2 h at room temperature. After overnight incubation at 4 °C with primary antibodies, membranes were incubated with appropriate horseradish peroxidase-conjugated secondary antibodies for 1 h at room temperature. Immunodetection was carried out with Amersham ECL detection reagent (GE Healthcare, Chalfont St. Giles, UK). Antibodies used in this study were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA) and Cell Signaling Technology (Danvers, MA, USA).
Statistical Analysis
The data were expressed as mean ± standard error of the mean. Differences between groups were examined for statistical significance using Student's t-test and the 2-way ANOVA test was used to compare the weight changes between the groups using GraphPad Prism version 5.0 (GraphPad Software, San Diego, CA, USA), with a value of p < 0.05 selected as the threshold for statistical significance.
Conclusions
In conclusion, an ethanolic extract of TBE from the purple inner bark of the Bignoniaceae tree T. avellanedae Lorentz ex Griseb attenuates obesity and fatty liver in HFD-induced obese mice by regulating expression of genes related to lipid metabolism. | 3,537.8 | 2014-10-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Preparation, Structure, and Properties of Polystyrene-Microsphere-Reinforced PEG-Based Hydrogels
To improve the mechanical strength and practicability of hydrogels, polystyrene microspheres with core–shell structure were prepared by the soap-free emulsion polymerization, polyethylene glycol hydrogels with polystyrene microspheres by the in-situ polymerization. The structure, morphology, roughness, swelling property, surface energy, and mechanical properties of the microspheres and hydrogels were investigated by Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, confocal laser microscopy, swelling test, contact angle measurement, and compression test. The results showed that they have certain swelling capacity and excellent mechanical properties, and can change from hydrophobic to hydrophilic surface. The reason is that the hydrophilic chain segment can migrate, enrich, and form a hydration layer on the surface after soaking for a certain time. Introducing proper content of polystyrene microspheres into the hydrogel, the compressive strength and swelling degree improved obviously. Increasing the content of polystyrene microspheres, the surface energy of the hydrogels decreased gradually.
Introduction
Hydrogels are a gel system formed by hydrophilic polymers and water molecules with a three-dimensional interconnection network structure that can swell in water but not dissolve [1][2][3]. In 1960, Wichterle and Lim [4] found a soft, swelling, elastic and transparent material in the presence of a small number of crosslinking agent, and a certain number of water or an appropriate solvent during the polymerization of hydroxyethyl methacrylate monomer. This research result opened a prelude to the synthesis and application of hydrogels. Subsequently, the diverse of hydrogels emerged and developed rapidly. The hydrogels are widely used in tissue engineering [5][6][7], biomedical devices [8,9], microfluidic, optical actuators [10,11], and the marine industry [12,13] due to its high moisture, physiological fluid, and high elasticity. From the source of preparing hydrogels, they can be divided into natural macromolecule hydrogels and synthetic hydrogels; according to the different ways of crosslinking, they can be divided into physically cross-linked hydrogels and chemical cross-linked hydrogels; according to different sizes of hydrogels, they can be divided into macroscopic hydrogels, micron hydrogels, nano hydrogels, etc. [14].
It is a common phenomenon that the swelling property of hydrogels is accompanied by complex water evaporation and water absorption in the actual environment, which will lead to the weakening of hydrogels and loss of their original properties, such as mechanical properties. On the one hand, by grafting hydrogels onto carriers with high mechanical strength, the mechanical properties of hydrogels can be greatly improved. On the other hand, functionalized hydrogels can be obtained by grafting certain functional chains onto hydrogels. Ewaramm [15] copolymerized N-vinyl caprolactam onto guar gum, then it mixed with sodium alginate and cross-linked with glutaraldehyde to obtain a composite hydrogel, and applied the hydrogel to control release, which can effectively control the release of zidovudine (a drug for the treatment of AIDS) through pH and temperature.
Materials
The reagent used in this study was styrene ( Chemical Reaction Principle of Preparing Polystyrene Microspheres Polystyrene microspheres were prepared by soap-free emulsion polymerization using styrene, HEMA and PEGMA as reaction monomers and AIBA as initiator, as showed in Figure 1.
2.
Chemical Reaction Principle of Preparation of PS-NCO Intermediates PEG2000 and IPDI were used as the reactive monomers to conduct the initial reaction in the mixed solvent. Then the PS microspheres powder prepared in the previous step was added into the system, the -NCO in excess IPDI will react with the -OH on the surface of PS microspheres to obtain PS-NCO intermediates through in-situ polymerization, as shown in Figure 2.
Chemical Reaction Principle of Preparation of PS-PEG Hydrogel
Using PEG2000, IPDI, and HSH330 as monomers, the isocyanate of IPDI was firstly reacted with the hydroxyl terminations of PEG2000 and HSH330 by solution polymerization method. Then PS-NCO and BDO was added into the reaction system and finally, PS-PEG hydrogel was obtained, as showed in Figure 3. The monomer mass of PS microspheres was 7.1 wt%, 14.2 wt%, 21.3 wt%, and 28.4 wt%. The corresponding products were named as PS7-PEG, PS14-PEG, PS21-PEG, and PS28-PEG. The sample without PS microspheres is labeled as PS0-PEG.
Technological Process
The first step of the synthesis, A 500 mL four-neck flask was connected to a SZCL-2A digital intelligent control magnetic stirrer (Yuhua Instrument Co., Ltd., Gongyi, China), and 4 g HEMA, 6 g PEGMA, and 250 g deionized water were added. Meanwhile, 20 g styrene was put into 50 mL separating funnel, and control in all added into the flask about 30 min. Used DW-3 high-speed digital display electric stirrer (Yuhua Instrument Co., Ltd., Gongyi, China), and pre-emulsified at 3000 rpm for 1 h. Under the protection for nitrogen, the temperature was raised to 60 • C by using DF-101Sheat-collecting thermostatic heating magnetic stirrer (Yuhua Instrument Co., Ltd., Gongyi, China). In addition, 0.5 g AIBA was dissolved in 20 g deionized water, put into a 50 mL separatory funnel, and control about 1 h to all drip into the flask for reaction. After the content was completed, the reaction was carried out for 8 h with 280 rpm. Then, 200 mesh gauze was used to filter out the larger particles and used LC-10A-50N vacuum freeze-drying (Lichen Bangxi Instrument Technology Co., Ltd., Shanghai, China) is performed (Freeze for 12 h, dry for 24 h). Then, the powder was washed with added ethyl acetate and deionized water successively in a mass ratio of 1:1, used KQ-300E ultrasonic cleaner (Ultrasonic Instrument Co., Ltd., Kunshan, China) to wash for 40 min, and TDL-50C centrifuge (Anting Scientific Instruments Co., Ltd., Shanghai, China) was used to remove the supernatant by centrifugation for 15 min with 4000 rpm. Repeat three times to wash away the unreacted monomers. After finishing freeze-drying again (Freeze for 12 h, dry for 24 h), the powder obtained is ground and sealed, and the yield of PS microsphere powder prepared at one time was about 26 g. The specific process flow is shown in Figure 4a. The next stage of the work, 44 g PEG was dried under reduced pressure at 120 • C for 3 h to remove water and dissolved gases. The mixed solvent (Xylene: Ethyl acetate: Cyclohexanone = 2:2:1) was filled, mixed, and stirred for 30 min, and then the temperature was raised to 70 • C. 10 g IPDI was added to the flask, reacted for 3 h, and then the temperature was lowered to 60 • C, the 10 g PS microspheres powder was slowly added, isocyanate terminated PS-NCO intermediate is obtained after 5 h, which is encapsulated and ready for use. The specific process flow is showed in Figure 4b.
Finally, 40 g PEG and 10 g HSH330 were dried under reduced pressure at 120 • C for 3 h, mixed with 80 g solvent and stirred for 30 min. 10 g IPDI was added to the flask at 70 • C and reacted for 3 h. When temperature cooled to 60 • C, 10 g PS-NCO solution was slowly added to the flask. After adding the chain extender 0.5 g BDO, the reaction ends after 5 h. The yield of PS-PEG hydrogel was about 160 g. The specific process flow is showed in Figure 4c. When it is lowered to room temperature, pour it into a polytetrafluoroethylene board for casting, and put in GDHS-2005A constant temperature and humidity box (Jinghong Experimental Equipment Co., Ltd., Shanghai, China), adjust humidity 70% RH to wet solidify to get PS-PEG hydrogel. The specific process flow is showed in Figure 4d.
Preparation of Samples
The part of PS-PEG hydrogel solution was painted on glass slides whose dimensions are 76.2 mm × 25.4 mm × 1 mm, the rest was poured into a Teflon mold whose dimensions are 20 mm × 20 mm × 9 mm. The specimens were placed in a dust-free room temperature environment of 70% RH, cured for 7 days. The former cured was used to characterize the chemical structure and surface morphology, the latter cured to measure contact angle, swelling and compressive properties. FTIR (EQUINOX5, Bruker, Karlsruhe, Germany) was used to analyze the chemical structure of the PS microspheres with KBr tablet method. 200 mg KBr and 2 mg PS microsphere powder were mixed and ground in an agate mortar with ϕ 60 mm for 1 min, and then, put into the tableting mold with ϕ 13 mm, pressurized to 20 MPa, and stayed for 2 min. After that, take out the mold and put the tablet into the tablet holder for infrared test. The scanning range was 4000 to 400 cm −1 , the resolution was 2 cm −1 , and the number of scans was 32 times.
FTIR (PERKINELMER, Waltham, MA, USA) was used to analyze the chemical structure of the PS-NCO intermediate and PS-PEG hydrogels with attenuated total reflection mode, the scanning range was 4000 to 650 cm −1 , the resolution was 2 cm −1 , and the number of scans was 32 times.
X-ray Diffraction (XRD)
XRD spectrum of the samples was performed with D/MAX-Ultima X-ray diffractometer (Rigaku Denki, Tokyo, Japan). The instrument used copper-palladium ceramics with the test range was 10-90 • , the step size was 0.02 • , and the time per step was set at 8 • min −1 .
Surface Morphology Scanning Electron Microscope (SEM)
The morphology of the samples was observed by Supra-55-sapphire SEM (Carl Zeiss AG, Jena, Germany). Take a square monocrystalline silicon wafer with a side length of 2 cm, and the PS microspheres emulsion was diluted and then dropped on the wafer. After being dried, liquid conductive adhesive drops are used to seal the edges around the wafers, and then the microstructure characteristics were observed by SEM. In addition, the hydrogel was prepared into a sheet of 10 mm× 10 mm× 1 mm, and adhered to the surface of the metal block with conductive tape. The JFC-1100 ion sputter (Japan Electronics Ltd., Tokyo, Japan) used direct current, adjusted the voltage and current to 5 kV and 5 mA, respectively, and sputtering on the surface of the specimen for 2 min. The observation mode is SE2, and the acceleration voltage is 1 kV.
Transmission Electron Microscope (TEM)
JEOL-2100 TEM (Japan Electronics Ltd., Tokyo, Japan) was used to observe the morphology of nano microspheres. After diluting the emulsion in absolute ethanol, it was ultrasonically dispersed for 15 min, then dripped onto a copper net for air-drying, and the accelerating voltage was 200 kV.
Confocal Laser Scanning Microscope (CLSM)
OLS4000 CLSM (Olympus, Tokyo, Japan) was used to observe and analyze the surface and fracture of the PS-PEG hydrogels. The surface roughness (Sa) of the samples were analyzed by LEXT analysis software.
Physical Properties Differential Scanning Calorimetry (DSC)
DSC measurement was performed on NETZSCH DSC 200F3 differential scanning calorimeter (NETZSCH-Gerätebau GmbH, Selb, Germany) under a nitrogen flow of 5 • C min −1 . The hydrogel was dried for 48 h and put into an aluminum crucible after removing the water. The scanning temperature range was set at −100-200 • C.
Swelling Properties
The composite hydrogel obtained after curing for 7 days were cut into cuboid specimens whose dimensions are 2.5 mm × 1 mm × 0.5 mm, and immersed in deionized water for 48 h. The samples were taken out at different time intervals, quickly dry the water on the sample surface with a filter paper, and put it into a precision balance for weighing. The swelling degree of the hydrogel SD = (WS − W0)/W0. WS and W0 are respectively the mass of the hydrogel when swelling to a certain time and primitive hydrogel.
Contact Angle (CA) and Surface Free Energy
CA of the hydrogels were measured by hanging drop method at room temperature using JC2000C Contact Angle Measurement Instrument (Zhongchen Digital Technology Equipment Co., Ltd., Shanghai, China), and used a syringe to drop 3 µL of the liquid (deionized water or diiodomethane) onto the surface of the sample. Meanwhile, calculating for 5 min for dynamic water contact angle (DWCA) and dynamic diiodomethane contact angle (DDCA). Owens two-liquid method [30] was used to calculate the surface free energy of PS-PEG hydrogel coating by calculating DWCA and DDCA.
Mechanical Properties
According to GB/T 1041-92, the hydrogel samples wet-cured for 7 days were made into cuboid mold with a length of 20 mm, a width of 20 mm and a height of 9 mm. The compression performance of hydrogels was tested by UTM 5105 computer-controlled electronic universal testing machine (Jinan Wance Electrical Equipment Co., Ltd., Jinan, China). The hydrogel sample was placed in the middle of the test bench and compressed at a speed of 1 mm/min. The elastic modulus was calculated according to the slope of the strain within 0.1 mm/mm in the stress-strain curve.
Chemical Structure and Morphology of PS Microspheres
The PS microspheres powder prepared by the freeze-drying method was tested by FTIR, as showed in Figure 5, indicating that the absorption peak at 3435 cm −1 was the vibration absorption peak of the hydroxyl group. The stretching vibration absorption peaks at 3000-3100 cm −1 were C-H bond of benzene ring. The absorption peaks at 1724 cm −1 , 1801 cm −1 , 1870 cm −1 , and 1945 cm −1 were weak bands unique to PS, corresponding to the frequency double and group frequency absorption peaks of the out-of-plane bending vibration of the aromatic ring C-H. While the strong absorption peak at 1633 cm −1 was the stretching vibration absorption peak of the carbonyl group on the ester group, its original absorption peak was 1740 cm −1 . Because of the high polarity of carbonyl group, it can form conjugate effect with the benzene ring, so that the absorption peak of carbonyl group moves to the direction of low wave number, that is, the absorption frequency decreases [31]. The absorption peaks at 1450 cm −1 , 1494 cm −1 , and 1600 cm −1 are the skeleton vibration absorption peaks of the benzene ring. The absorption peaks at 1354 cm −1 and 1382 cm −1 were the in-plane bending vibration absorption peaks of alkyl C-H, and the absorption peak of ether bond at 1101 cm −1 . In the meanwhile, the absorption peaks at 842 cm −1 , 906 cm −1 , and 945 cm −1 were the C-H out-of-plane flexural vibration absorption peaks of the benzene ring, and the absorption peaks at 700 cm −1 and 759 cm −1 were the monosubstituted benzene ring. The above characteristic peaks indicate that PS microspheres have hydroxyl, carbonyl, and benzene ring functional groups. The morphology of PS microspheres was observed by SEM and TEM. Figure 6a shows the SEM image of PS microspheres with the same scale. Clearly, PS microspheres have a smooth surface, compact arrangement, and a diameter of about 100 nm. PS microspheres are visualized by transmission electron microscopy. As shown in Figure 6b, it can be seen that that the core-shell PS microspheres have an average diameter of about 100 nm. The shell looks very uniform with a thickness of about 11 nm. The PS microspheres with styrene as core, hydroxyethyl methacrylate and polyethylene glycol methyl ether methacrylate as shell were successfully prepared by soap-free emulsion polymerization.
Chemical Structure and Morphology of PS-PEG Hydrogels
ATR-FTIR tests were carried out on the PS-NCO intermediate terminated with isocyanate through bulk polymerization and the PS-PEG hydrogel as the final product. The infrared spectra obtained were showed in Figures 7a and 7b, respectively. There is a very obvious absorption peak at 2258 cm −1 in Figure 7a. This is the absorption peak of isocyanate (-NCO), which proves that the synthetic PS-NCO intermediate will successfully seal the isocyanate. As showed in Figure 7b, PS-PEG hydrogels with different contents were successfully synthesized. It can be observed from the infrared spectrum that the stretching vibration absorption peak of the -NH bond is located at 3396 cm −1 , and its deformation vibration peak is at 1546 cm −1 . In addition, at 1643 cm −1 and 1701 cm −1 , the stretching vibration peak of C=O in the amide carbonyl group and the stretching vibration peak of C=O in the urea group is separated. The above characteristic peaks indicate that the final product is PEG-based hydrogel. The skeleton vibration absorption peak of the benzene ring is at 1450 cm −1 , C-O stretching vibration peak is at 1248 cm −1 , and the strong absorption peak at 1097 cm −1 , is the -O-asymmetric stretching vibration peak polyether structure, which indicates that PEG-based hydrogel successfully introduces the structure containing benzene ring and polyether. Moreover, there was no obvious absorption peak between 2200 cm −1 and 2300 cm −1 , indicating that the isocyanate in the polymer had been thoroughly reacted at this time, and the final synthetic substance was PS-PEG composite hydrogel. The hydrogel sample after soaking and swelling in deionized water for 48 h, as showed in Figure 7c. The absorption peak of free -NH formed hydrogen bond with carbamate was located at 3340 cm −1 . In contrast, the carbonyl peak C=O after soaking in different solutions was redshifted to about 1638 cm −1 , and the ether bond region was also redshifted from 1097 cm −1 to 1074 cm −1 . The main reason is that a large number of water molecules in various solutions provide the -H receptor, which increases the degree of hydrogen bonding.
The above analysis showed that the PS-PEG hydrogel was successfully synthesized. After wet curing, the free -NH and -CO-stretching vibration region were split into two peaks, namely, the free absorption peak and the absorption peak forming hydrogen bond. Meanwhile, the isocyanate was consumed, and after soaking, the -NH and -CO-on the hydrogel structure tended to form a hydrogen bond with water molecules. As a result, the number of hydrogen bonds increases, but the number of hydrogen bonds on the segment structure decreases, which weakens the degree of hydrogen bonding of the hydrogel structure chain.
SEM observation showed PS microspheres were uniformly dispersed in PS-PEG hydrogels, refers in Figure 8a. XRD spectra of PS microspheres and PS14-PEG hydrogel were shown in Figure 8b. Clearly, there was a broad peak near 2θ of 19.9 • , which was a characteristic amorphous diffraction peak, and it was consistent with the literature [32]. There were two dominant characteristic peaks respectively at 19.4 • and 23.5 • which derived from the PEG, which were consistent with the literature [33]. However, it can be observed that the amorphous peak of PS shifted about 0.9 • in the hydrogel by compared with two diffraction peaks, which due to the distortion of the crystal lattice caused by the internal residual stress of hydrogel. In this regard, DSC is further used to perform differential thermal analysis on each hydrogel sample, it can be clearly observed that the PS-PEG hydrogels added with PS microspheres all have an exothermic crystallization peak at about −33 • C in Figure 8c. In summary, combined with the infrared spectrum, the PS microspheres were successfully compounded with the PEG-based hydrogel. The morphology diagram of PS-PEG composite hydrogel with each PS microspheres addition number was clearly as showed in Figure 9. The observation results also indicated that with the increase of PS microspheres, the light transmittance of the composite hydrogel decreased gradually. While the macroscopic light transmittance of the PS0-PEG sample without PS microspheres is the strongest.
Swelling Properties of Hydrogels
The swelling properties of each sample of PS-PEG hydrogel placed in deionized water for 48 h were showed in Figure 10a. In the first 3 h of the initial swelling stage, the swelling degree of each sample increased significantly in water absorption. With the increase contents of PS microspheres, PS-PEG hydrogels had rapid water absorption and swelling property in the first 6 h. Then the water absorption and swelling degree slowed down, but it still kept increasing. As the PS microspheres continued to increase, the swelling degree of the PS28-PEG sample was smaller than that of the PS21-PEG sample; that is, the swelling degree increased first and then decreased. This is because the swelling of hydrogel is determined by hydrogen bonding and hydrophobicity, and the PS microspheres contain oleophilic groups simultaneously. When the content exceeds a certain proportion, the water absorption of the hydrogen bond is less than that of its hydrophobic structure so that the swelling degree will be affected to a certain extent. At the same time, we calculated the swelling degree of each hydrogel sample after soaking for 24 h and 48 h, as listed in Table 1. It demonstrates that the PS21-PEG sample had the highest swelling degree, not the PS28-PEG, which is because the dense three-dimensional cross-linking structure is formed after the third step of polymerization. However, the limited space of the system will further hinder the absorption of water by the hydrogel. The volume of PS21-PEG hydrogel after swelling for 48 h increases to about 3.4 times that of original dry gel sample, as shown in Figure 10b. It can be observed from the figure that the volume of the hydrogel has undergone macroscopic expansion after swelling. It always keeps a certain swelling degree in deionized water. This is because the -OH of the PS shell layer with a core-shell structure can be cross-linked with -NCO, making the PS microspheres a 'central connection point', which promotes the overall hydrogel the three-dimensional structure was significantly improved.
Contact Angle and Surface Energy
Since static contact angle is measured under equilibrium condition, it can only reflect the wettability at equilibrium, and cannot reveal the change information of surface structure. It is unable to study the relationship between material surface structure and wettability, as well as the precise control of the surface structure. However, dynamic contact angle just makes up this shortcoming. It can provide information on surface roughness, uniformity of chemical properties, reconstruction of hydrophilic/hydrophobic chain segments. The dynamic water contact angle (DWCA) and dynamic diiodomethane contact angle (DDCA) of PS-PEG are calculated for 5 min, respectively. The surface free energy is calculated as listed in Table 2. PS-PEG hydrogel was tested in deionized water for DWCA, as showed in Figure 11a. The initial DWCA was PS7-PEG > PS14-PEG > PS21-PEG > PS28-PEG. The final state DWCA was PS21-PEG > PS14-PEG > PS7-PEG > PS28-PEG. As the number of PS microspheres increased, the initial DWCA became lower and changed from hydrophobic to hydrophilic within 5 min (from 104 • to 75.5 • in the case of PS14-PEG). This was because the PS surface of the PS-PEG hydrogel compound with PS microspheres powder was rich in a large -OH, which was a strong hydrophilic group. In the subsequent synthesis process, after the reaction with isocyanate and wet solidification of the film, the chain segment formed by it is more flexible. Therefore, for the test area of water droplets, the more flexible hydrophilic chain segment is more likely to migrate and enrich on the surface of the coating, realizing the conversion process from hydrophobic to hydrophilic, and then forming a layer of hydration film. Therefore, it appears to be strongly hydrophilic after the appearance of hydrophobicity. It also can be observed from the swelling test. The results after the DDCA test are plotted as showed in Figure 11b. It is not difficult to observe that the initial DDCA is PS21-PEG > PS28-PEG > PS7-PEG > PS14-PEG. The final DDCA is PS21-PEG > PS14-PEG > PS7-PEG > PS28-PEG. All samples are lipophilic (take PS14-PEG as an example, from 49.25 • to 33.5 • ). In addition, DWCA and DDCA showed the same change trend. PS-PEG hydrogels has both hydrophilic and lipophilic, when the surface contacts with polar or non-polar media, internal structure of the hydrogel will be restructured within a certain period of time, and the corresponding functional groups will eventually accumulate on the surface through the turnover of the flexible chain.
The surface energy of the coating was calculated according to the given formula by measuring the water contact angle and diiodomethane contact angle. The atoms (or ions) on the surface are in a state of imbalance between internal and external forces. The more unbalanced the force is, the more unstable it is and the higher its surface energy is. Therefore, as evidenced by the surface energy of hydrogel can be reduced by adding PS microspheres properly, and the surface energy decreased with the increase of PS microspheres. The surface of PS28-PEG sample is minimum, only 31.45 mJ/m 2 .
Surface Morphology and Roughness
The relationship between the surface roughness of the coating before and after soaking and the content of PS microspheres was showed in Figure 12. The surface roughness of the hydrogel before immersion increased with increasing the content. This is because the nano-PS microspheres have high surface activity. Therefore, too many nanospheres will increase the agglomeration effect. The agglomeration effect is enhanced, small agglomerations are easily formed in the partially incompletely dispersed areas, which increases the surface roughness. When the sample swelled for 48 h, the surface roughness of the sample decreased with the increase of PS microspheres. This is because PS microspheres are used as the crosslinking points in the hydrogel, making the crosslinking density increase, it caused three-dimensional space to become more compact when the hydrogel absorbs water, it can form a hydration layer on the surface, and the roughness of the surface is reduced. Therefore, the surface of the hydrogel after swelling at the same time is more even and smooth. As for PS14-PEG hydrogel with PS microspheres added number of 14.2 wt%, the roughness of PS14-PEG hydrogel was not only low before soaking, but also decreased significantly after soaking for 48 h, which was better than that of other samples.
Compression Performance
The rheological test cannot reflect the macroscopic fracture property of the material under large deformation, so it is necessary to test the compression property of the hydrogel.
The results showed that a simple compression test of PS-PEG hydrogel, as showed in Figure 13. It was observed that the hydrogel can recover to its original shape after the removal of external force without being damaged, with certain elasticity. The compressive stress-strain curves, compressive elastic modulus and maximum compressive stress of all samples were showed in Figure 14a,b. Obviously, the maximum compression stress of PS14-PEG hydrogel reached 5.83 MPa, which was nearly 3.3 times higher than that of PS7-PEG hydrogel 1.78 MPa. However, the maximum compression stress gradually decreased with increasing PS microspheres, and the maximum compression stress of PS28-PEG hydrogel was 2.89 MPa. The maximum compression stress of PS28-PEG is 2 times lower than that of PS14-PEG. The maximum compressive stress of PS14-PEG hydrogel is much higher than that reported by Jiang [34]. He used hyperbranched polyethyleneimine (HPEI) as the main material and prepared the quaternary ammonium nanoparticles, these nanoparticles were added to HPEI to prepare hydrogels. The maximum compressive stress of his hydrogels reached nearly 2.5 MPa. Similarly, the relative compression ratio increased first and then decreased with the increase of PS microspheres. The relative compression ratio of PS14-PEG hydrogel reached 161.96%, while PS21-PEG and PS28-PEG were damaged at 151.04% and 134.71%, respectively. This indicates that the proper content of PS microspheres can effectively improve the compression performance of the composite hydrogels. However, the excessive PS microspheres will increase the density of hydrogen bonds and crosslinking points between the segments in the hydrogel. The increasing density of overall three-dimensional network structure makes chain segments difficult to move.
The compressive elastic modulus was fitted by the stress-strain curve. With increasing the content of PS microspheres from 7.1 wt% to 14.2 wt%, the maximum compression modulus increased from 39.6 kPa to 73.7 kPa, the latter was 1.86 times higher than the former. Afterwards, the compressive modulus of the hydrogel decreased gradually, and the PS28-PEG hydrogel was only 14.8 kPa. Compared with PS14-PEG, the reduction was nearly five-fold. The results showed that too many PS microspheres can easily lead to their agglomeration, this leads to uneven distribution of internal structures. Too many rigid PS microspheres reduced the elastic modulus and compressive strength of hydrogels.
The dispersion of low-concentration nanoparticles was relatively uniform. The molecular chain between two nanoparticles was longer, so the degree of molecular chain curl was higher. A large number of energy were consumed in the compression process. Therefore, the PS-PEG hydrogels exhibit some elasticity at low concentrations. With the increase of PS microspheres content, the cross-linking density of hydrogel increased, it resulted in a shorter chain between the two nanoparticles, so the elasticity goes down. At this time, when the hydrogel is extruded by compression, the space size of the microspheres were irreversibly destroyed to a certain extent.
Discussion
The above results demonstrate that the content of PS microspheres not only affects the surface properties of composite hydrogels, but also affects the physical properties and mechanical properties of composite hydrogels. With the increase of the content of PS microspheres, the mechanical properties of the hydrogel gradually increase, as showed in Figure 15a. when the content of PS microspheres reaches 14.2 wt%, the mechanical properties of the hydrogel are the maximum, while the corresponding the swelling degree was lower at 48 h, indicating that the internal cross-linking structure of the hydrogel is the tightest; while continuing to increase the number of PS microspheres added, the mechanical properties show a downward trend. In addition, the surface roughness and surface energy of the hydrogel decreased with the increase of PS microspheres, as showed in Figure 15b. The deep-frozen fracture morphology of hydrogel samples was shown in the Figure 16. The observation indicates that the content of PS microspheres obviously affect the fracture morphology of the hydrogel. The fracture of PS7-PEG hydrogel has a certain arrangement of gully shape, as showed in Figure 16a. This is because a few PS microspheres can make the three-dimensional network crosslinking structure of the hydrogel more compact. However, the number of connection points is limited and the crosslinking structure is not perfect, so the segment with weak bonding strength breaks when subjected to external stress. When the number of microspheres continued to increase, the fracture groove of PS14-PEG hydrogel was significantly less than that of PS7-PEG hydrogel, as showed in Figure 16b, and has a certain regularity, is not along the only direction of fracture. This is because the hydrogel will deform under the action of natural forces, in the place where the spatial structure of hydrogel is cross-linked closely, the bond strength is high and it is not easy to be broken, while in the place where the relative bond strength is low, it is easy to reach the limit and break. The fracture occurs in the position where the relative strength of the hydrogel bonding is poor, so the whole fracture process will be affected by the distribution characteristics of this region. Therefore, the fracture position is different at this time, and it has a certain spatial structure regularity. Continue to increase PS microspheres, as showed in Figure 16c, the PS21-PEG hydrogel cross section is similar to lamellar fault, and the depth of the gully produced by the fracture cracks is relatively shallow. The reason is that the internal crosslinking structure continues to increase and the binding strength increases with the further increase of the content number of nano-spheres. Therefore, in the whole process of fracture, the strength of longitudinal fracture cannot extend well to the interior, it only keeps the regular fracture in the transverse direction. The corresponding PS28-PEG hydrogel had the most PS microspheres, as showed in Figure 16d. The results demonstrates that its fracture morphology was relatively flat, and there was no obvious fracture layer and similar gully shape. This was attributed to the fact that the sample contains the most PS microspheres, and the full cross-linking reaction inside the hydrogel results in the highest binding strength, thus forming a three-dimensional structure with high strength, which delays the further fault damage of the internal structure caused by the external force. Consequently, the influence of PS microspheres content on fracture strength can be generalized schematically as showed in Figure 17.
Conclusions
(1) PS microspheres with a core-shell structure were successfully prepared by soap-free emulsion polymerization. The diameter of the microspheres was about 100 nm and the shell thickness was about 11 nm. Moreover, PS microspheres reinforced PEGbased hydrogels were successfully prepared by bulk polymerization and solution polymerization.
(2) The swelling degree of PS-PEG hydrogel changes with the content of PS microsphere.
When its content was 14.2 wt%, its swelling degree was 246.9% at 48 h. PS-PEG hydrogels have good swelling performance. (3) The static water contact angle and surface free energy of the hydrogels decreased with increasing PS microspheres content, but diiodomethane contact angle creased. The surface energy of PS28-PEG hydrogel is only 31.45 mJ/m 2 . Moreover, the surface of PS-PEG hydrogel can changed from hydrophobic to hydrophilic during the wetting process. (4) The content of PS microspheres can affect the surface roughness of the hydrogel. The surface roughness of the dry gel increases with the increase content of microspheres. However, the surface roughness of the soaked PS-PEG hydrogel decreased with the content of PS microspheres. (5) Appropriate content of PS microspheres can improve the mechanical properties of PS-PEG hydrogels. When the content of PS microspheres is 14.2 wt%, the maximum compressive stress reaches 5.83 MPa, the relative compression ratio reaches 161.96%, and the compressive elastic modulus is 73.7 kPa. | 7,739.2 | 2021-08-01T00:00:00.000 | [
"Materials Science"
] |
Financial wellbeing of households in instability
In instability and economic turbulence, the wellbeing of households as market economy entities constitutes the financial-investment capacity of a region, the level of which is determined by the conditions of the competitive socio-economic environment. The paper aims to estimate the financial wellbeing of households on the example of the oblasts of the Carpathian region of Ukraine in instability. The study is based on a system-integral estimation method, which includes the implementation of three stages: (1) development of a system of indicators, (2) determination and substantiation of weight significance, and (3) construction of time series of empirical parameters of households’ wellbeing based on temporal and spatial approaches. The analysis reveals that the financial wellbeing of households differentiates in a competitive economic environment and with the spread of behavioral factors (COVID-19, consumer reflections). Among the oblasts of the Carpathian region of Ukraine, the highest values of empirical parameters of financial wellbeing were in Zakarpatska (0.537) and Chernivetska (0.459) oblasts (2019). Meanwhile, the level of the financial wellbeing of households is higher in Lvivska oblast by several indicators. The divergence of the Carpathian region from Ukraine by the level of the financial wellbeing of households was mostly observed in 2018–2019. Zakarpatska oblast was the leader by the level of the financial wellbeing of households in 2010–2019. The study is of the practical nature for framing the regional economic policy in terms of detecting the critical “pressure” of financial wellbeing on the economic growth of the region and economic ability to increase investment capacity.
AcknowledgmentsThe study has been conducted within the framework of the Applied Research “Financial determinants of the provision of economic growth in the regions and territorial communities based on behavioural economy” with the support of the National Research Foundation of Ukraine (M. Dolishniy Institute of Regional Research of the National Academy of Sciences of Ukraine, the grant Reg. No. 2020.02/0215, 2020–2022).
INTRODUCTION
The wellbeing of the population is an indicator of access to benefits that, as needed, become even more diverse and are produced by various domains. The financial domain of ensuring the wellbeing of the population remains the most important in the system of its determinants, since it creates conditions for meeting all the needs through the opportunity to receive income. The modern conditions of human development differentiate the ways of securing financial wellbeing and change the conceptual understanding of an economic person with the growth of one's responsibility for own wellbeing. Meanwhile, financial wellbeing not only depends on labor conditions and opportunities to receive decent income, but also is determined by other tangible and intangible assets that can be capitalized if needed.
A new comprehension of human capacities updates the research of the financial wellbeing of households in terms of its determinants -not only financial but also property-related and behavioral. The development of recommendations for estimating the financial wellbeing with its testing on the regional level allows detecting the features (positive trends and flaws) of the human development environment and suggesting recommendations on the increase of its financial resilience in instability.
The oblasts of the Carpathian region of Ukraine lag behind the average national values, hampering the high level of the financial wellbeing of households in the oblasts of the Carpathian region. The leading region and outsider can be easily traced among the oblasts of the Carpathian region of Ukraine by economic development parameters. It is about Lvivska and Chernivetska oblasts, respectively, while Zakarpatska and Ivano-Frankivska oblasts have mostly average (within the Carpathian region of Ukraine) values with the former one gravitating towards the below-average economic development level and the latter one towards the above-average level. A significant divergence of the values of the financial wellbeing of households in the Carpathian region compared to Ukraine caused by the differentiation of economic growth paces of the regions has updated the need to develop the methodological approach to estimating the wellbeing of households in conditions of instability.
LITERATURE REVIEW
According to the Consumer Financial Protection Bureau, financial wellbeing is the human condition that provides an opportunity to fully meet their financial liabilities, feel safe nowadays and in the future, and make choices that allow them to enjoy life (Consumer Financial Protection Bureau, 2017). A comprehensive understanding of financial wellbeing combines the control over finances, financial resilience and security, as well as financial freedom of making decisions to enjoy life and achieve financial goals (Cárdenas et al., 2021, Mahendru et al., 2020. Drawing on the basic understanding of financial wellbeing, its estimation methodology is developing at the junction of conceptual provisions regarding (1) income and expenditures, purchasing power of the population (the most addressed direction is related to general economic conditions and social standards of consumption, living standards, and the quality of life; (2) financial awareness and inclusion of the population (the actively developing direction in conditions of the establishment of behavioral economics); (3) financial capacity of society (the direction that determines the efficiency of territorial management from the viewpoint of using the resource capacity of the area) (Voznyak, 2021).
Research of financial wellbeing in terms of income and expenditures is generated by scientific discussions about which indicator is more objective for evaluation -income or consumption (Gradín et al., 2008). The impact of income on wellbeing has been the object of scientific research for a long time. The ideas regarding the impact of the reference of income on the financial wellbeing of the population are relevant to this day (Ferrer-i-Carbonell, 2005), emphasizing the subjectivity of its estimation when using the objective analysis methods (based on statistical data). The impact of income and other resources on wellbeing is relative due to dependence on changing standards, which, in turn, depend on expectations, addictions, and social comparisons (Diener, 1993). Consumption research determines the role of consumer expenditures for subjective perception of financial wellbeing. Meanwhile, consumption expenditures depend on lifestyle, which is directly influenced by inequality, income level, and limited resources (Noll & Weick, 2015).
Scientific research of financial wellbeing in terms of developing financial awareness and financial inclusion of the population combines pedagogical and managerial recommendations. Financial awareness is positioned as an antecedent of financial wellbeing (Vieira et al., 2021). The context of financial awareness raises the issue of financial socialization of individuals, which defines their features, the further expression of which in the behavior is defined by financial wellbeing (Drever et al., 2015). The perception of financial wellbeing combines the sense of security of the financial future and stressful feelings in the management of individu-al financial resources (Netemeyer et al., 2018). The perception of financial wellbeing from the viewpoint of impact on individuals, including their psychical health, is combined with the phenomena of financial hardships, financial condition, tension, stress, and finally, financial security (Hassan et al., 2021).
The combination of subjectivization of understanding the financial wellbeing (perception and reflections through financial awareness and rational decision-making) and objectivization of financial capacities remain common in the evaluation of the former. Mapping the relationship between financial awareness and financial capacities allows evaluating the financial wellbeing of the population more comprehensively (Mahendru, 2021). Moreover, the research proves the domination of subjective construction of the financial wellbeing evaluation: wellbeing is more closely related to the individual estimation rather than the objective size of income (Barnard, 2016).
Estimation of financial wellbeing as a complex phenomenon requires adaptation for the countries with different socio-economic conditions and for different socio-demographic groups. For that matter, the methodologies for estimating the relationship between financial awareness (perception, objective financial knowledge, experience, and certainty), financial behavior (fulfilment of financial obligations, financial planning, savings, and monitoring) and features (time perception, impulsiveness, social status, locus of control, and general condition) of an individual have been developed (Sehrawat et al., 2021). Financial behavior is proven to be a key factor in financial wellbeing as it defines financial stress and financial awareness for determining wellbeing in the future. Therefore, the regulation of income and expenditures, management of financial stress, and improvement of financial awareness are the basic tasks for securing the financial wellbeing of the population, especially if it is in worse socio-economic conditions (Rahman et al., 2021). Financial behavior is the result of individual reactions to behavioral factors such as future security, savings, investment, credit discipline, financial consciousness (Kavita et al., 2021; Rushchshyn, 2021). Some studies on the estimation of financial wellbeing prove gender differences in its subjective perception, that is, financial knowledge plays the major role among the factors of financial wellbeing and satisfaction with its level for men, while financial condition -for women (Gerrans et al., 2013).
The role of financial knowledge in the wellbeing of the population increases in conditions of global financial instability, information asymmetry on financial markets, and complexity of financial products and technologies on the Fintech grounds (Philippas & Avdoulas, 2020). The issue of financial awareness of the population in relation to financial behavior and stress management remains the main wellbeing factor in conditions of the establishment of behavioral economics. Meanwhile, financial responsibility for wellbeing increases, emphasizing the development of the concept of its individualized behavioral understanding.
There is also a controversial direction of the research of financial wellbeing related to its identification with financial capacity, which is broader in content than financial awareness and knowledge because it constitutes the result of "healthy" financial behavior and financial planning, efficient decision-making about the selection of financial products and reaction to financial changes (Cox et al., 2009). Financial capacity requires the proper level of financial knowledge supported by the desired financial behavior to secure financial wellbeing (Xiao & O'Neill, 2016). Increasing the financial capacity of society (area) requires the financial resilience of households and their ability to maintain financial stability in financial shocks. The lower financial resilience of households with low income requires additional support through access to various liquidity assets such as loans, social payments, special offers to increase income, etc. (Bufe et al., 2021).
Financial capacity of communities, regions, and households correlates with different socio-economic conditions of the society development environment: while the societies with low income need the substantiation of the ways of its increase, for the developed societies, financial wellbeing is estimated in terms of the concept of experienced wellbeing with the determination of the maximum threshold values for income, with the growth of which an individual feels the life improvement (Killingsworth, 2021).
Despite subjective projections of financial wellbeing, its estimation methodology, in the first place, should be improved by developing the conceptual basis. The estimation of financial wellbeing by objective (statistical) parameters allows interstate and interregional comparisons and contributes to improving decision-making. Following the previous research, this one hypothesizes that the financial wellbeing of households depends on the system of financial, property-related, and behavioral indicators of the regional economic environment.
The identification of financial wellbeing of households and financial-investment condition of an entity stipulates the development of the methodology to estimate the financial wellbeing of Ukrainian households based on the innovative composition of indicators, the use of a multiplicative form of integral index, simultaneous normalization and integral estimation of indicators, and formalized substantiation of weight coefficients of the indicators.
METHODS
To build a time series of integral coefficients of households' financial wellbeing, three tasks must be accomplished: (1) normalization of indicators, (2) calculation and substantiation of weight coefficients, and (3) construction of integral coefficients based on the spatial approach.
Normalization of indicators is the initial and top-priority research stage. All indicators have different dimensions and orientations, so the correct normalization will bring the indicators within the [0; 1] range and to one comparable series. To avoid the zero value of indicators within the group and the situation when the integral index value is equalized by some parameters at the expense of others, a multiplicative form of the index is used instead of the classical approach (weighted-sum method). The methodology for calculating the integral coefficients of financial wellbeing allows taking into account the nonlinearity of social and economic processes through the use of a logarithmic function.
RESULTS
The financial wellbeing of a household is the condition of the household that secures fulfillment of financial obligations and opportunities to be confident in the financial future and make respective financial-investment decisions. The system of determinants of households' financial wellbeing, including assets and debts, in addition to the distribution of financial assets for different financial products, wages, and wage arrears, as well as investment capacity of households, faces substantial changes in crises and periods of economic recovery. So, increasing accessibility of secured and unsecured loans shows the development of the financial system and growing financial capac-ity of households. The transformation changes in the economic system of the country and its regions and the changes in the economic behavior of households have caused the selection of respective structure of the system of financial wellbeing determinants (Table 1).
Weight coefficients were determined for each oblast based on normalized data of financial wellbeing indicators to map the "variation scope" of the significance of financial determinants for framing the wellbeing by oblasts of the region. Table 2 shows that in the income and living conditions group, the consumer price index had the lowest weight in Ivano-Frankivska oblast (18.5%), coverage with housing -in Zakarpatska and Lvivska oblasts (14.67% and 10.48 %, respectively), the decile coefficient of the total income of the population -in Chernivetska oblast and on average in Ukraine (11.36% and 12.39%, respectively).
Social protection
Average governmental assistance to low-income families € Average housing, electricity, and fuel benefits and subsidies (non-cash) Per a household, € Wage arrears Number of the population with average monthly per capita income below the subsistence level % of the total population The financial wellbeing of households in Ukraine has a causal relationship with social protection and living conditions. Thus, in Ukraine, the wellbeing of households is determined by income and living conditions at 41.01 % and by the size of social assistance and various benefits and subsidies -at only 20.35% (Figure 1). Resourcing is quite significant for financial wellbeing in the oblasts of the Carpathian region, namely weight significance of the group for Zakarptska oblast is 41.31%, Lvivska 35.07%, Chernivetska 36.38%, Ivano-Frankivska 38.64%. The financial wellbeing of Ukrainian households by 38.64% depends on the efficiency of the use of aggregate resourc-es, namely the consumer expenditures-investment resources-savings structural relationship. In particular, investment expenditures and accumulated resources (savings) determine the highest values of weight significance in the process of securing the financial wellbeing.
Resourcing of households
The coefficients calculated for the groups of households' financial wellbeing show significant divergence of the oblasts of the Carpathian region from the average rate in Ukraine in 2010-2019. For instance, the oblasts of the Carpathian region essentially lag behind by social protection level (Table 3). Therefore, the role of social protection in the structure of aggregate resources has no significance in the process of securing the financial wellbeing of households because of small volumes or certain restrictions regarding access to social programs. Financial wellbeing is not high by the resourcing of households' component in Ivano- The divergence of the Carpathian region from Ukraine by financial wellbeing of households was most observed in 2018-2019 ( Figure 2). There is a corresponding trend in wellbeing in Ukraine and the Carpathian region, namely, financial resourcing of households was the highest in the period under research in 2012-2013, and the lowest in 2015. It is worth emphasizing that since 2017, eco-nomic recovery has fostered the improvement of households' financial wellbeing and thus the volumes of investment resources, which are the drivers of regional economic growth.
The financial wellbeing of households is determined by factors such as financial behavior (savings, investment, consumption, loans, and borrowings), social factors (age, employment status, unemployment, migration activity), psychological factors (attitude towards money, etc.), economic factors (income, wages, consumer price index), financial knowledge and experience (experience with financial products). High financial wellbeing means efficient control over daily finance, the ability to absorb financial risks, and the freedom to make financial decisions that allow households to secure a high quality of life. Table 3 and Figure 1. The received results show that financial wellbeing is determined by the level of income, the higher level of which is leveled due to availability of one income source, which immediately causes substantial wellbeing decline in case of a crisis situation (the loss of job, disability); the domination of consumption orientation (forced or conscious) in the entity's economic behav-ior that reduces the capacity of income sources differentiation and further wellbeing increase; additional expenditures (on housing, treatment, transport, etc.). Financial wellbeing correlates with the stability of income sources determined by employment and type of activity, demand for profession and qualification, and surrounding conditions.
CONCLUSION
The financial wellbeing of households in conditions of instability depends not only on income, but also on socio-economic conditions, which are the drivers of the growth of financial capacity and destimulators of economic entities' activity. The paper estimates the level of the financial capacity of households in several regions of Ukraine. Among the oblasts of the Carpathian region of Ukraine, the highest values of empirical parameters of financial wellbeing were in Zakarpatska (0.537) and Chernivetska (0.459) oblasts (2019). The divergence of the Carpathian region by the financial wellbeing of households from Ukraine was observed the most in 2018-2019.
Since 2017, economic recovery has contributed to improving households' financial wellbeing and, accordingly, increasing the volume of investment resources, which are drivers of regional economic growth. Based on the integral estimation, the paper proves a close relationship between the wellbeing of households and financial (income level and sources, income and expenditures structure, economic behavior model) and property-related (real estate, securities, precious metals) assets and a moderate relationship with behavioral aspects (financial knowledge, efforts and motivation, employment portfolio, mobility) and surrounding conditions (purchasing power, labor remuneration and social assistance standards, labor conditions, economic growth and development, etc.). | 4,128.6 | 2022-02-10T00:00:00.000 | [
"Economics"
] |
Real-Time Process Monitoring Based on Multivariate Control Chart for Anomalies Driven by Frequency Signal via Sound and Electrocardiography Cases
: Recent developments in network technologies have led to the application of cloud comput-ing and big data analysis to industrial automation. However, the automation of process monitoring still has numerous issues that need to be addressed. Traditionally, offline statistical processes are generally used for process monitoring; thus, problems are often detected too late. This study focused on the construction of an automated process monitoring system based on sound and vibration frequency signals. First, empirical mode decomposition was combined with intrinsic mode functions to construct different sound frequency combinations and differentiate sound frequencies according to anomalies. Then, linear discriminant analysis (LDA) was adopted to classify abnormal and normal sound frequency signals, and a control line was constructed to monitor the sound frequency. In a case study, the proposed method was applied to detect abnormal sounds at high and low frequencies, and a detection accuracy of over 90% was realized. In another case study, the proposed method was applied to analyze electrocardiography signals and was similarly able to identify abnormal situations. Thus, the proposed method can be applied to real-time process monitoring and the detection of abnormalities with high accuracy in various situations.
Introduction
Network technology, cloud computing, and big data analysis are being gradually integrated with industrial automation in a digital transformation known as Industry 4.0. For example, the Internet of Things can be used to develop a smart monitoring system to enhance the transparency and automate the operation of a factory. Process monitoring involves analyzing non-structuralized data and combining them with structuralized data to determine potentially important parameters. However, many problems regarding procedure combination and information classification still need to be resolved.
In the field of process monitoring, fault detection and diagnosis (FDD) is focused on detecting abnormal situations, done through modeling, signal processing, and intelligence computation. The FDD methods can generally be classified into three categories: model-based online data-driven methods, signal-based methods, and knowledge-based history data-driven methods [1]. Yan et al. proposed a hybrid method to detect faults for chiller subsystems, only using the normal data to implement the training procedure. This online monitoring framework was constructed to use an extended Kalman filter (EKF) model with a recursive one-class support vector machine (ROSVM) [2]. Sun et al. presented a hybrid RCA fault diagnosis model combined support vector machine (SVM) with wavelet de-noising (WD) and an improved max-relevance and min-redundancy (mRMR) Processes 2021, 9,1510 2 of 13 algorithm for dealing with the complexity of variable refrigerant flow (VRF) systems [3]. Rogers et al. reviewed and evaluated state-of-the-art methods for performing FDD for air conditioning systems. Herein, the emerging field of fault detection for residential air conditioning systems was also reviewed by using cloud-based thermostat data [4]. Gangsar and Tiwari reviewed the conventional time and spectrum signal analyses for the two most effective types of signals (the vibration and various induction motor (IM) faults). The existing research and development in the field of signal-based automation of condition monitoring methodologies for the FDD of various electrical and mechanical faults were also summarized and evaluated [5]. Neupane and Seok summarized the recent works for evaluating the applications in deep learning algorithms; this study also used the Case Western Reserve University (CWRU)-bearing dataset in machinery fault detection and diagnosis [6].
In semiconductor processes, Fan et al. proposed an anomaly detection method that used a denoise autoencoder (DAE) to learn the primary representation of normal wafers from equipment sensor readings and serve as the one-class classification model. Next, the Hampel identifier, a robust method of outlier detection, was also adopted to determine a new threshold for detecting defective wafers, called MaxRE without outlier (MaxREwoo) [7]. Data visualization is applied to transform original data, highlight the process trends and outliers by using the data in an easy-to-understand format, and help researchers comprehend the data's relevance. Visualization tools enable practitioners to transform every element of the data into interactive charts and pictures. Fan et al. utilized the texture analysis technique with 2-D Fourier transform to analyze images of the critical parameters for detecting defective wafers [8].
In the precision machining industry, the automatically productive equipment frequently needs to execute tool processing, such as turning, cutting, drilling, grinding, and so forth. However, most abnormal situations in these processes are from the extensive tool wear, and the motors (or transmission mechanisms) in the equipment generate the abnormalities; these abnormal situations often accompany the abnormal sound frequency. Speech frequencies and other electronic signals have been studied for more than three decades to understand sound signals' anomalous nature. Xu and Jon applied traditional multivariate analysis to sound frequency estimation and proposed combining sound frequency and video signals to estimate the acoustic signal-to-noise ratio (SNR) [9]. Xie and Cao improved the Mel-frequency cepstral coefficients (MFCCs) to significantly reduce the computation and strengthen the hardware execution of sound frequency monitoring [10]. Nalini et al. applied sound frequency identification to a biometric recognition system to address loopholes in existing vision-based and sensor systems. They achieved a failure rate of 19.09%, which was slightly less than that of the existing NARX/HM system (20.91%) [11]. Nalini applied MFCCs and a residual phase to develop a model for identifying emotions in music. Furthermore, they used an absolute artificial neural network (AANN), a support vector machine (SVM), and a radial basis function (RBF) network to classify the music archives of different websites and achieved identification rates of 96.0%, 99.0%, and 95.0%, respectively [12]. Lee et al. proposed an audio-based event detection system to monitor the safety of workers and rapidly identify construction accidents [13]. Liu and Li presented a construction sound monitoring system with a double-layer identification scheme consisting of two random forest-based classifiers to prevent damage to underground pipelines. They were able to detect 95.59% of all threat signals [14]. For healthcare, Wei et al. applied adaptive support vector regression and weighted-index average algorithms to calculate fetal heart rates [15].
If the abnormal sound frequency with the critical process parameters can be utilized, it will define the process fault quickly, and the engineer can execute the proper operation. In addition, the production machine can maintain a stable vibration in advanced semiconductor processes, which is also critical for process quality. Many device anomalies can be identified by sound and vibration frequencies; thus, the efficiency of such monitoring systems can be further improved if portable radios are combined with algorithms for quick anomaly detection. Based on two such monitoring requirements, we used two cases of similarity signals (sound frequency and electrocardiography) to implement the monitoring of frequency signals and to evaluate its feasibility in practical application. Moreover, the monitoring in advanced processes needs a quick response and to display a simple visualization. Thus, the complicated frequency signals are converted to form a curve via the multiscale entropy (MSE) method, and the concept of profile monitoring is used to implement the monitoring task. Using the fitting model of the MSE curve, the model parameters for different curves can be obtained and linear discriminant analysis (LDA) is applied to execute the classification of an abnormal situation. When the parameters of abnormal classification are decided, the Hotelling T 2 control chart can be established to implement the monitoring of frequency signals. The advantage of such an operation is to convert the complicated frequency signals and form a more accessible control chart. Thus, the process engineer can quickly evaluate the abnormal situation and adopt the appropriate treatment to achieve online monitoring.
Methodology
The focus of this study was the construction of a real-time monitoring system that can quickly identify abnormalities based on a sound frequency signal. The empirical mode decomposition (EMD) method was first applied to decompose the original sound frequency signal and generate the different frequency domains of intrinsic mode functions (IMFs). Using the screening feature IMFs, the frequency signal is reconstructed to remove the noise and increase the detective effect. Next, the sample entropies (SampEn) for different scales are calculated using the recombined feature signal, and some sample points are produced to fit the appropriate model for the MSE curve. In terms of these model parameters, Hotelling's T 2 control chart is constructed, and then LDA is applied to determine the control limit to monitor the abnormal sound frequency signal.
Decomposition of Sound Frequency Signals
EMD is applicable to nonlinear and unstable data, such as sound frequency signals. EMD utilizes characteristic time scales in signals to define the vibrational modes [8]. A non-zero mean signal can also be used, and the decomposition procedure is called the sifting process. The sifting process is applied to obtain the IMFs of the original signal. Some IMFs have physical features that can be used for further analysis. The obtained IMFs are checked to determine whether they meet the given constraints. If they do, the sifting process continues to obtain the next group of IMFs. This process is repeated until all IMFs that meet the constraints are obtained [16][17][18][19]. The last group of IMFs exhibits the trend of the mean. Hence, the sifting process aims to eliminate the carrier waves of signals to achieve a more symmetric waveform, as follows [20,21]: Determine the partial maxima and minima of the original signal X(t). Then, use a cubic spline to connect the maxima to form an envelope and connect the minima to form another envelope. Aggregate and average the two envelopes to obtain the mean envelope m 1 (t). Subtract m 1 (t) from X(t) to obtain the vector h 1 (t):
2.
Check if h 1 (t) meets the constraints for the IMFs. If it does, return to Step (1), and take h 1 (t) as the original signal for the second sifting process to obtain h 11 (t): 3.
After the sifting process is repeated k times, the original signal X(t) meets the constraints and becomes the IMF vector h 1,k (t): Processes 2021, 9, 1510 4 of 13 4. Excessive sifting eliminates the original physical meaning. Hence, the following conditions are set for convergence to ensure that the IMFs maintain the original vibration amplitude and physical meaning: • The number of zero-crossing points must be equal to that of the partial extrema (i.e., the partial maxima and partial minima), and the standard deviation (SD) should be between 0.2 and 0.3: • If one of the conditions is met, the sifting process is complete, and the first IMF vector c 1 (t) is obtained. c 1 (t) is the shortest cycle of the entire set of signals:
If r 1 (t) contains a longer cycle vector, repeat steps 1-5 to continue sifting and decompose it into n (cardinal number) IMF vectors c n (t): 7.
If r n (t) cannot be decomposed into IMF vectors, the sifting process is suspended. The final r n (t) is the mean trend. All IMF vectors c n (t) are aggregated with the mean trend to obtain the original signal X(t). Combine Equations (6) and (7) to obtain After the above decomposition of the IMF vectors, the IMFs are classified and combined with correlating different sound frequencies with process abnormalities. Herein, the significant IMF vectors for distinguishing the abnormalities can be selected and reconstructed to form a recombined feature signal.
Recombination and Monitoring of Signals
The obtained IMFs can be used to select highly identifiable functions for recombination and recognize abnormal sound frequencies. The profile monitoring theory was adopted using the recombined feature signal to analyze sound frequencies for abnormalities. MSE was used to convert the recombined feature signal. It adopts the concept of multiple scales to calculate and represent complexity properly rather than causing any deviation. In addition, it can be used to observe trends of complexity on different scales. The basic principles of MSE are based on approximate entropy and sample entropy (SampEn). The recombined feature signal from the sound frequency data are preprocessed, where the time sequence data entries are shortened. Then, the approximate entropy or sample entropy is added to calculate the entropy. The basic structure of MSE is presented in Figure 1 [22]. The recombined feature signal for the coarse-graining procedure is presented in Figure 1. The values of every two points are averaged to obtain another group of the sequence. Then, the sample entropy for Scale 2 is calculated, as presented in Equation (9). Then, the values of every three points are averaged to obtain another group of the sequence, and the sample entropy for Scale 3 is obtained in the same manner. Thus, the sample entropies (SampEn) can be obtained for different time scales, which are aggregated to obtain the complexity index (CI): In this study, each feature signal was computed to generate 20 sample points from Scales 1-20 using Equation (9). The 20 sample points were then fitted to use the polynomial regression model (or the sum of sine functions) and obtain the model parameters of the MSE curve. Then, MSE converted the feature frequency signals into the profile graph presented in Figure 2. The profiles were then used to classify abnormalities in the sound frequency signals to construct the monitoring framework. The recombined feature signal for the coarse-graining procedure is presented in Figure 1. The values of every two points are averaged to obtain another group of the sequence. Then, the sample entropy for Scale 2 is calculated, as presented in Equation (9). Then, the values of every three points are averaged to obtain another group of the sequence, and the sample entropy for Scale 3 is obtained in the same manner. Thus, the sample entropies (SampEn) can be obtained for different time scales, which are aggregated to obtain the complexity index (CI): In this study, each feature signal was computed to generate 20 sample points from Scales 1-20 using Equation (9). The 20 sample points were then fitted to use the polynomial regression model (or the sum of sine functions) and obtain the model parameters of the MSE curve. Then, MSE converted the feature frequency signals into the profile graph presented in Figure 2. The profiles were then used to classify abnormalities in the sound frequency signals to construct the monitoring framework. The recombined feature signal for the coarse-graining procedure is presented in Figure 1. The values of every two points are averaged to obtain another group of the sequence. Then, the sample entropy for Scale 2 is calculated, as presented in Equation (9). Then, the values of every three points are averaged to obtain another group of the sequence, and the sample entropy for Scale 3 is obtained in the same manner. Thus, the sample entropies (SampEn) can be obtained for different time scales, which are aggregated to obtain the complexity index (CI): In this study, each feature signal was computed to generate 20 sample points from Scales 1-20 using Equation (9). The 20 sample points were then fitted to use the polynomial regression model (or the sum of sine functions) and obtain the model parameters of the MSE curve. Then, MSE converted the feature frequency signals into the profile graph presented in Figure 2. The profiles were then used to classify abnormalities in the sound frequency signals to construct the monitoring framework.
Apply Hotelling T 2 and Linear Discriminant Analysis to Sound Frequency Monitoring
When the MSE curve was formed and to obtain the model parameters (β), Hotelling's T 2 control chart was established, and the LDA was then used to determine the control limit to monitor and to distinguish the abnormal signals. The obtained profiles were used to simulate monitoring of the sound frequency signal for abnormalities at different scales. First, the sample points of the MSE curve were fitted into an appropriate model. Herein, the polynomial regression model and the sum of sine functions were used to construct the profile model. The polynomial model with a single explanatory variable is described as where β 0 and β r are the unknown parameters of the polynomial function and r is the order of the polynomial. The modified sum of sine functions is represented as In Equation (12), a r is the amplitude, b r is the frequency, and c r is the horizontal phase constant at each sine wave term. For example, when the profile model is considered as the sum of two sine functions, it could be represented as follows: where x jp is the explanatory variable for the jth observation in the pth profile,β p is the unknown parameter vector for profile p (β p = [a 1p , a 2p , b 1p , b 2p , c 2p , c 2p ]), and the error term is independent and identically distributed as a normal random variable with zero mean and constant variance (σ 2 ). Then, linear discriminant analysis (LDA) was used with the Hotelling T 2 control chart to monitor the sound frequency signal. The Hotelling T 2 control chart is a multivariate statistical method for quality control, which is an extension of the average control chart by Shewhart [23].
The Hotelling T 2 control chart can be described as follows. The multivariate T 2 control chart was used to monitor the parameter vector (β p ) from the different MSE curve. Then, the T 2 control statistic was calculated as follows: where S denotes the covariance matrix S = ∑ g j=1 (β j −β)(β j −β) /(g − 1) of the profile sample, and g denotes the number of the sound frequency signal profile. Therefore, if a sound frequency signal is classified as abnormal, the MSE parameters of normal and abnormal sound frequencies can be used for classification by LDA as well as the establishment of the control limit.
The classification structure is as follows. First, the control limit for LDA is constructed [24]. The slope w c and intercept w c0 of the control limit can be obtained as follows: In Equation (15), g c (β) is the linear combination for the upper control limit. The Fisher theory was used to establish the control limit. The norm was defined as the percentage value for the calculated inter-and intra-group variance. The slope w c of the control limit was obtained by calculating this norm.
Equation (16) was used to obtain w c when J was a maximum. m 1 and m 2 denote the means of different groups, and Σ W denotes the pooled within-class sample covariance matrix: Σ 1 andΣ 2 denote the maximum-likelihood estimate covariance matrices (n 1 + n 2 = n) for type 1 (ω Group 1 ) (abnormal frequency) and type 2 (ω Group 2 ) (normal frequency), respectively. The maximum slope w c in Equation (16) can be determined as follows: . The above classification steps were applied to convert the MSE parameters of the different sound frequency signals so as to construct the upper control limit in the T 2 control chart.
Validation
The proposed method was applied to two case studies for validation. In the first case study, the proposed method was applied to a simulation experiment, in which high-, medium-, and low-frequency abnormal data were added to normal data. In the second case study, the proposed method was applied to the analysis of electrocardiography (ECG) signals in a database.
Case Study I: Simulation Experiment
The simulation experiment considered 300 s. of sound frequency signals comprising 20 copies of normal sound frequencies. Two copies of 60-s.-long abnormal sound frequencies were added. Herein, the signals of 100 s. were intercepted and displayed as in Figure 3. Then, EMD was applied to the original signals to obtain the IMFs and residuals of different frequency sections. Figure 3 presents the IMF vectors of the original sound frequency signals after EMD.
The original signals were decomposed into 15 IMFs and one residual; then, the recombined sound frequency signals of IMFs 1-10 were selected for analysis. Then, Equations (9) and (10) were used to convert the signals into MSE values. The third-to fifth-degree polynomial models, the first-to third-degree sums of the sine equations, and the three-stage second-degree polynomial model were evaluated for model fitting. R 2 adj was used as an evaluation criterion. R 2 adj is the mean of R 2 adj , which is a modified version of R 2 that is adjusted according to the number of predictors in the model. R 2 adj increases when a new predictor improves the model more than expected by chance and decreases when a predictor improves the model less than expected. In contrast, R 2 increases with the number of predictors whether or not they are significant. The original signals were decomposed into 15 IMFs and one residual; then, the recombined sound frequency signals of IMFs 1-10 were selected for analysis. Then, Equations (9) and (10) were used to convert the signals into MSE values. The third-to fifthdegree polynomial models, the first-to third-degree sums of the sine equations, and the three-stage second-degree polynomial model were evaluated for model fitting.
was used as an evaluation criterion.
is the mean of , which is a modified version of that is adjusted according to the number of predictors in the model. increases when a new predictor improves the model more than expected by chance and decreases when a predictor improves the model less than expected. In contrast, increases with the number of predictors whether or not they are significant.
Case Study II: Electrocardiography Data
For this case study, 24 lead-I ECG signals were selected as the original data, and two signals contained arrhythmia. The original signals were decomposed to obtain eight IMFs of different frequency bands and one residual, as presented in Figure 4. After combining different IMFs, IMF2, IMF3, and IMF4 were found to have a strong ability to distinguish abnormal signals.
Case Study II: Electrocardiography Data
For this case study, 24 lead-I ECG signals were selected as the original data, and two signals contained arrhythmia. The original signals were decomposed to obtain eight IMFs of different frequency bands and one residual, as presented in Figure 4. After combining different IMFs, IMF2, IMF3, and IMF4 were found to have a strong ability to distinguish abnormal signals.
Frequency bands from IMF2 to IMF4 were used to reconstruct the ECG signals for analysis. Then, the reconstructed signals were converted into MSE profiles using Equations (9) and (10). Using the simulated evaluation, the fourth-degree polynomial model was used for model fitting: where β 0p and β rp denote the estimated parameters of the fourth-degree polynomial model. r is the number of scales, and ε p ∼ N 0, σ 2 . Frequency bands from IMF2 to IMF4 were used to reconstruct the ECG signals for analysis. Then, the reconstructed signals were converted into MSE profiles using Equations (9) and (10). Using the simulated evaluation, the fourth-degree polynomial model was used for model fitting: where and denote the estimated parameters of the fourth-degree polynomial model. r is the number of scales, and ~ , . Table 1 presents the fitting results of the different models to the signals, which were decomposed into 20 groups of mean MSE profiles. The fourth-degree and three-stage second-degree polynomial models had values of 0.9603 and 0.9712, respectively. Thus, they demonstrated the best fitting results, which were attributed to the smooth curves of the MSE profiles. The three-stage second-degree polynomial model performed slightly better than the fourth-degree polynomial model due to the smaller number of parameters used in this study. Based on its reduction of the type 1 deviation and convenience of use, the fourth-degree polynomial model was selected for profile monitoring in the simulation experiment. Table 1 presents the fitting results of the different models to the signals, which were decomposed into 20 groups of mean MSE profiles. The fourth-degree and three-stage second-degree polynomial models had R 2 adj values of 0.9603 and 0.9712, respectively. Thus, they demonstrated the best fitting results, which were attributed to the smooth curves of the MSE profiles. The three-stage second-degree polynomial model performed slightly better than the fourth-degree polynomial model due to the smaller number of parameters used in this study. Based on its reduction of the type 1 deviation and convenience of use, the fourthdegree polynomial model was selected for profile monitoring in the simulation experiment. To construct a reasonable control line, the original and normal sound frequency signals were mixed with high-, medium-, and low-frequency abnormal sound frequencies, where the signal amplitude was varied at different scales to generate 200 copies of simulated sound frequency signals. Specifically, 10 copies of abnormal sound frequencies in each frequency domain were chosen for repeated simulation. Four scenarios of sound anomalies were considered: high, medium, low, and mixed frequencies. Table 2 presents the results of repeated random testing. The results indicate that the proposed method detected the anomalies with an accuracy of over 90%. The original signal was detected with an accuracy of less than 50%; thus, it was judged as unidentifiable. Table 3 presents the fitting results of the 24 MSE profiles derived from Equation (19). The minimum, maximum, and average values of R 2 adj for all models were 0.97, 0.99, and 0.9846, respectively, which validates the proposed method. (β 0 , β 1 , β 2 , β 3 , β 4 ) of the fourth-degree polynomial model and LDA technique. The normal and abnormal profiles were distinguished with a detection accuracy of 100%. The results were utilized to establish the Hotelling T 2 control chart for online anomaly detection. Figure 5 demonstrates that Profiles 6 and 11 exceeded the control line. These two profiles corresponded to abnormal ECG signals in the original data. Thus, the proposed method can be used to detect abnormal ECG signals with excellent sensitivity. In this case, because the ECG signals were non-steady and nonlinear, Fourier analysis could not be applied. EMD was combined with IMF to handle the original lead-I ECG signals, similar to a noise filtering function, and the reconstructed signals were converted into MSE profiles to classify abnormal signals. The parameters of the fourth-order polynomial model were used to accurately monitor abnormal signals.
Case Study II
These two profiles corresponded to abnormal ECG signals in the original data. Thus, the proposed method can be used to detect abnormal ECG signals with excellent sensitivity. In this case, because the ECG signals were non-steady and nonlinear, Fourier analysis could not be applied. EMD was combined with IMF to handle the original lead-I ECG signals, similar to a noise filtering function, and the reconstructed signals were converted into MSE profiles to classify abnormal signals. The parameters of the fourth-order polynomial model were used to accurately monitor abnormal signals. To verify the detective performance for the proposed monitoring system, the abnormal signal of lead-I for irregular heartbeat was also tested to obtain the evaluated results. A rational control limit was established to adopt 60 signals of two sources: irregular heartbeat (20 sample signals) and normal ECG signals (40 sample signals) via LDA classification. The database for simulated ECG signals included 160 normal signals and 40 abnormal signals for an irregular heartbeat. Each simulation randomly sampled 80 normal ECG signals and 20 abnormal signals to implement the detective task. Using the simulations of 10 repetitions, the accuracy rate was calculated and the detective performance was evaluated. The results of the repeated random tests are shown in Table 4. According to Table 4, the accuracy of over 65% was achieved in monitoring irregular heartbeats, whether or not the EMD procedure was applied. However, it was also found that the utilized EMD procedure improved significantly. Comparing Table 4 and Figure 5, if the control limit was constructed to use the samples of a lower proportion, it also induced an inferior detection effect. Therefore, the EMD procedure and the control limit to be established are the critical operations and display a significant influence. To verify the detective performance for the proposed monitoring system, the abnormal signal of lead-I for irregular heartbeat was also tested to obtain the evaluated results. A rational control limit was established to adopt 60 signals of two sources: irregular heartbeat (20 sample signals) and normal ECG signals (40 sample signals) via LDA classification. The database for simulated ECG signals included 160 normal signals and 40 abnormal signals for an irregular heartbeat. Each simulation randomly sampled 80 normal ECG signals and 20 abnormal signals to implement the detective task. Using the simulations of 10 repetitions, the accuracy rate was calculated and the detective performance was evaluated. The results of the repeated random tests are shown in Table 4. According to Table 4, the accuracy of over 65% was achieved in monitoring irregular heartbeats, whether or not the EMD procedure was applied. However, it was also found that the utilized EMD procedure improved significantly. Comparing Table 4 and Figure 5, if the control limit was constructed to use the samples of a lower proportion, it also induced an inferior detection effect. Therefore, the EMD procedure and the control limit to be established are the critical operations and display a significant influence.
Conclusions
In this study, EMD was combined with IMF to process original signals and analyze the frequency domains for abnormal signals. The following conclusions were obtained: 1.
The sound and vibration frequency signals were complex and unstable, but EMD removed unexpected signals and detected abnormal ones. Even when abnormal frequencies were placed at different time points, EMD was more effective than directly converting the original signal into an MSE profile. This demonstrates that monitoring a specific sound frequency indeed improves the identification of abnormal sound frequencies.
The proposed method could also be applied to ECG signals. 2.
Good model fitting was obtained by converting the MSE profile into a fourth-order polynomial model. Although a complex model has a better fit than the third-order polynomial model, the latter is still advantageous owing to its fewer parameters and highly accessible LDA classification structure.
Although the selection of IMF vectors for signal combination and reconstruction is a critical procedure for monitoring sound frequency and ECG signals, it is still impossible to obtain high sensitivity if IMF vectors with features that can be clearly identified or excessive original signals are removed. The IMF selection procedure still needs to be improved. In future research, the proposed method may be combined with deep learning to obtain better combinations of sound frequency signals (or ECG data). Because signals reconstructed with EMD are easier to identify, multivariate control charts would be helpful for online monitoring if combined with a better IMF selection and reconstruction theory. However, different control graphs have different degrees of sensitivity. Therefore, another future research topic is combining classification processes and selecting a control graph with a lower probability of deviation. | 7,135.8 | 2021-08-26T00:00:00.000 | [
"Engineering",
"Medicine"
] |
A computational model to predict bone metastasis in breast cancer by integrating the dysregulated pathways
Background Although there are a lot of researches focusing on cancer prognosis or prediction of cancer metastases, it is still a big challenge to predict the risks of cancer metastasizing to a specific organ such as bone. In fact, little work has been published for such a purpose nowadays. Methods In this work, we propose a Dysregulated Pathway Based prediction Model (DPBM) built on a merged data set with 855 samples. First, we use bootstrapping strategy to select bone metastasis related genes. Based on the selected genes, we then detect out the dysregulated pathways involved in the process of bone metastasis via enrichment analysis. And then we use the discriminative genes in each dysregulated pathway, called as dysregulated genes, to construct a sub-model to forecast the risk of bone metastasis. Finally we combine all sub-models as an ensemble model (DPBM) to predict the risk of bone metastasis. Results We have validated DPBM on the training, test and independent sets separately, and the results show that DPBM can significantly distinguish the bone metastases risks of patients (with p-values of 3.82E-10, 0.00007 and 0.0003 on three sets respectively). Moreover, the dysregulated genes are generally with higher topological coefficients (degree and betweenness centrality) in the PPI network, which means that they may play critical roles in the biological functions. Further functional analysis of these genes demonstrates that the immune system seems to play an important role in bone-specific metastasis of breast cancer. Conclusions Each of the dysregulated pathways that are enriched with bone metastasis related genes may uncover one critical aspect of influencing the bone metastasis of breast cancer, thus the ensemble strategy can help to describe the comprehensive view of bone metastasis mechanism. Therefore, the constructed DPBM is robust and able to significantly distinguish the bone metastases risks of patients in both test set and independent set. Moreover, the dysregulated genes in the dysregulated pathways tend to play critical roles in the biological process of bone metastasis of breast cancer. Electronic supplementary material The online version of this article (doi:10.1186/1471-2407-14-618) contains supplementary material, which is available to authorized users.
Background
Metastasis is the main cause of death in breast cancer [1,2], and bone is the organ suffering from metastasis most frequently [3]. Breast cancer patients with bone metastases may suffer marked decreased mobility, pathologic fractures, neurological damage and other symptoms, and the patients with high risks of bone metastases should take agents tailored treatments [4,5]. Thus for cancer therapy, it is essential to identify the prognostic factors which can help to identify the patients with high risks of bone metastasis [4][5][6].
Because the ability of tumour cells metastasizing to a specific organ is an inherent genetic property [7,8], it is possible to predict bone metastasis of breast cancer by using gene expression profiles [8]. However, up to now only several researches have attempted to identify bone metastasis related genes from gene expression data [3,[9][10][11], and only one in which [3] has made use of the identified genes as signature to construct classification model for predicting bone metastasis risk of breast cancer. What is more, the published work just considered very limited number of samples when selecting gene signatures and did not perform strict independent tests on any larger data set. As breast cancer is a heterogeneous disease, the characters associated with metastases may vary widely across different patients [1]. Insufficient patient samples would not cover all aspects of the metastases, thus gene signatures selected from small number of samples may not be credible enough. In fact, it has been found out that the gene signatures identified using one data set may perform badly on another data set [12][13][14].
In recent years, several methods have been used to derive gene sets that are related to specific biological functions, such as protein-protein interaction network [15], pathway [16], GO Term [17], and so on. For example, the gene set statistics method [17] infers the activity of one gene set by counting all expression levels of genes in the set, and then uses the activity to build the classifier to predict the metastasis risk of breast cancer. Extracting gene sets rather than selecting single genes can provide more stable signatures, thus can construct classifiers with higher performances [18]. However, most of the existing methods consider all genes in the same set equally without noticing that some genes are less important than others. In fact in a pathway or other kind of gene set, only a part of genes would be dysregulated during the metastasis process of cancer. Although Lee et al. just considered a subset of the genes to infer the activity of each pathway, and used all activities to construct a model to classify cancer patients [18], there are still two drawbacks. Firstly, this method uses the inferred activities instead of the gene expression levels to construct the classifier, resulting in the loss of some important information for classification. Secondly, some pathways not involved in the disease process may be considered improperly, leading that some noises could be imported into the prediction model.
In this work, we present a new prediction model, Dysregulated Pathway Based prediction Model (DPBM), to predict the risk of bone metastasis of breast cancer ( Figure 1). To get enough samples, we integrate four breast cancer sets together to obtain 855 breast cancer samples, from which we select genes that are significantly correlated with bone metastasis of breast cancer by using bootstrapping strategy. The selected genes are also called as candidate genes. After that, we identify KEGG pathways that are enriched by the candidate genes as abnormal pathways in the bone metastasis process. We call these pathways as dysregulated pathways and the candidate genes involved in the dysregulated pathways as dysregulated genes. Since different pathways are involved in different aspects of the bone metastasis process, the genes related to them can correspondently be divided into different functional groups. Therefore, we can use the dysregulated genes in each pathway to construct one submodel, and then integrate all sub-models into an ensemble model (DPBM) to predict the bone metastases risks of breast cancer patients by majority voting strategy. We evaluate DPBM both on test set and independent set in terms of prediction accuracy and robustness. We also investigate the topological characteristics of the dysregulated genes in protein-protein interaction network and their functional annotations, trying to uncover the biological mechanisms that play important roles in bone metastasis of breast cancer.
Data sets and pre-processing
We have downloaded gene expression profiles of breast cancer patients along with the clinical information from UNC microarray database [8]. The downloaded data consists of four microarray data sets: GSE2034 [19], GSE2603 [20], GSE12276 [21] and NKI295 [22], and has been processed and normalized by the original paper [8]. Details of these data sets are shown in Table 1. In our work, GSE2034 was used as an independent test set. As for the other three data sets, we randomly selected 2/3 samples as the training set and the remainder samples as the test set. As a result, we got a training set consisting of 380 samples (113 are bone metastases and 267 are free of bone metastases) and a test set containing 189 samples (56 are bone metastases and 133 are free of bone metastases). In these data sets, if the first metastasis organ of a patient is bone, then the status is set as bone metastasis, otherwise it is set as free of bone metastasis (including cases of non-bone metastases and non metastases).
We have also downloaded the human protein-protein interactions from the HIPPI (Human Integrated Protein-Protein Interaction rEference) [23], and the pathways from the Molecular Signatures Database (MSigDB) [24].
Selecting candidate genes by bootstrapping
As is known to all, t-test is a popular method used to select discriminative genes, thus it could be used in our work. However, t-test method requires that every sample must be attached with a class label. While in our work, for the reason that the clinical information of some patients is censored, not every sample can be assigned as either low-risk or high-risk of bone metastasis according to the widely used criterion that patients who are bonemetastasized within a threshold of years belong to highrisk group, and patients who are free of bone metastases and survive longer than the threshold belong to low-risk group, which results that some valuable samples not satisfying the criterion have to be removed from the training set if t-test method is used. Different with t-test method, however, the Cox proportional hazards regression can involve all samples into the calculation, thus it is more proper for our work to select the bone metastasis related genes.
In this work, we used a simple bootstrapping strategy to select candidate genes of which expression levels were significantly correlated with the bone metastasis risk. Concretely, we first randomly selected 3/4 of all the 380 samples from the training set; and then for each gene, we applied Cox proportional hazards regression to calculate the coefficient between the gene expression level and the bone metastasis risk across the chosen samples. The above procedure was repeated 400 times, and the genes with Cox p-values less than 0.05 in more than 80% of all runs were regarded as the candidate genes. For every selected gene, its averaged Cox coefficient and Cox p-value over all the 400 runs were set to be its final corresponding values for further calculations.
Identifying the dysregulated pathways
The candidate genes are those significantly correlated with bone metastasis risk. If the candidates are enriched in a pathway (that is, the overlap of the candidate genes and the genes in the pathway is significant), then we call this pathway as a dysregulated pathway. In this work, we applied the widely used GSE2034 was used as an independent set. The other three data sets were combined into one merged set, from which we randomly selected 2/3 samples into the training set and the other 1/3 samples into the test set. Figure 1 The framework of DPBM prediction model.
hyper geometric cumulative distribution function to test the significance of the overlap: Where x stands for the size of intersection set; K represents the number of the candidate genes; N stands for the number of the genes in the pathway; and M represents the number of all genes in our calculation (the universal gene set). For a pathway, if the p-value is less than 0.05, then it is considered as the dysregulated one; and the genes belonging to the intersection set are called as dysregulated genes.
Constructing the DPBM
With the hypothesis that one dysregulated pathway may describe only one aspect of the bone metastasis mechanism, while all dysregulated pathways can provide a comprehensive view of the bone metastasis, we adopted the ensemble strategy [14] to construct DPBM to predict the bone metastases risks of breast cancer patients. We chose the dysregulated genes in each dysregulated pathway as features to construct a sub-model to distinguish the bone metastases risks of the patients, and all the sub-models were integrated as DPBM by majority voting strategy.
To construct each sub-model, we used a simple strategy, similar to the Gene expression Grade Index (GGI) [25], to calculate the bone metastasis risk for every patient, shown as the following equation: Where x i (x j ) represents the expression level of the dysregulated gene i (j) which has a positive (negative) Cox coefficient with metastasis risk. The higher the Risk-Score is, the greater the risk of bone metastasis. We applied 10-fold cross validation test to set the proper threshold value of RiskScore. In each run, the n-th smallest riskScore value (n is the number of training patients free of bone metastases) in the training samples was set as the cut-off to determine the class labels of the test samples, based on which, the performance (log rank test) can be obtained. The final threshold value was set as the one with the best performance in ten runs. Any patient with RiskScore value greater than this threshold is considered as high-risk of bone metastasis by this sub-model, otherwise it is considered as low-risk of bone metastasis.
For a patient, if more than half sub-models vote for "high-risk of bone metastasis", it will be finally predicted as "high-risk of bone metastasis" by DPBM, and vice versa. In order to assess the performance of DPBM, we used the log rank test to evaluate the significance of the risk differences between the patients in two groups. Kaplan Meier curves and the log rank test were performed using a tool (http://www.mathworks.com/matlabcentral/fileexchange/ 22317-logrank).
Topologically investigating dysregulated genes in PPI network
Protein-protein interaction network has been successfully applied to select signature genes [26]. For example, Hase et al. illustrated that the signature genes tended to have bigger degrees in the network [27]; and Yao et al. reported that the signature genes were usually with higher betweenness centralities in the network [28]. Thus we investigated two network topological coefficients (Degree and Betweenness Centrality) of the selected dysregulated genes by comparing with candidate genes (dysregulated genes excluded) and all genes in the PPI network (dysregulated genes excluded). The differences of the topological coefficients between the dysregulated genes and other two kinds of genes were tested by the Mann-Whitney-Wilcoxon non-parametric test for two unpaired groups. And the topology analysis of PPI network was performed by the Network Analyzer plug-in for Cytoscape [29].
Investigating dysregulated genes by functional analysis DAVID [30] was applied to extract the GO Terms (Biological Processes) which were significantly enriched by the dysregulated genes and the ones with p-values less than 0.05 were set as enriched GO Terms. All enriched GO Terms were clustered into several functional groups by the functional annotation clustering method with the default threshold of enrichment score [30].
Dysregulated pathways and genes
By bootstrapping method, we selected out 267 candidate genes (Additional file 1: Table S1), from which we got 35 dysregulated genes involved in eight dysregulated pathways (Table 2). In order to validate our strategy, we also used t-test to select the discriminative genes between the patients of the high-risk group and the low-risk group (see Additional file 1: Supplementary Methods), based on which, the dysregulated genes as well as dysregulated pathways can be gotten by using the similar strategy to ours. As a result, most of the identified dysregulated pathways and genes based on the candidates selected by bootstrapping method are significantly coincident with those selected by t-test method (Additional file 1: Figure S1). Moreover, most of the dysregulated pathways and genes are shown to be related to bone metastasis in literature.
Some cytokines have been reported to be related to breast invasion and metastasis site [31], while cytokine receptor interaction pathway has been found significant in our work. What is more, the dysregulated genes IL2RG, IL6R, IL7R and TGFB2 have been reported to be associated with metastasis site or prognosis [31], and CCR6 is associated with both live metastasis in breast cancer [32] and bone metastasis in human neuroblastoma [33]. Chemokines and their receptors have been shown to play critical roles in determining the metastatic destination of tumour cells [34]. In our work, the chemokine signalling pathway is also enriched with the candidate genes. In the meanwhile, among the nine dysregulated genes, Jak2 has been reported to be mediated by IL6 to involve in bone metastasis [35]; CCR6 is associated with bone metastasis [33]; PPKX regulates endothelial cell migration and vascular-like structure formation [36]; XCL1 and CCL19 are associated with organ specific metastasis [34,37].
Cell cycle pathway plays an important role in tumorigenesis and cancer prognosis [38], and it has also been found to be dysregulated in our work. Among its dysregulated genes, CCND2 is differentially expressed between breast cancer patients with bone metastases and other patients [11]; E2F1 can regulate DZ13 to induce a cytotoxic stress response in tumour cells metastasizing to bone [39]; TGFB2 is related to the bone metastases development [40].
It is interesting that non-small cell lung cancer and pancreatic cancer pathways have also be found dysregulated in bone metastasis. In fact, lung is the organ with the second frequent metastasis for breast cancer [8], and it has been reported that some breast cancer would metastasize to pancreatic [41]. This phenomenon suggests that either lung cancer or pancreatic cancer might share some common mechanisms with bone metastasis of breast cancer, for the dysregulated genes E2F1 [39] and TGFB2 [40] in pancreatic cancer pathway have been shown to be also involved in bone metastasis process; while E2F2 gene, the family member of E2F1, has been found to be the dysregulated gene in the non-small cell lung pathway.
We have also found that three immune related pathways have been dysregulated in bone metastasis of breast cancer: natural killer cell mediated cytotoxicity pathway, T cell receptor signalling pathway and primary immunodeficiency pathway. In fact, some immune related genes are essential in bone metastasis of breast cancer [42][43][44], and their family members, such as FAS, IL2RG and IL7R, have shown dysregulated in our work and have been reported to be either metastasis related or bone metastasis related [31,35,45]. Now that references [3,[9][10][11] have published bone metastasis related genes, we merged all the reported genes and investigated the overlap with our dysregulated genes. It is surprising that there are only four common genes (Additional file 1: Figure S2) between two sets of genes. We thus investigated the functions of published genes and found that they are most enriched in 'metabolic process' (data not shown), while our dysregulated genes are mainly related to immune system. By literature investigation, we further found that the immune cells can play essential roles in bone metastasis or metastasis of cancer [42,44], which illustrates that our dysregulated genes are The first column contains the names of the pathways; the second column contains the enrichment p-value of the candidate genes to the pathways; the third column (Gene ID) and the forth column (Gene Symbol) contains all candidate genes in the pathways; the fifth column contains the average Cox coefficients of the genes in the 400 runs; the fifth column contains the average p-values of the genes in the 400 runs and the last column contains the stability of the genes in the 400 runs (the ratios of the genes are significant across all the 400 runs). In the table, there are 35 unique genes (some genes may be present at more than one pathways).
related to some new biological mechanism of bone metastasis, compared to the reported genes.
Distinguishing bone metastasis risk by DPBM
From the training set we have extracted eight dysregulated pathways for bone metastasis in breast cancer, based on which, eight sub-models were constructed and then integrated into DPBM for predicting the bone metastases risks of patients. Therefore, we decided to evaluate DPBM on the training set, test set and independent set respectively. Just as expected, DPBM performed well in the training set. Among all the 380 patients, 308 have been classified as low-risk of bone metastases, and 72 as high-risk of bone metastases. The hazard ratio of the two groups was 3.25 (95% CI 2.21 -4.78), with p-value of 3.82E-10 ( Figure 2a).
Then we validated DPBM on the test set and found it also performed very well. Among the 189 patients, 150 samples were predicted as low-risk and the others as high-risk. Survival analysis showed that the hazard ratio was 2.89 (95% CI 1.67 -5.00), with p-value of 0.00007 (Figure 2b).
It is notable that both the training and test sets belong to the same integrated data set, the test set is hardly independent with the training set even though it has not taken part in the construction of DPBM. Therefore, it would be bias to evaluate DPBM just with the test set or even with the training set. Herein, we also used a completely independent set, GSE2034, to evaluate DPBM. The result shows that DPBM consistently performed well in the independent set. Among the 286 samples, 218 patients were predicted as low-risk group and the other 68 ones were assigned into the high-risk group. The hazard ratio between the two groups was 2.35 (95% CI 1.44 -3.83), and the p-value of log rank test was 0.0003 ( Figure 2c).
We noticed that different types of samples in any of the training, test and independent sets are imbalanced, which would lead to the overestimation problem. In order to address this issue, we also used random sampling methodology to choose the same number of cases from high-risk and low-risk groups and re-evaluated the DPBM on each of three data sets. We repeated the above process 1000 times, and the means of hazard ratios for training test and independent sets were 3.31 (p-value of 2.49E-04), 3.15 (p-value of 0.0082) and 2.48 (p-value of 0.015) respectively (Additional file 1: Table S2). The results further unveil the robustness of our model. In the meanwhile, the stable performance of the DPBM also indicates the reliability of the dysregulated genes identified by our method.
Topological analysis of dysregulated genes in PPI network
The degrees and betweenness centralities of three groups of genes (35 dysregulated genes, 232 candidate genes (the dysregulated genes excluded), all genes (the dysregulated genes excluded) in PPI network) are shown in Figure 3(a) and Figure 3(b) respectively, where three gene groups are correspondingly denoted as 'Dysregulated genes', 'Candidate genes' and ' All genes'.
From Figure 3(a), it is clear that the dysregulated genes tend to have bigger degrees than the other two groups of genes, and the p-values of dysregulated vs candidate genes, dysregulated vs all genes are 2.29E-04 and 4.86E-07 respectively. Moreover, Figure 3(b) demonstrates that the betweenness centralities of the dysregulated genes are usually bigger than the other two groups of genes (with p-value = 1.17E-05 and p-value = 1.68E-08 separately). From above results we can see that the dysregulated genes take up more important positions in the PPI network than the other genes, and tend to be essential genes for the bone metastasis.
Difference between bone and non-bone metastasis
We noticed that there are also some samples metastasized to other organs instead of bone in the data sets. By using the same strategy as we have done for bone metastasis, we have found nine dysregulated pathways and a total of 67 dysregulated genes related to non-bone metastases (metastases to other organs except for bone) (Additional file 1: Table S3). Therefore, we investigated the different functional groups to which these two kinds of genes belong, with the purpose of uncovering the biological mechanism of bone specific metastasis. By function annotating and clustering, the 35 dysregulated genes of bone metastasis were found to belong to 16 functional groups (Additional file 2: Table S4), and the 67 dysregulated genes of non-bone metastases were found to belong to 15 functional clusters (Additional file 3: Table S5).
By comparison, we found that these two kinds of genes shared a lot of common functional clusters. For example, cell differentiation related cluster, cell cycle related cluster, cell migration cluster, apoptosis related cluster, hormone stimulus related cluster, phosphate metabolic process and phosphorylation related cluster. As is known to all, cell differentiation, cell cycle, cell migration, and cell apoptosis are all famous caner hallmark related GO Terms that are related to cancer and cancer prognosis [46][47][48], while hormones are related to the risk of breast cancer and hormones-replacement therapy is a common therapy for breast cancer patients [49]. In addition, phosphorylation of some proteins have been reported to be related to breast cancer [50] and cancer prognosis [51].
The main difference between these two kinds of dysregulated genes was that dysregulated genes of bone metastasis are also enriched in biological processes associated with immune system, whereas dysregulated genes of non-bone metastases were not. The difference suggests that the immune system may be essential in the bone specific metastasis of breast cancer.
Comparing DPBM with other classification methods
In DPBM, we simply used a cut-off of the RiskScore in each dysregulated pathways to make a prediction, instead of training a complex classifier such as SVM (Support Vector Machine). In order to evaluate this option, we herein adopted two strategies to construct SVM classifers and investigated their performances. By one strategy, we used the RiskScore values of the eight dysregulated pathways as eight features to construct a SVM classifier. By the other strategy, we used all the 35 dysregulated genes as features to construct another SVM classifier to predict the bone metastasis risk. To construct both SVM classifiers, the patients in the training set were labelled as high-risk or low-risk as described in Additional file 1: Supplementary Methods. The performances of these two kinds of SVM classifiers are listed in Table 3. The comparing results indicate the superiority of DPBM even through it adopts a simple classification strategy.
As far as we know, there is only one published work to construct a model for predicting bone metastases risks of cancer patients [3], by using SCC (shrunken centroids classifier) [52] method. Therefore, we also compared DPBM with SCC. Since the data set used in the original work is too small, we constructed SCC and evaluated its performances on our data sets (the training samples were labelled as high-risk or low-risk as described in Additional file 1: Supplementary Methods, and 35 dysregulated genes were used as features). The results are also listed in Table 3, from which we can see that our DPBM performs better than SCC that has been used in previous work [3].
Discussion and conclusions
Predicting the bone metastases risks for breast cancer patients is essential in cancer therapy, which is an urgent challenge now [5]. In this work, we have proposed a Dysregulated Pathway Based prediction Model (DPBM) to address this problem. We first selected the candidate genes (correlated with the bone metastasis) by bootstrapping strategy. Then we identified the dysregulated pathways enriched by the candidate genes. After that, we used the dysregulated genes in each dysregulated pathway to construct a sub-model to predict the bone metastasis risk separately. Finally, we combined all sub-models together by using majority voting strategy as an ensemble model, DPBM, to predict the risk of bone metastasis. Validation results on test set and independent set have shown the great prediction power of DPBM.
By literature investigation, most of the dysregulated pathways and dysregulated genes are related to bone metastasis. In addition, the dysregulated genes tend to have higher degrees and betweenness centralities in PPI network, suggesting that they play critical roles in the biological functions. By comparing the functional groups to which the dysregulated genes of bone and non-bone metastases belong, we found that the immune system may be essential in the bone specific metastasis of breast cancer.
All the results illustrate that the dysregulated genes may be good biomarker candidates. The facts that DPBM consistently performs well in both test set and independent set may be due to the following merits: (1) we used the pathways to filter the candidate genes, which can help to remove those genes less essential to the bone metastasis; (2) instead of selecting pathways or other functional gene sets via the activity differences between different phenotypes, we selected the dysregulated pathways enriched by the discriminative genes, which can help to preserve the useful information for classification and reduce noises; (3) we constructed one submodel based on each dysregulated pathway, and then combined all sub-models by majority voting strategy. The ensemble classifier usually performs better than simple classifiers [53].
In this work, although we have collected 855 samples, the samples with the metastases to other specific organs are still insufficient, that is why we merged all samples with metastatic tumour of the other organs as one group (non-bone metastases group). This is reasonable for us to understand the difference between the bone metastasis and other organ metastases. Of course, if the samples with other organ metastases are sufficient, the differences among different metastases organs may also be well studied.
Additional files
Additional file 1: This file contains two supplementary methods, three supplementary tables (Table S1 -Table S3) and two supplementary figures ( Figure S1 -Figure S2).
Additional file 2: Table S4. (Functional clusters of dysregulated genes in the metastasis process to bone). This file describes the functional clusters of dysregulated genes involved in the bone metastasis process.
Additional file 3: Table S5. (Functional clusters of dysregulated genes in the metastasis process to non-bone). This file describes the functional clusters of dysregulated genes involeved in the metastases processes to other organs. | 6,637.4 | 2014-08-27T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Operation strategies to achieve low supply and return temperature in district heating system
Low temperature is the most significant feature of the future district heating (DH) the 4th generation district heating (4GDH). The revolutionary temperature level (50–55/25°C) will improve the efficiency of heat sources, thermal storages, and distribution systems, meanwhile, bring huge potentials to renewable energies. One challenge of transition to the future DH is the compatibility of current customer installations and the future temperature level. The aim of this study was to find the temperature potential of Norwegian residential buildings for the future DH system. A reference apartment was created, and typical space heating (SH) system was designed. A detailed building and SH system model were built in Modelica® language, and simulation was conducted via Dymola environment. Different operation strategies: PI control of the supply temperature, weather compensated control of the supply temperature, and PI control of the return temperature were tested. The results of the study showed the average supply temperature could be as low as 56~58°C, and only limited time the temperature was above 60 °C, when the controlled supply temperature strategies were applied. For the case with controlled return temperature strategies, the average return temperature were 30 and 37°C, while the average required supply temperature could be 72 and 94°C. The conclusion was that the low supply temperature could be achieved through optimized operation strategies. Whereas, the low return temperature was not able to be achieved only by improving the operation
Introduction
District heating (DH) is an energy service, which moves the heat from available heat sources to customers. The fundamental idea of DH is to use local fuel or heat resources, which would otherwise be wasted, to satisfy local customer heat demands, by using heat distribution networks [1].
In historical development of DH, the three generations of DH have been developed successively. The 1 st generation DH system uses steam as heat carrier. Almost all DH systems established until 1930 use this technology. The 2 nd generation DH system uses pressurized hot water as the heat carrier, with supply temperature mostly higher than 100℃. These systems emerge in the 1930s and dominate all new systems until the 1970s. The 3 rd generation DH system still uses pressurized water as the heat carrier, but the supply temperatures are often below 100℃. The system is introduced in the 1970s and take a major share of all extensions in the 1980s and beyond [2].
The direction of DH development has been in favour of lower distribution temperatures [1]. In addition, low temperature is the most significant feature of the future DH -the 4 th generation district heating (4GDH). The revolutionary temperature level (50-55/25°C) will improve the efficiency of heat source, thermal storage, and distribution system, meanwhile, bring huge potential to renewable energies [3].
One challenge of transition to the future DH is the compatibility between current customer installation and future temperature level. Older buildings will continue to make up large share of building stock for many years (for Denmark and Norway, the share will be about 85-90% [4] and 50% [5] in 2030, respectively). Those buildings are usually equipped with space heating (SH) systems designed with supply temperature around 70°C or higher, thereby reduction of supply temperature would be expected to cause discomfort for the occupants [6]. However, studies show houses from the 70s or 80s without any renovation are possible to be heated with supply temperature of 50°C most of the year, and only limited time the supply temperature has to be above 60°C. If original windows of the houses are replaced, it is possible to decrease the supply temperature to less than 60°C for almost the entire year [4,7,8].
The aim of this study was to find the temperature potential of residential apartment buildings for the future DH systems in Norway. Reference apartment was created, and typical space heating (SH) in the apartment was designed. Different operation strategies to achieve low supply and return temperature were tested. The results were used to analyse the possibilities and limitations of different control strategies. As Table 1 and Fig. 1 show, apartments built before 1990 account for about 70% of the total apartments, and the thermal requirement of envelopes during the years before 1990 show minor changes. Therefore, the apartment built around the years 1970s or 1980s can represent the thermal conditions of the majority of Norwegian apartments. In addition, the Norwegian building code TEK69 can be chosen as representative standard of the period.
The reference apartment was chosen from the middle floor of one building, there were five rooms: a living room, a children room, a bedroom, a bathroom, and a kitchen. The total floor area was about 70 m 2 . The condition of the reference apartment was selected based on the statistics of Norway, about 46% of the dwellings have 4-6 rooms, and about 17% of the dwellings have the size of 60-79 m 2 [12]. Detailed information of the apartment is listed in Table 2, and the sketch of the apartment is shown in Fig. 2.
Only natural ventilation was considered, and the air exchange rate was 0.5 1/h, which is recommended 0.2~0.5 1/h in [9] and 0.5 1/h in the standard SN-CEN/TR 12831 [13]. The set indoor air temperature in the living room, the children room, the bedroom, and the kitchen was 20°C, which is the recommended value for category Ⅱ in the standard EN 15251 [14]. For the bathroom, the set indoor air temperature value was 24°C, considering the higher thermal comfort requirement. The simulation result for the heat demand of the reference apartment was 121 kW·h/(m 2 ·year), which was close to 156 kW·h/(m 2 ·year) from a similar research [15].
Weather data
Test reference year (TRY) provides weather data for one year that characterize the local climatic conditions over a reasonably long period of time. TRY is widely adopted to get reliable simulation results [16]. The method to determine TRY is presented in ISO 15927-4 [17]. TRY for Trondheim, Norway was used in this study. The detail parameters for the air temperature, solar irradiance, and wind speed are shown in Fig. 3.
Language and simulation environment
The model was built in Modelica® language [18], and the simulation was conducted via Dymola [19] environment. The components of the model were mainly from Modelica standard library [18], AixLib library [20], and Buildings library [21].
Apartment model
The apartment model was a high order model, which included all individual elements of envelopes and their spatial context. It could be used for in-depth analyses of building thermal behaviours. The overview of the apartment model is shown in Fig. 4. For the submodule of each room, the following physical processes were considered: transient heat conduction through walls, steady-state heat conduction through glazing systems, radiation exchange between room facing elements. The detail information and evaluation work are presented in [22].
Radiator model
The radiator model is presented in Fig. 5. The calculation methods of convective and radiative heat transfer were described in [23,24]. The calculation of water pressure loss was illustrated in [25]. The validation was conducted according to the standard EN 442-2 [26], and the simulation result was compared with the measured data from [27]. The results are presented in Fig. 6.
Thermostatic valve model
The behaviour of thermostatic valve (TV) depends on the characteristic of TV as well as the overall system. Both of them should be taken into account to build the TV model [28]. According to the standard EN215 [29] and researches in [28,30], the water flow rate through the TV depends on the difference between measured indoor air temperature and the closing temperature or opening temperature of TV. To simplify the control process, proportional integral (PI) controller is applied to approximate the performance of the TV in [31,32]. The PI controller in the TV model is shown in Fig. 7.
Space heating system model
The overview of the SH system is presented in Fig. 8. For each room one radiator was designed to satisfy the heat demand. Heat demand of each room during heating season and heat output of corresponding radiator at nominal condition are shown in Fig. 9. According to the standard SN-CEN/TR 12831 [13], the outdoor design temperature for heat load calculation is -12 °C, and the sizing of system is based on the calculated heat load. The room heat loads, nominal heat output of radiators, and system oversizing values are listed in Table 3. The oversizing values in this study agree with the median oversizing values from one investigation research, which is range from 15% to 25% [33] .
Scenarios
The considerations of different scenarios are listed as follows: • The scenarios with low return temperature: Low supply and return temperature reduce the costs of heat generation and distribution. Some DH companies incentivize their customers through motivation tariffs to reduce their temperatures in exchange of discount in their energy bills. Researches show that low return temperature has higher economic benefits, and DH companies care more about low return temperature than low supply temperature [34].
• The scenarios with low supply temperature: In the future, more renewables will be integrated into DH system. Low supply temperature will increase output of solar energy, raise coefficient of performance for heat pumps, and increase the power to heat ratio of combined heat and power plans [3].
• The scenarios with minimum supply temperature: The favourable conditions for legionella proliferation ranging from 25 to 45°C [35]. In the European standard CEN/TR16355 [36], drinking water installation without hot water circulation, should be capable of reaching the minimum of 55°C. For a drinking water installation with circulation, should be the minimum of 55 °C, and within 30 s after fully opening a draw off fitting the temperature should not be less than 60°C. Meanwhile, to decrease the required temperature, some researches recommend to use supplementary heating devices, and the supply temperature can be decreased to 40°C [35].
In this study, six scenarios were proposed, see Table 4. For those scenarios, the controlled supply or return temperature were adjusted once in an hour, based on outdoor temperature, or the difference between the set and the measured indoor air temperature. Meanwhile, the maximum supply temperature for all the scenarios was 130°C.
Results
The system supply and return temperature during heating season are presented in Fig. 10. The relation between the average supply temperature, the average return temperature, and the average temperature difference during heating season is shown in Fig. 11. The indoor air temperatures for different scenarios are shown in Fig. 12. The total heat rates of the flat for different scenarios are displayed in Fig. 13. A summary of all the results is given in Table 5.
As Fig. 10 and Table 5 show, the low supply temperature was achieved via the PI control in the scenarios TS_PI_NL and TS_PI_WL, and with the WC control in the scenarios TS_TC_NL and TS_TC_WL. The average supply temperature in those scenarios could be as low as 56~58°C, and only limited time the required supply temperature was above 60°C. In addition, compared with the WC control, the PI control shown an advantage lowering the supply temperature. The percentage of required supply temperature above 60°C was 17% when the PI controllers were applied, while, the corresponding value was 36% in the case of the WC controls. The reason was that the WC control is an open loop control strategy and the supply temperature is only decided by the outdoor air temperature, ignoring any other impact factors, such as heat gain from solar radiation, occupant, and other devices. Whereas, the PI controller is a feedback control strategy and any overheating caused by extra heat gain will be compensated by the change in the supply temperature. In that way, unnecessary high supply temperatures are avoided. In this study, heat gains from occupants and devices were not taken into account, even though they might show more advantages for the PI controller. This model extension will be a topic for the future work.
As Fig. 10 and Table 5 show, the low return temperature was achieved through the PI controller in the scenarios TR_PI_TC and TR_PI_TV. The average return temperature in those scenarios were 30 and 37°C respectively, and the temperature below 40°C covers most of the heating season, with share of 99% and 83%, respectively. However, one obvious disadvantage for those operation strategies was the high supply temperature. The average supply temperature in the scenario TR_PI_TC was 94°C, and sometimes in order to achieve the low target return temperature, the required supply temperature was even up to 130°C. One way to solve this issue is to use flexible target return temperature. Compared with the scenario TR_PI_TC with constant target return temperature, TR_PI_TV uses flexible target return temperature. The average supply temperature of the scenario TR_PI_TV decreased to 72°C, meanwhile the maximum supply temperature decreases to 111°C.
As Fig. 10, Fig. 11, and Table 5 show, there is a clear relation between the supply temperature, the return temperature, and the temperature difference. As shown in Fig. 12 and Table 5, different operation strategies show small differences in the indoor air temperature. During most of the heating season, 98~99% of the entire season, the range of the indoor air temperature is within 0.5°C around the set value. In addition, the results reveal the importance of indoor temperature control device, specifically TVs. Wellfunctioning TVs guarantee the indoor air temperature fluctuating within a certain range around the set values, no matter what operation strategy was applied. On the contrary, the indoor air temperature and the system return temperature would be with fault when the TVs have malfunctions [37,38]. As shown in Fig. 13 and Table 5, different operation strategies show little difference on apartment heat use. As mentioned before, heat gains from occupants and devices were not taken into account in this study, otherwise energy savings from the PI controls would become bigger. More advanced control strategy, such as model predictive control (MPC) can bring even more energy savings and make the control process more smooth [39].
Conclusion and discussion
This study aimed to optimize the operation strategy, and achieve the low supply and return temperature of the DH system. A building and a SH model were built using Modelica® language, and simulation was conducted in Dymola environment. Six scenarios, with controlled supply temperature or return temperature, were analysed based on the model. The low supply temperature was achieved through the controlled supply temperature operation strategies: the PI control and the WC control. The average supply temperature could be as low as 56~58°C, and only limited time the required supply temperature was above 60°C, with 17% and 36% of the heating season, respectively. Meanwhile, the low return temperature was achieved through controlled return temperature operation strategy, the PI control. The average return temperature could be 30 or 37°C, while the temperature below 40°C covered the most time of heating season, with share of 99% and 83%, respectively.
One question come from this study, whether it is possible to achieve the low return temperature through optimizing operation strategy, without inappropriate high supply temperature. The results showed a clear coupling among the supply temperature, the return temperature, and the temperature difference between them. When the strategy with the constant target return temperature of 30°C was applied, the average supply temperature was 94°C, and during the coldest days, it was even up to 130°C, which is too high for the secondary side of DH system. One way to mitigate this issue was using flexible target return temperature. After the flexible target return temperature from 30 to 50°C was applied, the average supply temperature decreased to 72°C, and the maximum supply temperature decreased to 111°C. The results were still some distance from the temperature requirement of 4GDH, which is 50-55°C for the supply temperature, and 25°C for the return temperature. However, please note that the results are valid for the existing apartment building, built before 1980s and not for new buildings.
Another conclusion was the importance of TVs. Different operation strategies in this study showed small differences in the indoor air temperature and heat use. During the heating season, about 98~99% of the time, the fluctuation of the indoor air temperature was within 0.5 °C around the set value. The results revealed the importance of TVs, which was the critical device to prevent overheating.
There were some limitations in this study. Renovations of buildings is a critical influencing factor in building energy analyses. Buildings have gone through reasonable renovations, such as changing the windows, use less heat and require a lower supply temperature. Building renovation was not taken into account in this study. In addition, the average heat gains from occupants and equipment can be assumed as 0.81 and 1.55 W/m 2 respectively [34]. If those heat gains were added, the oversizing of radiators would increase 5~10 %, the final value of oversizing will range from 21 to 27%. Under such condition, the required supply temperature could be further lower. Meanwhile, simplified occupant behavior mode was applied in this study, with constant set indoor temperature and fixed air exchange rate. Studies show occupant behaviour influence building energy use [40][41][42]. The simplification of the model may cause some inconsistent between simulation and reality. Finally, the research was conducted based on the simulation of two pipe SH system in an apartment with five rooms. To obtain more general and proper conclusions, further researches and experimentation studies are needed. | 4,064.2 | 2019-05-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Hydrogen Environmental Bene fi ts Depend on the Way of Production: An Overview of the Main Processes Production and Challenges by 2050
of electrocatalyst composites based on Prussian Blue Analogue supported in graphene-based substrates for water oxidation reaction at neutral pH.
Introduction
Hydrogen is the smallest and lightest element in the periodic table (atomic ratio 53 pm and atomic mass 1.008), and surprisingly, it is the most abundant element in the whole universe. It is the tenth most abundant element on Earth (0.14%), [1] and it can be found in our atmosphere (0.6 ppm), [2] in water, in organic molecules, or other chemical compounds. In the same way, hydrogen can be found in large quantities in the sun. [3,4] Moreover, hydrogen represents 73.4% of the sun's mass, being responsible for 85% of its energy, that comes from hydrogen atoms fusion, forming helium and releasing a huge amount of energy, %10 34 J Year À1 . [5] The first report of molecular hydrogen is dated at the beginning of the 16th century when a gas was identified as a product of the reaction between sulfuric acid and iron. This gas was first identified as a unique substance by Henry Cavendish in 1776; however, it was only named in 1788 by Antoine Lavoisier, who named the substance from the Greek roots "hydro" (water) and "genes" (creator). Since then, H 2 has been extensively studied and used for a wide range of applications. [6] Nowadays, one of the most important applications of hydrogen is in the petrochemical industry, including hydrocracking (hydrogenation to produce refined fuels with smaller molecules and higher H/C ratios) and hydroprocessing (hydrogenation of sulfur and nitrogen compounds to further remove them as H 2 S and NH 3 ) for the purification of petroleum and fuels. In addition, hydrogen is essential in the base industry especially through the synthesis of ammonia from the direct reaction with N 2 at high temperatures and pressure in the well-known Haber-Bosch process. [7] It is worth mentioning that ammonia is fundamental for fertilizers production and by the improvement of agricultural performance. Hydrogenation can also be applied to decrease the degree of unsaturation in fats and oils and on some fine chemical synthesis. Hydrogen has been used in the electronics industry as a protective and carrier gas, in deposition processes, for cleaning, in etching and reduction processes. Another example is its use in the metallurgic industry in the reduction stages and also in the direct reduction of iron ore, which involves the separation of oxygen from the iron ore using hydrogen and synthesis gas (syngas). A strategic application of the H 2 is to consider it as a fuel, being able to be applicable for direct combustion, by itself or in some blends with natural gas, and also in fuel cells (FCs), where it can provide a reliable and efficient energy power, that can be used in stationary power stations and also as a good candidate for transportation vehicles. [3,6,[8][9][10][11] Although presenting great potential for several applications, according to a sense from 2018, [7] 51.70% of total H 2 worldwide is used for refining, 42.62% is used for ammonia production, and only 5.68% is used for other applications, including its use as a clean and renewable fuel.
Nevertheless, H 2 application as a renewable fuel is the most promising application for the future, and its main advantage is related to its cleanliness and low greenhouse gas emissions, which is determined by the hydrogen production pathway (HPP). Therefore, the study and understanding of every HPP are essential for the development and advance of the so-called "hydrogen economy," mainly focused on the use of green hydrogen. During the analysis of HPP, a few primary challenges must be conquered, such as the choice of the feedstock-(FF or water), the energy source needed to extract hydrogen from the feedstock, and the catalyst that is needed to overcome some kinetic and thermodynamic limitations that are present regardless of the process. [4,12,13] A meticulous study on how to overcome these challenges can help the development of an efficient and economically viable green HPP, which can contribute to a more sustainable future.
Due to these advantages and the extreme importance of hydrogen for our society, this review has presented a discussion about hydrogen production processes. First, H 2 was classified according to its color codes, which reflects how sustainable are the processes. After this, the reforming processes used for hydrogen production were described, highlighting the advantages and drawbacks. The same approach was used to describe the hydrogen produced from the water (electrolysis) where the technologies are converging to net-zero carbon emission goals, such as the green hydrogen. In addition, it included a detailed discussion about the source of water (wastewater and seawater) to hydrogen production, including biohydrogen. Technologies of hydrogen production outside the Earth were also included to motivate the scientific community to adopt new technologies. By the end, a summary of the hydrogen value chain addresses topics related to the financial aspects and perspective for 2050: green hydrogen and zeroemission carbon.
The Cleanliness Level of Hydrogen: Representation by Colors Code
The level of cleanliness of the energy produced from hydrogen is related to the amount of greenhouse gases produced during H 2 production. Furthermore, the sustainability of all energy chain also depends on the energetic input, the type of raw material, the design of the industrial process, and CO 2 emissions. [14,15] An interesting approach for classifying carbon emission during hydrogen production is the use of color labels. The color codes of the hydrogen production process might be the statement of sustainability from the suppliers to the consumers. This strategy allows a fast indication of the kind of hydrogen (in terms of carbon emission) you or a company are handling. Therefore, it is expected an environmental responsibility and greater competitiveness from H 2 suppliers by sustainable products. [16,17] The first proposed model for the H 2 classification is based on three colors, according to the CO 2 emission, as shown in Figure 1. Gray H 2 is produced through the steam reforming process and uses FFs as a raw material. In addition, there is no restriction to carbon emission, and it is considered "dirty" hydrogen. The process to produce blue H 2 is similar to the gray one; however, the produced carbon is captured and stored, decreasing the CO 2 emissions. On the other hand, green hydrogen is considered as a renewable hydrogen due to the use of water as a source of H 2 and renewable energy (RE) in the electrolytic process (water splitting (WS) process), which fits with the zero-emission carbon approach. Figure 1 presents a comparative scheme of these three processes. [14,16,17] The H 2 chain is plural and complex, and because of this, new color codes were added to improve the description of the cleanliness level of the hydrogen production. Based on this concept, a complete color codes table can be found in Figure 2.
The brown hydrogen (black hydrogen can be a synonym) is produced from coal in the gasification process, which generates large amounts of CO 2 and high environmental impact, even though the low cost of produced H 2 is hard to achieve. Gray and blue hydrogen were described before. Like the brown, blue, and gray hydrogen, turquoise hydrogen is also produced from FFs, but the methane pyrolysis at high temperature allows the carbon elimination in solid form, which reduces the CO 2 emission. The key point of this strategy is the source of energy used and its carbon emissions. In other words, if the input energy is renewable, the process would be clean. Thus, it can have a lower environmental impact ( Figure 2). Pink, yellow, and green hydrogen are produced from the electrolysis process (also known as water splitting (WS)), and they use water as a raw material. However, the final environmental impact also depends on the input energy. Pink hydrogen is obtained from the electrolysis process powered by nuclear energy, and yellow uses the same strategy, but the H 2 is produced using the input of mixed origin (FF and renewable). Green hydrogen is produced by the cleanest process, where the water electrolysis is driven exclusively by RE. [14] The challenge for incorporation of the green hydrogen in the hydrogen chain is the cost. The price of sustainable hydrogen is approximately four times higher than those produced from the FFs process. [12,18] White color, for example, is used only to classify the H 2 from natural origin, and due to the rare occurrence on the Earth, there is no commercial interest. [19] This was the first proposal for the white H 2 . However, some authors have considered white hydrogen as a product of thermochemical WS produced by concentrated solar energy. [20] In addition, the company Recupera [21] has defined white hydrogen as H 2 produced from plastic, biomass, or garbage. The definition of white hydrogen is still open.
As hydrogen color code is directly related to its production pathway, in the next section, we present a brief discussion on the main HPP, obtaining gray and blue hydrogen from FFs. And then, a discussion on the best ways of obtaining green hydrogen is presented, along with the main perspectives for the future applications of hydrogen production.
Steam Methane Reforming
Steam methane reforming (SMR) represents the most important industrial pathway for large-scale production of H 2 , being responsible for %48% of the overall production of molecular hydrogen in the world. [22][23][24] The technique itself consists of three fundamental steps: syngas generation (Equation (1)), water-gas shift (WGS) reaction (Equation (2)), and hydrogen purification. [25] WGS is used to increase the hydrogen content and to convert CO into CO 2 , whereas in the final purification step, H 2 and CO 2 are separated by different methods. [26] The following reactions occur when CH 4 is used as feedstock [27,28] As it is shown, the SMR is very endothermic. Thus, it is necessarily an external heat source, and usually, FFs are used to reach operating temperatures between 800 and 900 C, which makes SMR a non-sustainable process. [27] Also, the steam-tocarbon (S/C) ratio plays an important role in the efficiency of methane conversion, and after several attempts, an optimum value in which coke formation is prevented at high temperatures (around 800 C) was found to be in the range of 2.5-3.0, the steam being in excess. [27,29,30] Even though high temperatures and elevated steam pressures reforming operation conditions are needed, a catalyst is still required to speed up the reaction due to the high stability of methane. [27,31] In this sense, Ni-based catalyst can lower the activation barrier, thus increasing the reaction rate. [27,32] However, these Ni-based catalysts may be poisoned by sulfur and by the deposition of carbon. The latter can block pore structure and cover the active sites of the catalyst, decreasing its efficiency. Hence, to prevent S poison, a desulphurization step is added as the first process before the reaction with steam begins. [27] A support material (usually magnesium and aluminum spinel, MgAl 2 O 4 , or α-alumina) is used as support for Ni in the SMR process to prevent carbon formation on the active sites. [28] Catalysts based on Co, [33] noble metals, [31,34] Ru, [35] and Rh [36] are also used in SMR. However, the high cost of these metals is the main drawback for large-scale use. In some cases, a small percent of noble metal can be added to enhance the catalytic activity of Ni-based materials. From an economical point of view, the production costs per kg of H 2 in the SMR using Ni catalyst can reach about USA$ 2.08 when the carbon capture and storage (CCS) process is not included. On the other hand, these costs can up to USA$2.27 kg À1 H 2 À1 ) when CCS is applied, which reduces the environmental damage and still keeps the process economically competitive. [4,29] SMR is a process classified as gray hydrogen, and this way must be replaced by more sustainable processes in the next 30 years. The integration of SMR with CCS strategy changes the color code of the H 2 produced from gray to blue, and it is an important starting point for decarbonization for HPP.
Partial Oxidation Process
The partial oxidation process (POxP) is an attractive and cheaper alternative for H 2 production, because it minimizes large amounts of expensive superheated steam. [37] POxP basically involves the conversion of steam (H 2 O), O 2 , and different hydrocarbons into H 2 and CO (Equation (3)-(5)). [29] An important feature of this method is that heavier feedstocks, such as oil residues and even coal (gasification process), can be used, and this gives it a wide range of feedstock possibilities. Despite heavier oil fractions requests desulphurization, which increase costs, the overall process exhibits a competitive economic price.
POxP is performed at elevated temperatures and high pressures, and the understanding of the mechanisms reactions still remains a challenge. [38,39] An important dilemma emerges from this point, where the question is putting effort to study POxP and improve the process or change the focus to green hydrogen. Two reaction mechanisms have been proposed. [40,41] In the combustion and reforming reaction (CRR), the methane, for example, reacts with O 2 (first step) generating CO 2 and H 2 O. The remaining CH 4 reacts with steam and CO 2 by typical SMR and dry reforming processes, respectively, giving rise to a CO/H 2 mixture that will be further separated. In the direct partial oxidation (DPO) where the mixture CO/H 2 is formed in a single step via CH 4 þ ½O 2 ! CO þ 2H 2 . In addition, catalysts are needed to improve the process, turning it faster and more effective. They are usually made of group-VIII noble metals, such as Rh, Pt, Pd, Ir, and Ru, and non-noble metals, such as Ni and Co. [42][43][44] The conversion of CO with steam in a typical WGS reaction, hence, complements the process generating more H 2 and also CO 2 .
From the carbon emissions point of view, POxP is classified as gray, because a large amount of CO 2 is generated by the end of the process. However, if the CCS approach is used, the POxP color code can change from gray to blue. On the other hand, the PoxP using coal can be named gasification process, and it is the worst way to produce H 2 in terms of pollution, and because of this, the H 2 produced by this industry is labeled as brown or black hydrogen.
Autothermal Reforming
When partial oxidation and steam reforming are combined in the same reactor, a new route to produce hydrogen gas is founded, and the process is known as autothermal reforming (ATR). [45] In this system, the partial oxidation step generates the heat amount that will be consumed later on the steam reforming process, and the overall procedure is thermally neutral. When a general hydrocarbon is used, the following reaction takes place ΔH 0 % 0 As shown, the syngas is produced; however, the practical advantage of ATR is the combination of low-temperature operation conditions prevenient from partial oxidation and the high hydrogen/ carbon ratio from the SMR process. [46] Furthermore, ATR is usually used for generating hydrogen on a smaller scale. [47,48] The catalyst for ATR must be compatible with SMR and partial oxidation reactions, which is a real challenge. In addition, its selection should consider the fuel used, and there are two ways in which this process can occur, either using the same catalyst or using one catalyst for SMR reaction and another for partial oxidation step. [47,49] For fuels with lower molecular weight, a Cu-based catalyst is generally used, and for heavier hydrocarbons, Pt, Rh, and Ru catalysts or ion conduction ceria supported non-noble metal formulation, such as Fe, Co, and Ni, are used. [47] ATR is also classified as gray hydrogen because of the CO 2 emissions. Over again, the introduction of the CCS approach can change its classification to blue.
Methane Pyrolysis
Besides the previously discussed processes, another way that could be used to produce H 2 is methane pyrolysis. In this procedure, also called methane decomposition, thermal treatment is applied to convert natural gas into H 2 and C without CO 2 emissions. [50][51][52] The endothermic reaction is described as follows The solid carbon produced avoids the CO 2 emission and can be transported and stored permanently, which decreases the overall cost, because CO 2 sequestration units are absent in the system. Furthermore, the produced carbon may have some value for further purposes, such as color pigments or tires. [50,51] As a thermal input is necessary, methane pyrolysis is usually performed at high temperatures to reach a homogeneous reaction rate. In a view of that, the main drawback for this technique is the widely known coke formation, which occurs in the tubular reactor walls and may deactivate the catalyst, [50,51,53] such as supported metals or oxides.
In terms of carbon emission, the methane pyrolysis for H 2 production is classified as turquoise (see Figure 2). This color represents a cleaner process than the others based on FFs. Nevertheless, the cleanliness of the process depends on the energy input, because the energy produced from renewable generates low environmental impact processes.
Electrolysis of Water
The search for clean, renewable, and environmentally friendly hydrogen sources has made water an excellent feedstock candidate to produce hydrogen. [3,29] The production of clean H 2 from water occurs by a system known as WS, which in its simplest form uses an electrical current passing through two electrodes to complete the endergonic hydrolysis of water into hydrogen and oxygen. The overall process consists of two half-reactions, in which the anodic process is called oxygen evolution reaction (OER), where the water oxidation reaction (WOR) takes place, and the cathodic process is known as hydrogen evolution reaction (HER), where the hydrogen gas is produced, as it is shown in the following reactions at pH ¼ 0. [12,[54][55][56] The great limitation for hydrogen production through this process resides in the anodic reaction, where the oxygen evolution (water oxidation) takes place, which is the most energy-intensive and kinetically slow step in the overall WS process. The water oxidation process to oxygen implicates a complex electronic transfer involving four electrons and four protons. [57] Therefore, the WS is either kinetic or thermodynamic unfavorable, and in ideal conditions, a potential of 1.23 V (V equilibrium ) must be applied to the system to start the process. In addition, efficient and stable catalysts are required to decrease the overpotential of the reaction, and an external energy source, such as electricity (electrolysis) or solar (photocatalysis), must be used. [58,59] The system where electrolysis of water takes place is known as electrolyzer, which basically consists of a cathode and an anode separated by a membrane immersed in an electrolyte. So far, three main electrolysis cells are used and studied: alkaline electrolysis cells (AECs), proton exchange membrane electrolysis cells (PEMECs), and solid oxide electrolysis cells (SOECs). [60] Figure 3 shows these cells' setup and their main differences.
AEC has been widely used for industrial and large-scale applications since 1920, being already available, stable, and exhibiting a considerable low capital cost (around USA$ 1180 kW À1 ). [61] In addition, this cell was shown to operate for over 55 k hours, proving itself to be very stable, which is very important for its large-scale application. However, one of the main drawbacks for this cell is its low current density (<0.45 A cm À2 ) and high cell voltage (1.8-2.4 V), [62][63][64] which can increase the cost for hydrogen production. Thus, some developments need to be done to make a more suitable cell, with a significantly lower cost, which is still three times more expensive than steam reforming processes. [60,62,65,66] Figure 3. Main electrolysis cells technologies setup. They can be classified as: AECs, PEMECs, and SOECs. [60] www.advancedsciencenews.com www.advenergysustres.com PEMECs are based on a solid polymer electrolyte. They were developed in 1960 as an attempt to overcome the problems presented by the AECs, and after considerable effort in research and improvements, PEMEC has been reported a milestone in the electrolyzer field. [66,67] Membranes are the cornerstone for the PEMEC, and they are responsible for separating product gases, transporting protons, and supporting the cathode and anode catalyst layer. The most used membranes are based on a perfluorosulfonic acid polymer, such as Nafion, Fumapem, Flemion, and Aciplex. [62] Although all the beforehand cited polymers present great advantages for membrane applications, Nafion is the one that is usually used, due to its excellent chemical and thermal stability, mechanical strength, high durability, high proton conductivity, and the fact that it can operate at high current densities. [62,68,69] However, one of the main drawbacks of the use of Nafion is its disposal, which can be very expensive due to the presence of fluorine in the structure. Thus, alternative membranes have been studied, yet they present low current densities and low durability, which make them unviable. [62,70] Even in the same cell voltage as AECs, PEMECs present higher current density (1.0-2.0 A cm À2 ), efficiency, and great stability, operating for over 60 k h, being able to produce pure hydrogen. However, the main catalysts are made of noble metals, which increases the capital cost (around USA$ 2300 kW À1 ). [63,64] The system also requires pure water, which limits its application. Hence, studies have been made trying to reduce the system complexity and its cost, aiming to find less expensive materials. [61,66,70] SOEC is a more recently developed cell, and it is not widely commercialized, because the system has been demonstrated to work only on laboratory scales. The cell uses solid ion-conducting ceramics as the electrolyte, which allows it to operate at higher temperatures (900-1000 ºC). [67] These cells have high electrical efficiency, moderate current densities (0.3-1.0 A cm À2 ), operate at lower cell voltage (0.98-1.3 V), [63,64] low material cost, and also the option to operate in reverse mode as an FC or in co-electrolysis mode. Nonetheless, the high-temperature operation can cause material degradation, which is a huge drawback and elevates the capital cost (higher than USA$ 2400 kW À1 ). Therefore, the research in this area is focused on the development of catalyst materials that can outstand perform in high and low temperatures, further making it commercially viable. [61,66,67,71] Nevertheless, it is worth mentioning that all these data presented for the electrolyzers are directly dependent on the stack level in the cells, which have an impact on the cells performance, efficiency, and stability; therefore, it is an important parameter to be considered in the study of better systems for water electrolysis. [66] In overall WS electrochemical system, the actual operational voltage (V op ) is different than V eq , because it depends on the reaction's kinetics and on the cell design, being represented by the following equation where η A and η C are the overpotential for the anodic (OER) and cathodic (HER) reactions, respectively, and η Ω is the additional overpotential required to compensate for resistance losses within the cell. [58] In an ideal system, η A and η C would be close to zero, and V op would depend only on η Ω , which could be minimized by the cell design. Nonetheless, this is not what happens in reality, in which the reactions face a very high activation energy barrier due to the kinetics limitations, increasing the overpotential that needs to be overcome. [72] The system can become even more complex for photocatalysis, where semiconductor materials are incorporated into the electrode in a way that the solar energy is directly harvested, requiring more mechanistic steps and lowering the overall production efficiency, as it will be discussed further in Section 4.2. [57] Thus, the research in water electrolysis processes focuses on approaches to reduce this overpotential by improving electrodes, electrolytes, and catalysts, trying to unravel ways to boost reaction kinetics. [55,59,[73][74][75][76] Among the numerous catalysts that have been studied, Ru, Ir, and their respective oxides stood out as the state of the art for WOR, presenting the best electrocatalytic activities toward OER in both acidic and alkaline solution. [77] Ir, Ru, and their respective oxides activity as water oxidation catalysts are ordered in the following sequence: Ru > Ir % RuO 2 > IrO 2 . The overpotential needed to achieve a current density of 5.0 mA cm À2 is 300 and 400 mV for RuO 2 and IrO 2 , respectively. [78] Even though both oxides have a good performance and activity during water oxidation, their main drawbacks are their low stability and high cost. [79] The state-of-the-art catalysts for the HER are based on Pt, which is found to be an efficient electrocatalyst, exhibiting a near to zero overpotential and high current densities. [56,80] Pt-based catalysts present an overpotential of 0.05 V in acid media and are able to keep the same value even after 2 h of reaction. At an overpotential of 0.1 V, they are also able to achieve the current densities of 110 AE 70 and 220 AE 80 mA cm À2 for two different studies with platinum electrodes, with differences in the electrolytes. [55,81] Even though these catalysts are considered as the state of the art for WS, their high cost and scarcity create a huge obstacle to their large-scale application. Hence, the development of catalysts for HER and OER should be efficient, stable, cheap, operate at low overpotential, and based on earth-abundant elements. This is crucial to a suitable hydrogen generation through WS. In a view of that, an enormous amount of effort, both theoretical and experimental, has been put into the development of different catalysts based on earth-abundant transition metals, especially from the first transition row (e.g., Co, Fe, Ni, Cu, Mn, Cr, and Zn) as an attempt of substituting noble metals catalysts, making the WS an economically viable process for the hydrogen production. [54][55][56]59,75,76,[82][83][84][85][86] Jaramillo and co-workers [55] published a study comparing the main catalyst for both OER and HER, working in alkaline and acid medium. In this article, the authors present a benchmarking for these catalysts, showing how the activity of earth-abundant catalysts can be compared with the one from noble metals, demonstrating the main features that need to be improved for the development of suitable non-noble metal catalysts.
Even though a huge amount of research has been made in this area, the use of earth-abundant catalysts is still limited by their low activity and, most of the time, stability, which inhibit their applications for large-scale hydrogen production. [12,55,56] Based on the development of the WS technology so far, the cost per kg of hydrogen is in the range of USA$ 7.98-8.40 for electrolysis with an efficiency of up to 60% and is about USA$ 10.36 kg À1 for the photocatalysis, and the efficiency up to 12%, which makes the electrolysis a better option for the WS. Nonetheless, this system is still too expensive, and the costs need to be reduced to be competitive with the gray hydrogen (steam reforming, for example). [10,12,18,87] Nonetheless, it is important to highlight that the WS process can only be considered as a good and eco-friendly alternative for green H 2 production if the input energy in the electrolyzer is being supplied from RE sources; otherwise, this system would not produce a 100% green and clean hydrogen. [13,88] Pink hydrogen has been considered as an alternative with a low-carbon emission in the hydrogen production process, because the input energy comes from nuclear energy reactors. In these systems, nuclear electricity can be directly used in an electrolysis unit for hydrogen production. [89,90] Therefore, electrolyzers can be built inside or close to the nuclear plants, to facilitate an in-site H 2 production. [15,91] The hydrogen can then be transported in pipelines for other applications or be transferred to an FC for electricity generation, which would be further transferred into the grid. Nuclear electricity has zero-carbon emission, and because of this, some governments are trying to implement the use of this technology. However, the main drawback is that nuclear plants have a high risk of catastrophic consequences. In addition, nuclear waste is very dangerous, and its disposal and treatment are expensive, which can make its use not suitable. [6,15,[89][90][91] In the search for renewable and safer energy sources, wind is very important, being clean, renewable, and low cost. In addition, it can bring outstanding benefits for regions and countries with great wind conditions. [92,93] One of the great challenges for wind power utilization is that the wind-power plants are usually installed in locations far from regions with high electricity consumption, resulting in long-distance transmission and high-power losses. Thus, the exploration of local use of wind power has been considered, and the utilization for local green H 2 production using electrolyzers would be a great opportunity. [93][94][95] In this case, H 2 can be produced via a water electrolysis system, which its energy input is supplied by a wind turbine. If necessary and during periods with no wind, the power needed to produce the hydrogen could be withdrawn from the grid. The H 2 produced through electrolyzers using a combination of RE and non-RE is classified as yellow hydrogen. [92] Nevertheless, the greatest challenge for a wind-power utilization for water electrolysis is that, sometimes, the wind-to-power conversion is not efficient; thus, the power would not be enough to supply the electrolyzer's requirements. [93,95] To overcome this challenge, windpower farms are usually installed close to the ocean or in other regions with constant and high velocities of wind. In addition, other studies try to improve the wind-power systems to increase their efficiency; also, the electrolyzer can be adapted to work better with the power supplied by wind-power systems. [92][93][94][95][96] 4.2. Sunlight to Produce H 2 Solar energy is inexhaustible, clean, and the most abundant energy resource on Earth, providing in 1 h (4.3 Â 10 20 J) more energy than the global annual energy consumption, 4.1 Â 10 20 J. [97] Among the strategies to produce green hydrogen from water electrolysis using sunlight, two approaches have received great attention and will be discussed as follows. The first one is the direct conversion of solar energy to hydrogen in a photoelectrochemical (PEC) cell. In the latter, an electrolyzer is powered by a photovoltaic (PV) cell, and the systems can operate independently. [55,98] The first PEC cell was proposed by Fujishima and Honda in 1972 [99] and was composed of an n-type TiO 2 photoanode and a Pt cathode, as shown in Figure 4. As the energy of the incident radiation is higher than the bandgap of the semiconductor (TiO 2 ), electrons (e À ) are photoexcited to the conduction band (CB), leaving holes (h þ ) in the valence band (VB). The photogenerated electrons flow to the Pt electrode through an external circuit to promote HER. At the same time, water oxidation by the holes occurs on the semiconductor surface. The HER takes place when CB potential is more negative than 0 V versus normal hydrogen electrode (NHE) (H þ /H 2 redox potential), and the OER is accomplished if the VB potential of the photocatalyst is larger than 1.23 eV (at pH ¼ 0), the minimum Gibbs free energy requirement for WS (Equation (10)).
The efficiency of the PEC cell depends on the light-harvesting capability of the semiconductor; e.g., the large bandgap of TiO 2 (3.2 eV in anatase phase) restrains its application to absorption of UV light, which corresponds to only 4% of the solar spectrum. In this sense, the modification of the electronic band structure of the semiconductors by doping has been proposed to extend the light absorption to the visible light region. Furthermore, semiconductors with a narrow bandgap, such as WO 3 , BiVO 4 , Fe 2 O 3 , and CdS can be used as alternatives to TiO 2 . [100][101][102] Besides the photon absorption and exciton generation, the dynamics of the electrons and holes, including trapping, recombination, and interfacial transfer, can also affect the PEC performance of semiconductors, [103] and a strong dependency on the crystal structure, the presence of defects, size, and conductivity of the photocatalyst has been observed.
Furthermore, many efforts have been devoted to developing heterojunction systems, which consists of coupling two or more semiconductors and where electrons and holes can be spatially separated, minimizing the recombination. Several heterojunction configurations were reviewed and discussed in detail by Tang and co-workers. [104,105] A promising configuration, inspired on the Z-scheme of the photosynthetic system of green plants, could meet the requirements for the efficient hydrogen production from solar-driven WS. In this mimicking system (Figure 4), two photocatalysts with small bandgap can harvest a wide range of the solar spectrum. Considering that the OER and the HER take place in the isolated photoanode and photocathode, respectively, photocatalysts that are active for only a half-reaction can be used. Water oxidation and reduction co-catalysts (WOC and WRC) can be attached to the electrodes to improve the PEC performance. Moreover, in the Z-scheme, a redox mediator is used in transportation electrons, allowing an efficient charge separation, suppressing the e À /h þ recombination. [106] The strategies inspired in nature are challengeable but still are a promising alternative. Researchers from all over the world have worked hard in the past decades to improve efficiency and the costs of the hydrogen produced from a PEC, but this strategy is out of commercial applications.
In this regard, according to Grimm et al., [107] the PV-electrolysis system can be more competitive than the PEC devices. In a PV system, the electrolyzer's energy input is supplied by PV devices that are connected to each other. In a PV-electrolysis system, the solar panels capture solar light and transport the energy, usually via wires, to a separate electrolyzer. [107][108][109] These systems can be typically either directly coupled or connected via a converter. However, the modeling tool for PV electrolysis, regarding the integration and coupling of the subsystems as well as the modeling approach for the solar cell device, has a direct impact on the system efficiency toward H 2 production. [107,110,111] The greatest challenge faced by the development of these coupled devices is the achievement of high solarto-hydrogen (STH) efficiencies, due to a limitation to solar energy conversion. The improvement of STH efficiencies can be a significant driving force for reducing the H 2 generation cost. [55,98,112] Thus, some changes and improvements need to be done to the system to make it suitable. Using a multijunction solar cell with two electrolyzers in series, researchers found an effective way to minimize the excessive voltage generated by a multijunction solar cell, allowing greater utilization of the high-efficiency PV for WS, achieving an STH efficiency of over 30%. [98] Nonetheless, these prototypes still need to be improved and adapted in a way to reduce the cost of H 2 and make the use of electrolyzers commercially suitable. [98,[107][108][109][111][112][113] 5. Sources of Water
Hydrogen from Wastewater
The use of wastewater as a feedstock for the WS process can provide on-site treatment for water recycling and reuse, along with the production of hydrogen. These features offer great advantages regarding the use of water, because the resources are not equally distributed around the world, besides, due to the population growth, they are on the edge of an emerging crisis. According to the World Health Organization, around 2.2 billion people do not have safely managed drink water. In addition, 4.2 billion people do not have safely managed sanitation services, and 3 billion lack basic handwashing facilities. [114,115] The poor water quality exposes these people to several diseases, which may also result in death. [116] In face of this scenario, the use of wastewater is a great alternative and excellent opportunity for the obtention of clean water and energy storage (H 2 ). [117] Each year, about 310 km 3 (310 Â 10 12 L) of municipal wastewater is produced around the world, and part of this amount (around 70%) is treated by conventional methods and reused for different approaches. This great supply of wastewater can be used as an alternative feedstock for WS, which would present another way to treat this water and make it clean and being able to be reused. In addition, energy, in form of hydrogen, could be produced and stored, www.advancedsciencenews.com www.advenergysustres.com displaying an incredible opportunity, especially for communities where water is a scarce resource. [118][119][120][121] When wastewater is used as a feedstock, H 2 can be produced using microbial electrolysis cells (MECs) or wastewater electrolysis cells (WECs). MECs utilize microbes at the anode to convert biodegradable substrates, such as organic matter, into electrical current and protons (H þ ). The electrons are transferred to the cathode, so the protons can be reduced into hydrogen gas. [122,123] These cells are based on the use of microbes to degrade pollutants, making the MECs part of the microbiological pathway for hydrogen production. Nevertheless, there are metallic cells that can oxidize the organic matter present in wastewater, without using microbes, known as WEC.
The WEC works in a similar way to the MECs, and the organic and inorganic matter are oxidized in the anode at the same time that WOR takes place, producing electrons (electrical current) and H þ . Then, the electrons migrate to the cathode where hydrogen is produced by the reduction of protons. [124] In this system, usually powered by PV cells, the organic pollutants can be eliminated through a direct or indirect process. [125] During the H 2 O oxidation to O 2 , some intermediates are formed (reactive oxygen species [ROS]). These ROS can be used for the direct oxidation of contaminants and pollutants. ROS can also react with chloride existent in wastewater and produce reactive chlorine species (RCS) and chlorine radicals, which will lead to indirect oxidation of organic and inorganic matter. [119][120][121] As the aforementioned reactions are totally dependent on ROS formation, the anode composition is a determining factor for wastewater electrolysis and purification. [126,127] In addition, the wastewater matrix is very complex, and various side reactions that happen during the electrochemical process may interfere (being benefic or malefic) directly with the H 2 generation efficiency. In this context, the current and energy conversion efficiency for hydrogen generation is around 40-80% and 30-60%, respectively. [124] As aforementioned, the WEC can certainly have the potential of becoming the future technology for on-site wastewater treatment, coupled with water reuse and energy storage, in the form of H 2 . In addition, a scaled-up prototype could easily be installed in various environments, such as urban and rural areas, offering great opportunities especially for remote locations, where they face a lack of sanitation facilities, not being able to treat the local wastewater. Besides that, these cells can also be used to treat industrial wastewater and landfill leachate, also contributing to the development of alternative methods for decentralized H 2 production.
The use of wastewater to produce hydrogen also opens opportunities for producing hydrogen using microorganisms, commonly known as biohydrogen production, in which waste can act as the substrate, and will be discussed in detail in Section 5.1.2.
Biohydrogen
The utilization of biohydrogen has been attracting researchers' attention, mainly because it is a carbon-free emission route for hydrogen production. This route could be classified as a strategy to produce white hydrogen as discussed previously.
In addition, it allows the use of waste as a substrate, which also contributes to waste degradation and treatment, coupled with energy generation, in the form of molecular H 2 . Besides being eco-friendly and carbon-free, biohydrogen has the advantage of being able to use a wide range of substrates to produce hydrogen, from biomass, to different types of organic wastes, which increases the range of applications for biohydrogen production. [128][129][130] The fundamental basis of microbial H 2 production is that the microorganisms act as the catalysts for the reaction, forasmuch as they can use redox reactions to obtain hydrogen. In general, they use protons (H þ ) and electrons (e À ) that are generated in some internal enzyme's reaction, to combine and form H 2 , as it is shown in the following equation. [128,131] 4 H þ þ 4 e À ! 2 H 2 (12) The different processes of biohydrogen production will change in terms of electron donor types, redox potentials, the substrate type, and the microorganism that will be responsible for carrying out the overall processes. Hence, the biohydrogen production routes are separated into two different classes: fermentation, which can be dark or photo-process, and photosynthesis, which can be a direct or indirect pathway. These processes will be briefly discussed subsequently, and Table 1 summarizes the main advantage and disadvantage for each process. [128,[131][132][133] Direct Biophotolysis: Direct biophotolysis for biohydrogen production is based on the photosynthesis system, a complex redox process that can be accomplished during the metabolic cycle in green algae and plants cells. [128,134] In this process, a microbial photosynthesis mechanism uses solar energy to convert a water molecule into molecular hydrogen and oxygen, being a combination of biological and chemical processes. During the mechanism, photosystem I (PSI) and photosystem II (PSII) play an important part in the H 2 production process. PSII is responsible for splitting water molecules into two protons and oxygen, whereas PSI is involved in the reduction of CO 2 . [128,[135][136][137] Thus, hydrogen can be formed by the presence of hydrogenase or by CO 2 reduction by PSI. [136,137] Indirect Biophotolysis: Indirect biophotolysis was developed to overcome the hydrogenase enzyme sensibility to oxygen, and in this process, hydrogen can be produced by microalgae (green algae) and cyanobacteria from starch or glycogen. [138][139][140] During the mechanism, two main steps are involved; first, a carbohydrate is formed using light energy, and then, H 2 is produced from the synthesized carbohydrate through the cell's metabolism, operating under dark conditions. Contrary to the direct process, during the indirect biophotolysis, adenosine triphosphate (ATP) needs to be formed and is an important part of the H 2 production. [128,134,139] After using the available O 2 , the cells can undergo anaerobic condition, which can facilitate the hydrogenase enzyme functionalization and activity, because this enzyme is extremely oxygen sensible. This process is dependent on environmental factors, such as light intensity, carbon sources, and degree of anaerobiosis. [128,135,138] Although the use of direct and indirect biophotolysis is promising, it has a low hydrogen yield, and the complex photosynthesis system difficulties make some changes that would enhance efficiency. Therefore, fermentation systems have been receiving more attention, because they are simpler and present a higher yield of hydrogen.
Dark Fermentation: Dark fermentation is the fermentative conversion of organic substrates to produce biohydrogen, which takes place in anaerobic conditions and without the presence of light. In this process, obligate anaerobes and facultative organisms consume complex carbohydrates and a large number of organic acids as by-products, which will need to be further removed to increase the H 2 purity. [141][142][143] In addition, some of the by-products can be toxic, which might increase the cost of purification and treatment. Although it has a relatively high yield, the yield of H 2 per substrate consumed (Y (H2/S) ) is limited by metabolic constraints of dark fermentative microorganisms, following the theoretical limit, known as "Thauer limit." [128,129] Thus, these systems still present some drawbacks that need to be overcome to make it more suitable. [144] Photo-Fermentation: Photo-fermentation is a process in which a photosynthetic microorganism uses light (sun or artificial) and consumes reducing sugars and organic acids, producing hydrogen. [133,141,143] During the mechanism, the electrons from water molecules are used for a photochemical oxidation by PSII, and these electrons are utilized by [Fe]-hydrogenase in the direct biophotolysis method, leading to the photosynthetic hydrogen production. [128,133,141,143] The greatest advantage is that it can use a wide range of substrates, including organic acids, organic acid-rich wastewater, or organic acid-rich biomass. In addition, some factors such as intensity and wavelength of light influence directly in the biohydrogen production in this system. [128,134,135,140,141] The overall reaction involved in the process, along with the advantages and disadvantages of this process, is shown in Table 1.
When comparing the biohydrogen production methods, fermentation methods are considered the better ones, due to their higher hydrogen yields and because they can act with a wider range of substrates, which makes them more suitable. The overall efficiency and hydrogen yields are directly related to the substrates and microorganisms that are being used; thus, research focuses on the search of the most suitable microorganism and substrate for each process. In terms of costs, they are also related to the feedstock and organisms; therefore, they can vary, and it is hard to predict an exact cost for each method. Nonetheless, biohydrogen production methods have great attention and great perspectives for the future of green hydrogen production, especially for its production coupled with waste treatment. [128,133,143]
Hydrogen from Seawater
The use of seawater as the WS feedstock for hydrogen production can also bring incredible advantages for different communities around the globe. As it is common knowledge, 70% of the Earth's surface is covered by water; thus, it is possible to state that water is the most abundant natural resource on the planet. In addition, considering the water reserves, the oceans represent 96,5% of them, containing approximately a total of 1.35 10 21 L of seawater with a fairly homogeneous geographic distribution. [145,146] The use of seawater instead of freshwater can facilitate the implementation of PV-powered electrolyzers in some remote and arid areas where this resource is scarce or its use for energy production (WS) would bring some harm to the local reserve. [147] The world's arid desert regions are mainly located in the Middle East, South Africa, the west coast of the Americas, Australia, and the west of China, with a large area of it being located near ocean coastlines. All in all, Coastal Arid zones would present a great opportunity for the H 2 production from seawater, forasmuch as these regions have limited access to freshwater yet plenty access to seawater. [146,147] In addition, these areas have a high incidence of solar light most of the time throughout the year, which can favor the use of PV-powered water electrolysis systems and FCs, as this will not only provide a way of producing and storing energy (in the form of H 2 ), but will also allow the obtention of fresh drinking water from seawater. [98,147] Seawater follows the same principles and reactions as when fresh water is used (reactions (9)-(11)). However, the huge amount of dissolved ions can be considered as the biggest drawback, considering that they can affect the catalytic system by decreasing its efficiency or causing some kind of degeneration to the electrodes. [56,146,147] Seawater is composed mainly by Na þ , representing almost 42% of the total amount (0.486 mol kg/H 2 O) and Cl À , which represents 49% of the total composition (0.565 mol gg/H 2 O). Even though the other ions can also interfere and compete with the WOR, they are present in a such small amount that their effect can be neglected. [148] However, it is important to highlight that this is a general composition, and it can change in different locations.
The biggest challenge for splitting the seawater is that the presence of the ion chloride (Cl À ), in acid conditions, can lead to the anodic chlorine evolution reaction (ClER), which competes with the OER (water oxidation) and produces undesired side products, such as molecular chlorine or chlorinated oxidants, as it is shown in the following equation. [149] 2 Cl À ! Cl 2 þ 2 e À V ¼ 1.36V versus RHE (13) Even though, from a thermodynamical point of view, OER is favored over ClER, the chorine evolution is a simpler twoelectron reaction, involving only one intermediate. Thus, ClER has faster kinetics and can take place at lower overpotentials, which can make them the major anodic reaction in acid conditions. [146,147] In this case, OER was found to be dominant only at current densities below 1 mA cm À2 or at very high current densities where ClER currents reach the mass transfer limitation. [149,150] When working in alkaline conditions, the hypochlorite formation, as shown in Equation (10), should be considered.
The hypochlorite formation is also a simple reaction involving two electrons, thus, kinetically favored over OER, even though OER is thermodynamically favored. In addition, the electrode potential for hypochlorite formation is pH-dependent, following the OER potential, as shown in the Pourbaix diagram ( Figure 5). [147] As both standard potentials are parallel in the pH range from 7.5 to 14, it is possible to obtain a standard potential difference (E ClER À E OER ) of 480 mV. Therefore, it can be stated that OER is favored at higher pH (≥7.5), provided that the overpotential required is at values lower than 480 mV, when hypochlorite formation is thermodynamically not allowed, and no other side reaction competes with the water oxidation. [150] A similar standard potential difference cannot be obtained for OER and ClER, because this potential difference is considerably smaller, making it more difficult to achieve higher current densities at an overpotential where ClER is thermodynamically not allowed. Hence, carrying out seawater electrolysis in alkaline conditions presents more advantages. [147,151] Besides operating at a high pH, different approaches can be made to improve seawater electrolysis, such as the design and development of catalysts with active sites that favor the adsorption of OER intermediates, making them more selective. Furthermore, as an attempt of overcoming the thermodynamic overpotential limitations, Cl À blocking layers can be added alongside the OER catalyst to avoid the diffusion of Cl À ions from the electrolyte toward the anodic catalyst, which would improve the surface selectivity regarding OER. [152] Furthermore, the presence of Cl À can bring corrosion problems, even in alkaline conditions; also, some insoluble precipitates can be formed on the surface of the electrodes, poisoning both OER and HER. [153] Faced with the aforementioned challenges, seawater electrolysis requires a catalyst, for both anode and cathode, that is highly selective for the OER and HER reactions, and also resistant to corrosion and other degradations that can result from the ions composition. Thus, the catalyst design is fundamental for implementing a state-of-the-art seawater electrolysis technology. [151][152][153] For OER in alkaline pH, these catalysts are required to operate at an overpotential lower than 480 mV and at the current densities of at least 10 mA cm À2 to be considered as good candidates for commercial applications. [147] However, recent works have been shown difficulties to achieve such high current densities at such low potentials especially in pH near to neutral (pH ¼ 7 to 8) for seawater. [152] Co-based catalysts have great research attention, being able to operate at high current densities and with high selectivity, even at overpotentials higher than 480 mV, where just a small amount of the total current is used to ClER À . [154,155] Nonetheless, a lot of work is still required, so a better understanding regarding OER selectivity can be achieved and, thus, enhanced. [152] Aiming to block Cl À ions approximation, MnO x protection overlayers have been studied and presented themselves as an excellent opportunity. The studies showed that MnO x was not involved in the OER mechanism, but acted as a Cl À diffusion barrier, while remaining permeable to water so the OER could take place at the actual catalyst coated over the anode. [152,156] Regarding the HER, the main challenge is not related to selectivity and faradaic efficiency; instead, they are related to species www.advancedsciencenews.com www.advenergysustres.com that are presented in seawater composition that can poison the active sites, by blocking them. In addition, they can degrade and corrode the cathode catalyst. Therefore, studies are more focused on ways to increase the catalyst stability against corrosion and degradation and also on ways to create barriers that will avoid the deactivation on the catalyst surface. [153,157] Pt is known as the state-of-the-art catalyst for both alkaline and acid conditions; however, its high cost is the biggest drawback for its use, and new non-noble-based catalysts have been studied. Provided that, Ni or Ni-based metal alloys are usually used as HER catalysts due to their high performance and good stability. [55] The use PEMEC could also bring great advantages, because the membranes could also work as a filtration barrier, protecting the cathode against deactivation. However, its configuration provides a minimal overpotential window that it is too good for the OER operation conditions aiming to avoid the ClER. Therefore, the study of catalysts that exhibit outstand performance at a neutral to alkaline media can bring excellent opportunities for seawater splitting. [152] The efforts are focused on the mixing of Pt with a different metal (usually earth-abundant), aiming to maintain or increase their activity, but lowering the cost at the same time. [146,150,152,158] Besides that, different earth-abundant-based catalysts have been studied, mainly involving transition metals of the first row. These metals can form complexes with different structures, ligands, and inorganic ions, making them very versatile, cheap, and easy to synthetize. Furthermore, they have been reported to achieve high current densities for seawater HER at low overpotentials, with high faradaic efficiencies for hydrogen production, which present them as a great opportunity for future large-scale seawater electrolysis cells. [146,150,153,157,159] Another outstanding opportunity that has been extensively studied is the use of bifunctional catalysts, that can effectively operate as both the anode and cathode catalyst, making the cell design and construction easier and cheaper, with great operation conditions. [85,146,150,153,160] Although much effort is still required for the development of economically viable seawater electrolysis technology, all those foregoing mentioned catalysts have the potential of contributing to the development of robust and active catalysts that can use seawater as a feedstock for large-scale hydrogen production. In addition, using seawater electrolysis, some remote and arid areas that do not have access to fresh water can benefit from this feature, because this process can not only produce and store chemical energy but also presents the opportunity of obtaining fresh water directly from the ocean. Moreover, seawater is the most abundant natural resource on the planet, so its use presents remarkable energy opportunities for the near future.
Hydrogen Outside of the Earth
Although energy technologies still need to evolve and get more mature, it is important to reflect on some perspectives for the future of energy, considering matters like where it can be used the most in 100 years from now and what will be the energy demand then. All things considered, we dare to assume that the world and energy as we know will not be the same, because life as we know is doomed to go through an imminent transformation that will change the planet in ways that cannot even be imagined. One of the biggest transformations that are expected for the near future is an increase in space exploration in different aspects, such as low-Earth orbit (LEO), high-Earth orbit (HEO), near-Earth asteroids (NEAs), Moon, Mars, and deep space missions. [161][162][163] However, one of the biggest challenges of these space missions is the systems for energy generation and storage technologies. Most of the limitations that are faced can be related to the systems durability, caused by low chemical reaction kinetics and efficiency, materials mechanical strength, environmental issues, and by the operating mode. [161,163] Consequently, the development and expansion of new materials and technologies that can provide better energy generation and storage in space may bring enormous benefits for most of the space exploration goals, contributing to spacecraft, launch vehicles, landers, rovers, spacesuits, tools, habitats, communication networks, and basically anything that requires power and energy. Scientists argue that a breakthrough in power generation or energy storage can enable new space missions, bringing a rapid advance in the scientific understanding of outer planets and deep space. [161,164] Solar energy converted into electric energy through PV panels is undoubtedly the most important system of energy in space. However, another important alternative outside of the Earth is the FCs, which consists of an electrochemical cell that can convert the chemical energy of hydrogen and oxygen, directly into electricity, with high efficiency, durability, low cost, and with water as a side product. [165,166] Both here and in space, the FCs can be used for stationary applications, in distributed power generation facilities, on both small and large scales. In addition, it can also be used for transportation vehicles, from personal motorcycles and small cars to buses, airplanes, and some spacecraft, creating the hydrogen fuel cell (FC) electric vehicle (HFCEV). [167][168][169] These cells are designed to take the place of conventional internal combustion engine vehicles, being the main power source. [168] Regardless of the FCs drawbacks, related to power density and power response, they present great advantages and perspectives for future applications in space. [170] In addition, electrolysis also produces O 2 , which can be used for life support applications, helping to renew the oxygen for breathing supply in spacecraft and International Space Station (ISS). [162] One of the great advantages of electrolysis and FCs utilization is that they can easily be adapted to scale and be used for in situ resource utilization (ISRU), where the cells are adapted into landers, rovers, spacesuits, and robots, producing electricity directly into them during the missions. For this reason, it is worth mentioning important that the systems have great durability and operate at high efficiency throughout the whole mission, which sometimes can take from days to years. [161,162] In this context, PV cells can also be used as a power source for electrolysis, because sunlight incidence in space is enhanced. In addition, the sunlight energy capitation needs to be carefully studied and designed, because it changes depending on the Sun's distance and position. [161,162] Some agencies are already developing some regenerative FCs systems prototypes, [171] that consists of a closed-loop system where water electrolysis takes place in a solar powered cell. Hydrogen and oxygen are then stored and fed later into a coupled FC, so electricity and heat can be produced. The water can be recycled and used again. [172] One of the most recent www.advancedsciencenews.com www.advenergysustres.com prototypes has been developed by the European Space Agency (ESA) and is planned to be used during the HERACLES, a European-led robotic mission to the Moon expected to happen in 2026.
As electrolysis uses only water as the feedstock, the use of toilet wastewater could also be an astonishing possibility for space applications. Using it, astronauts would be able to use their own toilet effluents as the energy generation source, which could also further be disinfected and used for drinking purposes. [121] Space stations already have systems that can turn urine and other effluents into fresh drinkable water; still, putting it together with an electrolysis cell may bring great future advantages in the energy field. [173] Thus, it can be stated that using electrolysis cells in space is a very promising opportunity; however, the understanding of how to make this energy conversion and storage systems better calls for a deep study on the materials that may be used on it. Some specific properties such as the material size, weight, and costs are important and have a direct impact on the utilization viability, principally when space applications are being considered. Because of this, a meticulous material engineering is crucial. [161,163] Provided the aforementioned requirements, the use of 3D printing technologies can bring outstanding advantages for both electrolysis and FC design, contributing to cheap, versatile, and robust materials. [174] The main advantages for the use of 3D printing are the ability of fast prototyping, waste management, and generation of low-cost products. In addition, its use for space applications is extremely promising, because the parts can be printed and easily assembled directly in space, avoiding the costs of transporting materials from the Earth. [175,176] According to Leach, it is estimated that the cost for transporting one ordinary brick to the moon can cost around two million dollars, which is unviable; however, as it was mentioned, 3D printing can overcome this. [176] It is also possible to design and produce electrodes that can work both as the cathode and the anode at a very low price, which could be extremely beneficial to regenerative FC design, obtaining electrodes with a great efficiency toward O 2 and H 2 and an easy fabrication. [85,86,177] As H 2 storage and transportation can be dangerous, technologies involving the storage of H 2 in the form of ammonia (NH 3 ) can bring benefits to space exploration and mainly to the chain of hydrogen on the Earth. [178][179][180] NH 3 can be easily and safely transported, and then, by a simple decomposition process, using heat and a catalyst can release N 2 and H 2 (mainly), spontaneously. Furthermore, there are some FCs that have been studied in which NH 3 can be directly used and converted into power, which can be promising as well. [178,181,182] Even though the perspectives presented here still require massive research for the development of state-of-the-art technologies, the use of hydrogen energy in space is truly promising and has the potential of becoming the major energy source in the future, because the advantages presented by it are outstanding and can bring a rapid advance in the space exploration, providing a better scientific understanding of outer planets and deep space. The development of energetic strategies for space exploration requires high technology approach, which is also very sustainable, being able to inspire us here on the Earth to solve our problems.
Hydrogen Value Chain
The hydrogen chain ( Figure 6) is complex and requires a detailed analysis. Hydrogen production processes might be a starting point for understanding the peculiarities and costs for the consumers. The feedstocks for H 2 production change all processes and influence the prices and chain sustainability. FFs are the most used feedstocks in the reforming processes, and this strategy represents about 95% of the H 2 produced in the world. An important point to be featured is the implementation of a complicated step of carbon capture, utilization, and storage (CCUS), which can decrease the carbon emission; however, it increases the price. The electrolysis, on the other hand, uses water as feedstock and has zero-carbon emission, but depends on the energy input. Considering the use of water and RE input, the ideal scenario can be created. Therefore, the costs and the complex processes of WS must be considered before being www.advancedsciencenews.com www.advenergysustres.com widely used or applied for a detailed plan in a sustainable future. Any choice has consequences, benefits, and challenges. The purification of H 2 is an important point, because its use depends on the purity level, and as a consequence, the prices are different. Reforming processes produce H 2 with purity ranging from 87% to 94%, whereas electrolysis delivers H 2 with purity superior to 99.9%. The purity requirement of the H 2 for FC vehicles is grade 4.
Storing processes is also a challenge for the H 2 economy, because it has a low volumetric energy density, which is a limitation. H 2 can be stored in high-pressure vessels, in liquid form (high pressure and low temperature), adsorbed in high porous materials, or in liquid form as ammonia. Each strategy has an intrinsic cost and requires a specific transport. Local production can decrease transportation costs, but it requires investments for proper infrastructure. Logics costs for H 2 transportation through road, rail, or maritime depend on the distance, suitable storage tanks, security, and laws. In addition, the key point in the hydrogen chain is the diversification of the industrial portfolio. Nowadays, H 2 is mostly used in the petrochemical sector and agribusiness. The use of H 2 as a clean fuel is still in early stages, with plenty of possibilities to explore and elucidate. The better understanding about the theme could bring new prospects in fine chemicals, steel, metallurgy, semiconductors, and other industries.
All aspects that have been discussed are relevant, but safety must be the key point for social recognition about the H 2 importance. Both an essential feedstock for a sustainable industry and a safe product with a low risk of accidents are crucial.
Another important point to understand is how the chain, the costs, and the market are connected. The discussion about the hydrogen chain was previously made, and one of the biggest challenges of the companies is to estimate the final costs and sales price of the hydrogen. First of all, the costs of any product depend on countless factors and can be different according to the strategy, vision, and resources chosen by the companies. In general, the costs could be divided into Capex (capital expenditure) or Opex (operational expenditure), but the focus here is to call the attention of scientists and engineers to how difficult it is to calculate the price of hydrogen. No particular formula will be shown, but we will focus on the items that can be included in the H 2 production cost calculation.
In terms of capital expense, some main elements must be considered, such as land, industrial machines, buildings, and system to storage. Operational costs must include some items as the process (feedstock, catalysts, purification, CCSU, and efficiency), industrial maintenance, energy input, transportation, legislation, safety, insurance, and so on. Other aspects to be highlighted are the market that uses the H 2 (each market has a specific added value associated with the product), the scalability (the industrial scale changes all expenses), and the profit required (if it is considered the price to consumer). Each topic mentioned earlier could be described in detail and generates a great discussion. The sum, the particularity, the specific details, and the mathematical weighting of these items will generate the final price of hydrogen. The costs of H 2 , according to the process of production, are presented in Table 2. The idea of this table is to exemplify the prices and compare the production, efficiency, and cleanliness. Because of this, some fluctuation of values can be found.
Looking at 2050: Green Hydrogen and Zero-Emission Carbon
According to the International Energy Agency (IEA) in the document called "Net Zero by 2050," published in May 2021, [183] humanity must reduce the global carbon emissions to net-zero by 2050. It is worth mentioning that the 26th Conference of the Parties (COP26) of the United Nations Framework Convention on Climate Change in November 2020 was an important moment for improving the global goals and action on climate based on the 2015 Paris Agreement. These efforts were defined to limit the long-term increase average of the global temperatures up to 1.5 C. Some important characteristics are foreseen to 2050, as shown in Figure 7.
Strategic reports of specialized companies in the market for H 2 have reported an estimation that the price of green H 2 will decrease, and it will be slightly lower than gray hydrogen prices up to 2050. Blue H 2 will also have its price reduced in the next 30 years, but the drop will not be as sharp as the green H 2 . In addition, by 2050, the H 2 production will be more than five times the amount produced nowadays, and 15% of the production will be blue H 2 % and 85% will be green H 2 . Finally, the markets are betting on investments amounts around USD 15 trillion across the H 2 chain over the next 30 years (see Figure 7). [17,[183][184][185][186] [4,29,190,191] Auto-reforming Methane Heat 60-75 1. 48-1.70 No clean with emission [4,29,190,191] [4,184,192,193] Photocatalysis Water Solar 2-18 8. 43-10.36 Clean with no emission [4,89,107,191] www.advancedsciencenews.com www.advenergysustres.com
Conclusion
Hydrogen is a raw material essential for the petrochemical industry, for ammonia synthesis, and for a clean source of energy. However, the environmental benefits depend on the way of H 2 production. It is the same situation for electric cars, and if electric energy source comes from FFs, the environmental impact of electrification is drastically reduced. As previously described, the current industrial processes to produce H 2 from FFs release into the atmosphere a huge amount of CO 2 , eliminating any positive contribution to the environment. The understanding of these processes can inspire us to reach cleaner alternatives.
The challenge for the next 30 years is to replace FFs by a clean source of H 2 , such as water. Nonetheless, although the process of producing H 2 from water is totally clean with no emission, being able to consider it as green hydrogen, the production from methane, for example, is approximately four times cheaper. Given this scenario, a tremendous effort must be done by governments, companies, and scientists if our society wishes to optimize H 2 production via WS technology, making it competitive with the current energetic matrix.
Water is essential to our life, and for this reason, alternative sources of water must be studied to avoid competition between drinking water and H 2 . Thus, water unfit for human consumption, such as wastewater and seawater, can be interesting sources to produce clean energy with social responsibility. In addition, there are uncountable challenges that must be addressed to expand the source of water for WS. The major limitation found in WS processes is the overpotential and low kinetics of reactions in the cathodic processes and especially in anodic ones. Thus, low-cost, efficient, and stable catalysts must be researched. These catalysts should be composed of Earth-abundant elements and present high performance even operating under mild conditions.
Another aspect to be considered is the decentralization of the H 2 production processes. The concept of small plants can introduce the production of H 2 on-site and minimize the logistic costs and environmental impact. Furthermore, the use of 3D printing technology opens a plethora of possibilities.
Therefore, this review expounded the main challenges faced by the production of a cleaner and green H 2 , and we believe that this can help to bring a reflection on the next steps toward a green hydrogen economy implementation based on H 2 production from water sources. | 16,357.6 | 2021-06-02T00:00:00.000 | [
"Physics"
] |
Utilization of Marine Waste to Obtain β-Chitin Nanofibers and Films from Giant Humboldt Squid Dosidicus gigas
β-chitin was isolated from marine waste, giant Humboldt squid Dosidicus gigas, and further converted to nanofibers by use of a collider machine under acidic conditions (pH 3). The FTIR, TGA, and NMR analysis confirmed the efficient extraction of β-chitin. The SEM, TEM, and XRD characterization results verified that β-chitin crystalline structure were maintained after mechanical treatment. The mean particle size of β-chitin nanofibers was in the range between 10 and 15 nm, according to the TEM analysis. In addition, the β-chitin nanofibers were converted into films by the simple solvent-casting and drying process at 60 °C. The obtained films had high lightness, which was evidenced by the CIELAB color test. Moreover, the films showed the medium swelling degree (250–290%) in aqueous solutions of different pH and good mechanical resistance in the range between 4 and 17 MPa, depending on film thickness. The results obtained in this work show that marine waste can be efficiently converted to biomaterial by use of mild extractive conditions and simple mechanical treatment, offering great potential for the future development of sustainable multifunctional materials for various industrial applications such as food packaging, agriculture, and/or wound dressing.
Introduction
In Chile, near 46,000 ton/month of Humboldt squids (Dosidicus gigas Wild) are caught for human consumption. According to the annual fishing statistics reports from 2019, the Chilean fishing quota of the giant squid of Humboldt is 200,000. During its industrial processing, the squid pen is generated as a by-product, which is currently used as a fish flour extender, which presents low value-added processing. Hence, there is an urgent need for the development of new strategies to convert marine waste into value-added materials from an economical point of view. One of the approaches is the extraction of components from waste and by-products of the fishery industry and their conversion to value-added materials [1][2][3]. Particular attention is given to squid pens because they are a valuable source of β-chitin and protein.
Generally, chitin or poly (β-(1→4)-N-acetyl-D-glucosamine) is a natural polysaccharide derived from large numbers of living organisms [4][5][6]. It is the second most abundant natural polymer, next to cellulose. In the native state, chitin occurs as ordered crystalline microfibrils, which are structural components in the exoskeleton of arthropods or fungal cell walls. Depending on its source, chitin exists as two crystalline allomorphs, namely in α-and β-form. The α-chitin isomorph is by far the most abundant because it can be found in the exoskeletons of crustaceans and cell walls of fungi and yeasts. On the other hand, β-chitin is rare in nature and is found in association with proteins in squid pens and the tubes of some worms [7,8]. The α-chitin was mainly studied due to vast abundance; however, this allomorph is highly crystalline and insoluble in aqueous or common organic solvents, limiting its application. In addition, when it comes to the extraction process of α-chitin from crustaceans, the general procedure requires a first demineralization process using a dilute mineral acid for several hours to separate calcium carbonate. Then, a deproteinization under harsh conditions (high temperature and longtime of treatment) is required to extract proteins. The amount of extracted chitin at these conditions is approximately 20%. On the other hand, the extraction of β-chitin from squid pens does not require harsh demineralization conditions since squid pens contain only up to 5% of minerals. The yield of β-chitin extracted from squid pen is usually around 50% [9]. It is known that β-chitin is less crystalline allomorph and more soluble in organic solvents, making this type of chitin more reactive. Moreover, due to parallel chain arrangements, the β-chitin-based material possesses higher mechanical stability than α-chitin based material [10]. Hence, squid pen presents a beneficial source of chitin from the environmental and economical point of views, where the extraction process can be performed at mild conditions, with reduced chemical and energy consumption, and higher reactivity of obtained biopolymer, in comparison to the α-chitin obtained from crustaceans.
The present work aims to obtain and characterize β-chitin microfibers (MF) and nanofibers (NF) from squid pen Dosidicus gigas. Up to date, there are several studies in the literature for the extraction of β-chitin from different species of squid pens [11][12][13]. However, the conversion of β-chitin microfibrils into β-chitin nanofibrils and their detailed characterization is a new topic, and only a few papers in the literature can be found. Suenaga et al. studied the Star Burst system (wet pulverization technique under high pressure) to obtain β-chitin nanofibers in distilled water and under acidic conditions [7,14]. Ifuku et al. also used the Star Burst system to obtain chitin nanofibers [15]. On the other hand, numerous methods were used to obtain α-chitin nanofibers such as ultrasonication [16], grinding [17], or dynamic high-pressure homogenization under acidic conditions [18]. This work will present a new and simple method to process β-chitin nanofibers at a bench-scale of low energy and acid consumption. Namely, β-chitin microfibers were dispersed in an acidic aqueous solution (pH 3) and converted into nanofibers by pass-through a collision machine. SEM, TEM, and XRD characterized the obtained nanofibers. It is important to underline that obtained β-chitin nanofibers can be directly shaped into transparent films by a simple drying method in the oven. Hence, chitin nanofibers-based films were also characterized along with the raw material and nanofibers by different physicochemical and microscopic techniques. These chitin nanofibers can find applications in food, cosmetics, biomaterials, agriculture, electronics, adhesive, and biomaterials areas [19][20][21][22][23][24].
The extraction of β-chitin from squid pen was performed by demineralization in HCl (1 M) and deproteinization in NaOH (1 M). The yield of extracted chitin was 39.4%. Other authors reported similar yields for the extraction of β-chitin from squid pens (35-42%) [12,13]. It can be noticed that inorganic compounds are significantly removed (<1%) during extraction of β-chitin, which is evidenced by reduced ash content. Regarding the DA of obtained β-chitin, it was higher than reported from Loligo Chenisis squid pen (80.3%) [12], but lower than reported for Loligo vulgaris (100%) [27].
Molecular weight determination of chitin samples is quite complex because it requires sophisticated analytical capabilities. Hence, simple indirect methods to compare the macromolecules' molecular weight are used. Among these, viscosity methods are very cheap and useful. The reduced viscosity parameter is measured in a dilute solution of macromolecules and can provide information on its shape, flexibility, and (for no spherical particles) molar mass of macromolecules. It can be used as a comparison value because of its concentration-dependent nature. The obtained η red value for the β-chitin sample was high (2648 mg/mL), suggesting that this macromolecule can have a high molecular weight, and the employed extraction method does not lead to an extensive chain degradation. This result agrees with previously reported values of reduced viscosity for other crustacean and squid pen β-chitin [28].
13 C CP/MAS Solid-State NMR Analysis
The degree of deacetylation of isolated β-chitin was determined by 13 C CP/MAS solid-state equipment and spectrum is presented in Figure 1. Seven signals were detected that were ascribed to the eight carbon atoms of the N-acethylglucosamine repetitive unit, which appear at the following chemical shifts: δ = 173.3 ppm (C=O), 104.5 ppm (C-1), 85.2 ppm (C-4), 75.7 ppm (C-3 and C-5), 59.9 ppm (C-6), 55.8 ppm (C-2), and 23.2 ppm (CH 3 ). The C=O signal appears as a sharp and symmetric profile. The C-3 and C-5 signals merge into a single resonance centered at 75.6 ppm, which is a characteristic pattern for β-chitin. All of these signals are typical for β-chitin and confirm its conformational state. The relative intensities of the resonance of the ring carbon (IC1, IC2, IC3, IC4, IC5, IC6) and CH 3 group (I CH3 ) were compared in order to calculate the degree of acetylation (DA) value. It was found that the DA of β-chitin isolated from D. gigas was 96.4%. Generally, DA of chitin obtained from different species of squid pen varies from 80 to 98%, depending on the different extraction parameters: presence or absence of demineralization process, time, and temperature of operated demineralization and deproteinization process as well as the concentration of solutions used for these processes [11,24,25,29,30]. The high content of DA obtained in this work indicates that mild extraction conditions for the extraction of chitin are used, thus preserving the native structure of β-chitin.
FTIR Analysis
The FTIR spectra of squid pen and isolated β-chitin are presented in Figure 2. The main difference between spectra (e.g., the bands intensity, band shifting, and some signals overlapping) is due to the raw material protein contents, as previously discussed (see Section 2.1). Hence, the main focus will be given to the discussion of the β-chitin spectra. The β-chitin FTIR spectra patterns are similar to those reported in the literature [7], suggesting that good chitin quality had been obtained. An exhaustive examination of the spectrum revealed that the spectra had the characteristic bands of polysaccharides. Namely, the wide area in the region between 3600 and 3000 cm −1 was detected: bands at 3472, 3326, and 3101 cm −1 , which correspond to the stretching vibration of -OH and -NH groups (νOH, ν as NH, and ν s NH), respectively. The first band belongs to -OH groups involved in hydrogen bonds (O-6-H ···O=C and O-3-H·· ·O-5). The other two bands are due to C=O ·H-N intermolecular hydrogen bonding and H bonded -NH groups, respectively. The chitin characteristic bands related to -CH groups stretching vibration appears at 2962 cm −1 (ν as CH3 ), 2929 cm −1 (ν s CH2 ), and 2880 cm −1 (ν as CH3 ). Moreover, the spectra also feature characteristic bands at 1627, 1548, and 1313 cm −1 that are describing the vibrations of Amide I (primarily C=O stretch), Amide II (bending vibration of N-H), and Amide III (ν C-N + δ NH ) groups, respectively. The presence of one single peak in the region between 1600 and 1670 cm −1 confirms the presence of β-chitin. It is known that the carbonyl oxygen of the acetamide group formed intermolecular hydrogen bonding between the primary -OH and -NH 2 groups, forming a two-dimensional hydrogen bonding network in a plane perpendicular to the pyranosyl plane. The spectra of β-chitin also revealed two additional bands of -CH group deformations around 1425 (δ CH2 ) and 1375 cm −1 (δ CH + δ C-CH3 ), and a greater number of narrower bands in the region between 1200 and 1032 cm −1 , related to C-O-C and C-O stretching vibrations. Another characteristic marker is the CH deformation of the β-glycosidic bond. This band appears in β-chitin at 895 cm −1 (γ CH , C1 axial of β-linkage). Therefore, FTIR analysis confirmed that the isolated biopolymer is β-chitin. This result is in agreement with the results obtained by 13 C CP/MAS NMR analysis, where the crystalline structure of identified chitin was proven.
Thermogravimetric Analysis
Thermograms of squid pen and isolated β-chitin are shown in Figure 3. The thermal analyses of squid pen and isolated β-chitin were undertaken in the interval from 20 to 550 • C. The thermogravimetric (TG) and its derivate (DTG) curves are presented in Figure 4. The squid pen's overall decomposition process (66% weight loss) consists of three degradation steps, which are reflected as three peaks in the DTG curve. The first degradation stage (25-126 • C), with the maximum degradation at 46 • C is due to the water evaporation in the squid pen, and it accounted for a weight loss of 23%. In the range of 126-264 • C, the second weight loss can be attributed to the depolymerization and degradation of the squid pen proteins that wrap chitin fibers. This stage is the main difference between squid pen and isolated β-chitin thermograms. A third stage between 264 and 550 • C and maximum degradation at 294 • C is caused by the degradation, pyrolysis, vaporization, and elimination of the volatile products of remaining proteins and the chitin chain [31]. On the other hand, the thermogram corresponding to the decomposition of β-chitin showed two-stage degradation step mechanisms. In the first stage, 3.6% of weight loss occurred due to the water evaporation, followed by a second degradation step with the maximum decomposition temperature at 325 • C, which describes the polymer chain degradation. In this stage, the associated mass loss was 59.4%. This is a complex process where simultaneous monomer dehydration, chain scission, and thermal decomposition of glucosamine and N-acetylated units occurs [4]. Consequently, chain packing is loosened. Hence, a relatively small amount of heat is required for its degradation [32]. This result agrees with those found during thermal degradation of β-chitin isolated from Loligo vulgaris gladii [33]. Finally, the results are summarized in Table 2. Table 2. Thermal analysis results from squid pen and β-chitin decomposition process.
Sample
Temperature (
SEM Analysis
The surface structure of squid pen and isolated β-chitin were studied, and their morphology is presented in Figure 4. The SEM micrographs showed that the squid gladius blade region had a rough and fibrous surface morphology ( Figure 4A,B). The micrographs from the cross-section view of the same material ( Figure 4C,D) showed that the squid gladius blade was composed of large microfiber (5-10 µm diameter) aggregates. Similar fibrous morphology was observed in the gladius from other squid species [33]. It is known that proteins and β-chitin chemically form squid gladius [34]. Moreover, the squid gladius's hierarchical structure was reproduced at the nanoscale and it was found that β-chitin nanocrystallites were wrapped in a protein layer to form nanofibrils. These were the building blocks of 200 nm sized nanofibers. The nanofibers aggregated into 2 µm, 10 µm, 100 µm, and 500 µm thick fibers, respectively, which eventually formed the gladius.
In the case of β-chitin surface analysis ( Figure 4E), a highly fibrous structure was observed. This could be associated with the preparation methodology, which allowed for the removal of the protein that surrounded the chitin fibers by a smooth alkaline treatment. Such morphology has been associated with the chain arrangement in β-chitin crystals that are packed in a parallel manner. In this polymorphic structure, the carbonyl oxygen of the -NHCOCH 3 group is involved in intermolecular hydrogen bonding between the primary -OH and -NH groups, forming a two-dimensional hydrogen bond network in a plane perpendicular to the chitin pyranosyl plane [35].
As a result of the fibrillation process, the macromolecular fiber entanglement is separated into individual fibers. Then, a conversion from a low-viscosity chitin suspension to a high-viscosity gel network formation occurs. Viscosimetry is a cheap and widespread technique used to measure the polysaccharides solution viscosity at an industrial scale. Hence, it can be used to follow the progress of chitin defibrillation in acidic aqueous media. Results from viscosity measurements of nanofibrillated chitin gels obtained after different cycle numbers (pass through collision machine) are presented in Figure 5. It can be observed that the 1 wt% chitin suspension viscosity increased along with the cycle number from 10 to 60. A white gel with good stability at low temperatures (4 • C) can be obtained this way. It is known that a high aspect ratio (length/diameter) or longer and thinner chitin nanofibers promotes the entanglement possibility within nanofibrils, improving the shear viscosity of the suspension.
TEM Analysis
TEM analysis was performed to study the effect of the fibrillation process on the βchitin hierarchical structure. For this purpose, dilutions of the fibrillated gels obtained after different fibrillation cycles were studied. Results are presented in Figure 6. All samples were analyzed at a gel concentration of 0.01 wt% due to the high concentration of originally obtained nanofibers. It can be observed that after 20 cycles (collision time), there was still the presence of microfibrils in the suspension gel. The width (diameter) of β-chitin nanofiber was in the range of 100-250 nm and it did not disperse uniformly. Thicker fibers in the range of 600 nm could also be detected. Conversely, from 40 cycles onward, the processing of nanofibers was improved, and the decrease in nanofiber diameter within an increase of cycles through collision equipment from 40 to 60 was noted. The obtained β-chitin nanofibers after 60 cycles had a mean diameter in the range of 10-15 nm. A similar nanofibril diameter (6.4-6.9 nm) was previously observed for β-chitin nanofiber prepared from Loligo vulgaris at different pH levels [36] and from Todarodes pacificus (5-10 nm) [5].
Nanofibrous β-Chitin Film Preparation and Characterization
One of the main drawbacks of chitin application is their very low solubility in common organic solvents and their solubility in complex solvent mixtures. It makes processing difficult to obtain novel materials with industrial applications such as films, sponges, nano/microparticles, fibers, aerogels, and composite materials. Recently, ionic liquids and other alkali-based solvents have been proposed to overcome this issue [37][38][39][40]. However, in all cases, there is still a need for solvent removal. Nanofibrillated chitin gels obtained in an acidic aqueous solution (pH 3) offer a unique possibility to process chitin-based materials in different forms and shapes for different applications. For instance, it can be used as reinforced material for biocomposite preparation [41] and it can be readily processed into films, hydrogels, or aerogels [8,16,42]. In the present work, the preparation and characterization of nanofibrillated chitin films are presented (Figure 7). This is only one example of their feasibility to be handled. This process takes advantage of chitin nanofibrils' natural tendency to self-assemble in aqueous media to form β-chitin fibers [36].
X-ray Diffraction (XRD) Analysis
The crystalline structure of the β-chitin, nanofibrillated β-chitin film, and nanofibrillated β-chitin foam was analyzed by x-ray diffraction. The corresponding diffractograms are shown in Figure 8. It is important to underline that β-chitin nanofibrils in films showed a similar pattern as β-chitin microfibril powder (see Table 3). This means that the β-chitin crystal structure is preserved after mechanical treatment. The diffractogram of β-chitin powder displays two broad peaks at 2θ = 8.22 • (inter-sheet distance: 10.76 Å) and 2θ = 19.32 • (4.59 Å), which corresponds to the crystal plane [010] and [110] reflection values, respectively [7,32]. The crystallinity index of β-chitin microfibril powder was 69.7%, which indicates a high crystalline structure. This value was higher than that obtained for chitin from squid pen L. vulgaris (58%) [30], similar to the one found for squid pen of the same specie (67%) caught in the Northern Hemisphere [43], and slightly lower than for chitin obtained from squid pen Illex argentinus (75%) [11]. The crystallinity index of the β-chitin nanofibrils film is slightly lower than that for the β-chitin microfibril powder. Moreover, the peak of the [110] plane of NF β-chitin shifted to a higher value, indicating a decrease in d-spacing (see Table 3). The d spacing values found for the β-chitin powder agreed with those previously reported (10.0 and 4.5) for this specie [43]. The D ap values in the perpendicular direction of the [010] plane were 6.0, and 4.7 nm for the original squid pen β-chitin and the nanofibers, respectively. This is an indication that the crystallite size in the [010] plane region decreases during fibrillation. These values are higher than those found in similar products obtained from the squid pen of T. pacificus (4 and 3.5 nm) [44]. According to these authors, the specific surface areas highly increased during the fibrillation process, and disordered regions in the XRD analysis were explained by the contact of nanofibers with the water. This could explain the lower crystallinity and crystallite size reduction in the NF β-chitin samples. All of these results confirm the disintegration of β-chitin powder into nanofibrous β-chitin during the mechanical treatment.
SEM Analysis of Films
In order to study the morphology of the obtained chitin nanofiber films, SEM analysis was performed. The results are presented in Figure 9. The surface of the nanofibrillated chitin film had a non-continuous rough and fibrous morphology ( Figure 9A). At higher magnification ( Figure 9B), a fiber collapse shape with some porosities could be seen. The side view (cross-section) depicts the films' inner fibrous appearance ( Figure 9C,D) and shows a formed three-dimensional interconnected network under the surface. It is important to note that chitin films seem to be self-assembled in nanofiber sheets. These micrographs confirm the high defibrillation reached during chitin processing at the bench scale.
Mechanical, Colorimetric, and Swelling Degree Properties
Mechanical properties of the β-chitin nanofiber films with different thicknesses were evaluated and are presented in Table 4. As the thickness of film increased from 0.02 to 0.05 mm, the tensile strength of the film increased significantly (from 3.9 to 17.4 MPa), whereas elongation at break decreased (from 3.5 to 1.7%). The Young's modulus of βchitin films also increased from 0.9 to 1.1 GPa. The mechanical parameters obtained in this work were in the same range as the data from the literature for other chitin films. Namely, Kaya et al. showed that E and TS values of α-chitin films extracted from B. giganteus cockroach dorsal pronotum (thickness 0.07 mm) were 492 MPa and 11.7 MPa, respectively, and the elongation at break was 3.3%. These parameters for chitin films extracted from cockroach wings (thickness 0.01 mm) were 476 MPa, 6.2 MPa, and 2.2%, respectively [45]. Other authors reported that nanofibrillated α-chitin films (thickness 0.06 mm) from crab showed E and TS values of 3.3 GPa and 32 MPa, respectively. However, they found a lower e value (1.3%) [46]. Furthermore, Ifuku et al. informed that NF αchitin films (thickness 0.06 mm) had a Young's modulus of 2.5 GPa and a TS of 44 MPa, respectively [47]. It is worth noting that those films had higher thicknesses than those from this work, affecting the final mechanical properties. Other authors have studied β-chitin nanofiber preparation from squid pen, and they found that the resulting film's (thickness 0.045 mm) mechanical properties (E = 2.8 GPa, TS = 19 MPa) changed with the number of passes during preparation [48]. The β-loss in chitin nanofiber crystallinity accounted for the decrease in TS observed from 10 to 50 passes, while the E values remained almost constant. In these previous works, the fiber cross-sectional width was near half (3-9 nm) that obtained in the present work, and it is known that thinner and larger nanofibers show higher mechanical properties. In general, the mechanical properties of NF films are influenced by chitin chemical composition (e.g., Mw, CI, DA) and nanofiber structure (length/width ratio), which in turn depends on the biopolymer source along with the isolation procedure, and the NF preparation methods [43,49,50]. Other properties that could affect the nanofibrillated chitin film mechanical properties are the porosity, protein content, thickness, preparation method, and NF orientation [51][52][53]. Finally, these NF were prepared at a laboratory scale under controlled conditions. Considering this fact, the obtained results from this work at a bench-scale are auspicious if this cheaper process would be further scaled. In order to check the colorimetric parameters of the obtained chitin nanofiber films and their changes within the different thickness of films, the CIELAB color system test was applied. According to the parameters presented in Table 4, the films had a high L* value, which means that they had high lightness. The negative a* value and positive b* value indicated that films were green-yellowish. ∆E* is a parameter that can show the difference in the color between the samples. In this paper, the color differences among chitin nanofiber films of different thicknesses were tested toward the blank white surface. As can be seen, the ∆E* value was not significantly different for two different chitin nanofiber films, which means that there was no significant difference in their color. Moreover, the values of ∆E* were above 3, indicating that color change is visually perceivable.
The swelling degree of β-chitin nanofiber films was measured at three different pH values (4, 6, and 8). As shown in Table 4, the swelling degree of the tested films slightly decreased when the pH increased from pH 4 to pH 6. However, these changes were not significant. In addition, there were no significant changes in the swelling degree value between films of different thicknesses. On the other hand, under alkaline conditions, all swelling degree of all tested films decreased in a range of 7-12%. This result is expected since it is known that the pK0 of chitosan is around 6-6.5. At pH 4, there are positively charged glucosamine units in chitin chains, and hydrogen bonding with the molecules of water occurs. However, due to the high DA of β-chitin, the low amount of free-glucosamine units in the backbone are not enough to provoke a significant change in the degree of swelling of those films. As the pH increases and switches to the alkaline environment, the amine groups from chitin become deprotonated, which leads to repulsion interaction between the chitin chain and water molecules and reduced the swelling degree.
Finally, the biodegradable chitin nanofibers obtained in the current process could find application in several areas. For instance, it can be used as reinforced materials to obtain nanocomposites with improved mechanical and higher barrier properties against humidity and gases (O 2 and CO 2 ) [54]. This flexible and transparent nanomaterial can be used in food packaging applications to replace petroleum-based polymers. Moreover, chitin NF is using for food Pickering emulsion stabilization. This is due to its polycationic nature, capable of interacting with anionic proteins to stabilize the emulsion drops by hydrogen bond and hydrophobic interaction [55].
Additionally, chitin nanofibers can be used to prepare novel biological adhesives [56]. In the biomedical field, several biomaterials (e.g., films, membranes, aerogels) containing chitin nanofibers have been prepared. These were tested as a wound dressing and con-trolled release devices [57,58]. Furthermore, due to the antimicrobial properties of chitin nanofibers, they can be used in agriculture to protect plants against plant diseases and promote plant growth [59,60]. It is expected that the number of chitin NF applications will increase shortly, and some of them will reach the market, thus in this way triggering the commercial production of this material at a low cost.
β-Chitin Isolation at Bench Scale
First, squid pens (gladius) were washed with tap water to separate the remaining protein debris. The samples were cut in pieces using a cutting mill SM 200 (Retsch GmbH, Germany) to obtain samples of particles size ranging from 1-3 cm. Pen sample (7 kg) was demineralized with 1 M hydrochloric acid (solids to solvent ratio of 1:10 w/v) at room temperature for 2 h in a steel reactor. The mixture was mechanically stirred at 600 rpm. The demineralized sample was filtered and washed with deionized water until the washing became neutral (pH 7) and dried at 60 • C for 24 h. The deproteinization procedure was carried out in the same reactor, where the solid was stirred with 1 M sodium hydroxide (solids to solvent ratio of 1:20 w/v) at 100 • C for 3 h. The isolated β-chitin was washed with distilled water and dried at 60 • C in a vacuum oven and then weighted to determine yield percent.
Water and Ash Content
Squid pen water content was determined gravimetrically. Each sample was placed in a porcelain crucible and heated at 105 • C in a Thermo oven (Thermo Fischer Scientific, Waltham, MA, USA) up to the constant weight. Later, the dried sample was placed in a muffle Thermo (Thermo Fischer Scientific, Waltham, MA, USA) and heated at 900 • C in order to determine the ash content [1]. Each procedure was performed in triplicate. The same methodology was followed for chitin samples.
Total Protein Content
The total protein content (TPC) was measured by elemental analysis and calculated according to the following equation: TPC(%) = (N(%) − 6.9) × 6.25 (1) where N% represents the percentage of nitrogen determined by elemental analysis for each sample; 6.9 corresponds to the theoretical percentage of nitrogen in fully acetylated chitin (this value was adjusted as a function of the degree of acetylation); and 6.25 corresponds to the theoretical percentage of nitrogen in proteins. All the determinations were done in triplicate.
Chitin Content
The chitin content was estimated by the method reported by Black and Schwartz [2]. Namely, 0.5 g of sample was immersed in 50 mL of 0.1 M HCl, and the flask was heated at 100 • C for 1 h. Afterward, the flask was cooled to room temperature, and the content was centrifuged at 3500× g rpm for 10 min. The supernatant was separated from the precipitate, mixed with 45 mL of fresh DI water, and the resultant suspension was centrifuged again. This step was continuously repeated until the washing was no longer acidic. Then, 50 mL of 1.25 M NaOH solution was added to the chitin pulp product, and the mixture was heated at 100 • C for 1 h. At the end of this period, the mixture was centrifuged at 3500× g rpm for 10 min. The supernatant was carefully decanted from the precipitate, and 45 mL of fresh DI water was added, and the resultant suspension centrifuged. This process was repeated until the washings were no longer basic. Finally, the precipitate was washed twice with 50 mL acetone. The resulting precipitate was placed into a crucible and dried in the oven at 110 • C to constant weight. The residue should consist of chitin and silica present in the sample. The contents of the crucible were incinerated to constant weight in an electric muffle-furnace at a dull red heat (770 • C) until all carbonaceous matter was consumed. The crucible was cooled down and reweighed. The loss in weight was reported as chitin and compared to the original mass of the sample to examine the content of chitin.
Total Lipid Content
Total lipid content was determined according to the established method [61], but with slight modification [62]. Squid pen (10 g) was blended in a top-drive blender for 2 min with a mixture of distilled water (4 mL), methanol (20 mL), and chloroform (10 mL). An additional amount of chloroform (10 mL) was added, and the mixture was mixed in a vortex for 30 s. Distilled water (10 mL) was added to the mixture and vortexed for 30 s. The mixture was filtered under pressure on a glass filter. The content was collected in a separating funnel and the two layers were isolated. The water-methanol layer was removed by suction and the chloroform layer was recovered and concentrated on a rotary evaporator. The solute was finally dried in a Thermo vacuum oven (Thermo Fischer Scientific, Waltham, MA, USA). During the process, a few crystals of 2,6-bis(1,1-dimethylethyl)-4-methylphenol (BHT) were added to the samples to prevent oxidation. All the lipid extracts were weighed, and the lipid content percentage was calculated.
Infrared Spectroscopy Analysis
Fourier transform infrared (FTIR) spectra were recorded by a Nicolet Magna FTIR spectrophotometer (Nicolet Analytical Instruments, Madison, WI, USA). The spectrophotometer is connected to a PC with OMNIC™ software (Thermo Electron Corp., Woburn, MA, USA) to process data. The samples were prepared in KBr pellets at a concentration of 2% (w/w). The transmission spectra were recorded at 4 cm −1 resolution and 64 scans.
Thermogravimetric Analysis
The thermogravimetric studies were performed by a thermogravimetric analyzer Cahn-Ventron 2000 (Cahn Scientific, Irvine, CA, USA) with a microprocessor driven temperature control unit and a thermal analysis data station. The weight of the samples ranged between 5 and 10 mg. The aluminum sample pan was placed in the balance system equipment, and the temperature was raised from 25 to 550 • C at a heating rate of 10 • C min −1 under a N 2 gas flow of 50 mL min −1 . The sample pan weight was continuously recorded as a function of temperature.
Solid-State Cross-Polarization/Magic Angle Spinning 13 C NMR Spectroscopy (CP/MAS 13 C NMR)
The solid-state CP/MAS 13 C-NMR spectra of chitin samples were registered in a Bruker AMX 300 spectrometer (Bruker, Billerica, MA, USA). In all cases, 3072 scans were accumulated. The contact time was 1 ms, the repetition time 5 s, and the acquisition time was 50 ms. The internal reference (0 ppm) was 4,4-dimethyl-4-silapentane-1-sulfonic acid (DSS). The chitin degree of acetylation (DA%) was determined from the spectrum using the ratio between the intensities (I) of the -CH 3 group signal and the sum of the intensities of all carbon signals from the glucopyranosic ring, according to the following equation: DA(%) = I CH3 I C1 + I C2 + I C3 + I C4 + I C5 + I C6 /6 × 100
Scanning Electron Microscopy (SEM)
Morphological analysis was performed on an ETEC autoscan Model U-1 scanning electron microscope (University of Massachusetts, Worcester, MA, USA). The samples were fixed in a sample holder and covered with a gold layer for 3 min, using an Edwards S150 sputter coater (BOC Edwards, São Paulo, Brazil).
Reduced Viscosity Determination
The reduced viscosity of β-chitin was determined by an Ubbelohde capillary viscometer in a water bath at 25 ± 0.1 • C. First, β-chitin was dissolved in N,N-dimethylacetamide/LiCl 5 wt%, at a polymer concentration of 0.03 g d.L −1 at 25 • C. The relative (η r ), specific (η sp ), and reduced viscosities (η red ) of solutions were calculated according to the following equations: where η is the viscosity of the solution (or dispersion); η 0 is the viscosity of the solvent; and c is the (mass) concentration in g·mL −1 . The reduced viscosity is expressed in mL·g −1 .
β-Chitin Nanofibrils (NF) Preparation at Bench Scale
The β-chitin powder (2.1 kg) was ground in a blade mil IKA (IKA WERKLE, Staufen, Germany) and sieved up to a particle size 2 mm in diameter. The obtained solid (2 kg) was suspended in distilled water (200 L) at 1 wt%. The pH of the slurry was set up with HCl to pH 3, and the slurry was disintegrated using a proprietary massive collider operated at 25 • C ( Figure 10). The β-chitin slurries were made to shear the fibers apart by passing the fibers through a 12-inch rotating disc. The disk gap was first set to zero, corresponding to the starting point, where the two disks graze without pulp. The mechanical defibrillation machine was operated at 1 atm pressure. The equipment loaded with a 1 wt% chitin suspension (viscosity close to cero) was fibrillated at 1200 rpm. The disintegrated β-chitin was sampled at different cycles and finally recovered. The number of collision times (=cycle pass) was set to 10, 20, 40, and 60 passes. The obtained nanofibrous material was in the form of a concentrated suspension with a gel-like appearance. Figure 10 shows the NF β-chitin processing at a bench scale. The viscosity of the nanofibrillated gels was measured in a rotational viscometer (Fungilab, Barcelona, Spain) at 25 • C using a TL7 spin for all samples.
Transmission Electron Microscopy (TEM)
The chitin nanofibrillated samples were diluted to obtain a 0.01 wt% solution. The solution drops were placed in a Cu grid of 100 mesh and dried at room temperature. Then, the samples were analyzed by JEOL JEM 1200 EX II TEM equipment (JEOL, Tokyo, Japan).
X-ray Diffraction (XRD)
XRD diffractograms of chitin samples were obtained in order to evaluate their crystallite size and crystalline index. This analysis provided information about the changes in the crystalline structure of differently processed chitin nanofibrils. The XRD analysis was performed by a Bruker AXS model D4 Endeavor diffractometer (Bruker AXS GmgH, Karlsruhe, Germany) using monochromatic CuKα radiation (λ = 0.15418). The device generated a signal at 40 kV and 20 mA. The intensities were measured in the range of 5 • < 2θ < 40 • C for all samples, with a step size of 0.02 • and scans at one s/step. The intersheet distance was determined by Equation (6). The apparent crystallite size (Dap) was calculated using the Scherrer equation (Equation (7)), while the crystalline index (CrI) was calculated using Equation (8): where n is the order of reflection; λ is the radiation wavelength (in nm); and θ is the plane angle.
where β (in radians) was the half-width of the reflection; K was a constant, indicating the crystallite perfection, equal to 0.9; λ was the radiation wavelength (in nm); and θ was the plane angle.
where A Cryst was the sum of all crystalline signals and A Total was the total area of the diffractogram.
Preparation of Nanofibrous β-Chitin Films
In order to obtain the nanofibrillated films of different thicknesses (0.02 and 0.05 mm), different volumes of concentrated nanofibrous chitin suspension (1 wt%) were placed in Petri dishes and dried in a Thermo vacuum oven (Thermo Fischer Scientific, Waltham, MA, USA) at 60 • C for 24 h.
Characterization of Films
The obtained nanofibrous β-chitin films were subjected to XRD and SEM analysis, as described in Sections 3.3.8 and 3.4.3, respectively.
Mechanical, Colorimetric, and Swelling Degree Analysis
Mechanical analysis was performed on a universal testing machine SmarTens 005 (KARG Industrietechnik, Krailling, Germany), with a load cell of 1 kN and at 23 ± 2 • C, 45 ± 5% RH. The tensile test was performed on nanofibrous β-chitin film with a width of 5 mm and length of 3 cm. The crosshead speed was 2 mm/min. All mechanical analyses were carried out in triplicate.
In order to evaluate the color changes of the β-chitin nanofiber films of different thicknesses, colorimetric analysis was performed by a Biobase BCM-200 colorimeter (Biobase Meihua Co, Jinan, China). Two measurements (center and border) were taken on each sample. Colorimetric parameters were obtained using the CIELAB color scale to measure color: L* = 0 (black) to L* = 100 (white); −a* (greenness) to +a* (redness); and −b* (blueness) to + b* (yellowness). A white (L 0 * = 94.3; a 0 * = −0.9; b 0 * = −0.7) standard color was used for equipment calibration. The color difference (∆E) was calculated according to the following equation: In order to check the swelling degree of chitin, the film samples were cut into 1 × 1 cm 2 slices, and then the samples were kept in a desiccator with silica-gel for seven days. After this procedure, the samples were weighed and then subjected to immersion in glass vessels containing 10 mL of different buffer solutions (pH4, pH6, and pH8). After 24 h, samples were removed, put on tissue paper, weighed, and analyzed/recorded by a camera. The swelling degree (SD%) was calculated by Equation (10): where m 0 and m t are the initial weight and weight in a specific time interval.
Conclusions
This work presented the efficient extraction of β-chitin from marine-waste, squid pens of Dosidicus gigas. The β-form of chitin was confirmed by FTIR and NMR analysis. The degree of acetylation was 96%, as evaluated by NMR. Successful conversion of β-chitin into nanofibers at a semi-industrial scale was carried out in a collider machine under acidic conditions, which XRD, SEM, and TEM confirmed. It was shown that the number of passes of chitin through the collider machine could significantly influence the nanometer scale of the fibers. Namely, after 10 passes through the collider machine, the obtained β-chitin was in the form of visible microfibrils and nanofibrils. On the other hand, above 20 passes, the conversion of β-chitin to nanofibers was significantly improved, and the obtained nanofibers were in the range between 10 and 15 nm. The obtained nanofibers after the collider machine were in the gel-like form and were further converted into films by the solvent-casting method. Due to moderate swelling degree and good mechanical resistance, these β-chitin nanofibrous films have the potential to be further developed into food packaging, agricultural, wound dressing, or 3D-bioink material. Hence, the present work demonstrated a promising approach of utilization and industrial conversion of marine waste to β-chitin functional material with versatile potentials. | 9,256.8 | 2021-03-26T00:00:00.000 | [
"Materials Science",
"Environmental Science"
] |
Isolation of Onchocerca lupi in Dogs and Black Flies, California, USA
We implicated the black fly as a vector for this filarial zoonotic parasitic infection.
O nchocerca lupi is a zoonotic parasite capable of infecting dogs, cats, and humans. Human infection was first suspected in 2002, when a case of human subconjunctival filariasis was found to have a worm with morphology similar to that of O. lupi (1). Human infection was confirmed in 2011, when a subconjunctival nematode in the eye of a young woman in Turkey was identified by molecular methods as O. lupi (2). Overall, ≈10 confirmed or suspected human cases have been reported in Turkey (3,4), Tunisia (4), Iran (5), the southwestern United States (6), Crimea (1), and Albania (1). In most cases, clinical findings were similar, with a single immature worm found within a periocular mass. In the US case, a mature, gravid female worm was found within a mass in the cervical spinal canal of a young child in Arizona (6). The Centers for Disease Control and Prevention recently confirmed 5 additional cases in humans in the southwestern United States (M.L. Eberhard, unpub. data).
Several parasites of the genus Onchocerca are known to occur in North America, including 2 in cattle (O. gutturosa and O. lienalis) (7) and 1 in horses (O. cervicalis) (8).
In addition, at least 2 parasites of the native cervid species (9) are known to be endemic to North America; at least 1 of these (O. cervipedis) has been identified in deer in California (10). Although most Onchocerca species are associated with ungulates, O. lupi is unique in that it is primarily associated with canids. The first report of O. lupi infection was in a wolf in Russia (11). In the past 20 years, ≈70 cases of O. lupi infection have been reported in domestic dogs in the United States, Greece, and Portugal (12)(13)(14)(15)(16)(17). Probable cases also have been reported in Germany, Hungary, Switzerland, and Canada (16,18,19). Many affected dogs contained gravid female worms, presenting the possibility that canids may be a reservoir host for the parasite. The only additional species reported to have been infected were cats: 2 cases were documented in Utah, USA (20). Both cats were infected with gravid female worms, suggesting that cats also might be reservoir hosts. However, both cats also were infected with feline leukemia virus and probably were immunosuppressed and therefore not representative of most cats.
In the United States, confirmed and probable O. lupi infection has been documented in at least 12 dogs (17) and 2 cats (20) since 1991. That 6 of the 12 cases in dogs were in southern California (17,21) highlights this area as a focus of infection. Clinical signs in dogs typically involve
Isolation of Onchocerca lupi
in Dogs and Black Flies, California, USA 0.3-0.7-cm periocular masses that contain adult worms. Infections may be associated with additional ocular pathology ( Figure 1). The masses are typically subconjunctival or episcleral but can be found anywhere in the orbit (22). The life cycle of O. lupi, including the vector and its primary reservoir host, remains unknown. Determining the vector is the critical step in preventing exposure. Black flies (Simulium spp.) and biting midges (Culicoides spp.) are vectors for other species of Onchocerca (23) and might be vectors for O. lupi. Black flies are routinely detected in certain areas of Los Angeles County, including a 29-km stretch of the Los Angeles River (http://www.glacvcd.org/), in the San Gabriel Valley area (http://sgvmosquito.org/), and in western areas of the county (http://www.lawestvector.org/). We report 3 additional O. lupi infections in dogs in southern California and present molecular evidence implicating the black fly species S. tribulatum as the possible vector for this parasite.
Identification of Cases and Parasites
In Los Angeles County, the Los Angeles County Department of Public Health conducts animal disease surveillance. Private practice veterinarians report diseases in all species, including companion animals. Veterinarians are required to report infectious diseases, particularly those listed as being of priority (http://www.publichealth.lacounty. gov/vet/docs/AnimalReportList2013.pdf), as well as any unusual diseases.
In March 2012, a local veterinarian reported a case of onchocerciasis in a local dog (dog B). Discussions with the veterinary ophthalmologist and the laboratory examining ocular tissue from the dog revealed an earlier case (dog A) and a concurrent case (dog C).
In May 2006, a 10-year-old, spayed female Labrador Retriever mix (dog A) was examined by a veterinary ophthalmologist in Los Angeles, California. The dog had a brown, lobulated 16-mm episcleral mass in the lateral temporal area of the left eye. One week of topical ophthalmologic antimicrobial and corticosteroid therapy failed to shrink the mass, and it was surgically removed. No other abnormalities were found. The mass contained mixed inflammatory cells surrounding 2 fragments of a cuticle with 2 striae per ridge, characteristic of O. lupi. The dog was from the Hollywood Hills area of Los Angeles, ≈3 km south of the 29-km black fly control zone of the Los Angeles River. A travel history was not available.
In February 2012, the same veterinarian examined an 8-year-old female spayed Boxer (dog B). The dog had severe bilateral corneal ulcerations, a 10-mm conjunctival mass in the nasodorsal area of the right eye, and persistent mydriasis in the left eye. No other abnormalities were found. Corneal ulceration is a common disorder in Boxers, but the mass was considered to be unrelated to the ulcers (24). The mass was surgically removed and the ulcers treat- In January 2012, a 4-year-old pit bull mix (dog C) was examined by a veterinary ophthalmologist in San Diego. The dog had 2 episcleral masses (10 mm and 5 mm) at the lateral limbus of the left eye. The masses were immediately adjacent to each other and were associated with the lateral rectus muscle. No other abnormalities were found. The masses were surgically removed, and O. lupi was identified morphologically in the tissues. The dog was living at a humane society in San Diego, and further history on the dog was not available.
Black Fly Collection and Processing
During April-August 2013, we collected 248 black flies from 13 locations in the San Gabriel Valley in Los Angeles County throughout an area ≈380 km 2 . This area is ≈40-50 km east of the veterinary clinic that diagnosed O. lupi infection in dogs A and B and ≈180 km north of the clinic that diagnosed it in dog C. The area contains the watershed of the San Gabriel River. The convenience sample of black flies was caught in CO 2 -baited encephalitis virus surveillance traps that had been set for mosquito collection. Each fly was identified as belonging to the Simulium genus by standard taxonomic keys and was fixed in 70% ethanol.
We prepared DNA from the individual flies by using the DNeasy Blood and Tissue Kit (QIAGEN) following the manufacturer's instructions. Flies were prepared in batches of 12 samples; each batch contained 10 individual flies and 2 sham extractions that served as negative controls. A total of 2 μL of the purified genomic product was then used as a template in a nested PCR targeting the O. lupi CO1 gene. All PCRs were conducted in a total volume of 50 μL. The initial amplification reaction was conducted in a solution of 300 mmol/L Tris-HCLl (pH 9.0); 75 mmol/L (NH 4 ) 2 SO 4 ; 10 mmol/L MgCl 2 ; 200 µmol/L each of dATP, dCTP, dGTP, and dTTP; 0.5 mM of each primer, and 2.5 U of Taq DNA polymerase (Invitrogen). Primer sequences employed in the initial reaction were 5′-TGTTGCCTTT-GATGTTGGGG-3′ and 5′-GGATGACCGAAAAAC-CAAAACAAG-3′, and amplification conditions were 94°C for 3 min, followed by 35 cycles of 94°C for 45 s, 52°C for 30 s, and 72°C for 90 s, with a final extension of 72°C for 10 min. This reaction produced an amplicon of 475 bp. A total of 1 μL of the product of the first reaction was used as the template in the nested reaction, which used the buffer conditions described above and primers with the sequences 5′-TCAAAATATGCGTTCTACTGCTGTG-3′ and 5′-CAAAGACCCAGCTAAAACAGGAAC-3′. Cycling conditions consisted of 94°C for 4 min, followed by 40 cycles of 94°C for 45 s, 50°C for 45 s, and 72°C for 90 s, and a final extension of 72°C for 10 min. This reaction produced an amplicon of 115 bp. Products from the nested reaction were analyzed by electrophoresis on a 1.5% agarose gel. Samples producing a band of the appropriate size in the initial screens (115 bp) were subjected to a second independent PCR. Samples producing products of the expected size were considered as putative positives. Amplicons of putative positives were subjected to DNA sequence analysis to confirm the identity of the product, using a commercial service (Genewiz, South Plainfield, NJ, USA). We calculated 95% CIs surrounding the estimate of the proportion of infected flies using standard statistical methods (28).
Fly Identification
We amplified a portion of the mitochondrial 16S rRNA gene from the DNA prepared from the infected flies, following previously published protocols (29). The primers used in the amplification reaction were 16S F: 5′-CGCCT-GTTTATCAAAAACAT-3′ and 16S R: 5′-CTCCGGTTT-GAACTCAGATC-3′. The resulting amplicons were subjected to DNA sequence analysis, as described above. The DNA sequences obtained were submitted to the GenBank sequence database under accession numbers KP233211 and KP233212.
Black fly larvae were collected from 4 sites near the locations where the infected flies were trapped. The isolated larvae were cut in half horizontally immediately upon collection. The anterior end of each larva (head) was fixed in 70% isopropanol (rubbing alcohol), and the posterior end (abdomen) was fixed in Carnoy's solution (3 parts 95% ethanol and 1 part glacial acetic acid by volume).
DNA was prepared from the heads of the fixed larvae and used to amplify the portion of the mitochondrial 16S rRNA gene, as described above. The abdomen of each larva was opened ventrally with fine needles and stained with the Feulgen method (30). Salivary glands with stained nuclei and 1 gonad for sex determination were dissected from the abdomen, placed in a drop of 50% acetic acid, flattened under a coverslip, and examined with oil immersion. Identifications were based on diagnostic species-specific rearrangements of the polytene chromosomes (31,32).
Results
Unstained, formalin-fixed, paraffin-embedded tissue from the 3 dogs was used to amplify parts of 2 mitochondria-encoded (CO1 and 12S rRNA) and 1 nuclear-encoded (rRNA ITS1) genes. On the basis of the alignments and phylogenetic analyses, each parasite isolated from the 3 dogs was shown unequivocally to be O. lupi. For example, the sequences obtained from the ITS1 amplicon from the parasites from each dog were close to 100% identical to an O. lupi isolate from Hungary ( Figure 2). Identical relationships were obtained when the 12S rDNA and CO1 PCR amplicons were analyzed (online Technical Appendix Figures 1, 2, http://wwwnc.cdc.gov/EID/article/21/5/14-2011-Techapp1.pdf).
Of the 248 individual black flies collected, 213 were screened using the nested PCR targeting the O. lupi CO1 gene. Of these, 6 (2.8%; 95% CI 0.6%-5.0%) produced nested amplicons of the expected size of 115 bp. The sequences of all 6 amplicons exactly matched the published O. lupi CO1 sequence (data not shown). We then attempted to recover the amplicon from the first reaction and determine the DNA sequence of this larger fragment. This attempt was successful in 4 of the 6 positive flies, resulting in 399 bp of sequence between the primer sites. The sequence of the recovered amplicons matched that of the GenBank reference sequence almost exactly in all 4 samples ( Figure 3). We noted single-nucleotide polymorphisms in 3 of the 4 amplicons when we compared them with published sequence; 2 of the isolates shared 1 polymorphism (Figure 3).
Of the 13 locations sampled, 5 contained positive flies (online Technical Appendix Figure 3); 1 location had 2 positive flies (Table). These 5 locations spanned ≈270 km 2 , covering most of the sampled area, except its northwest corner. Of the 5 positive sites, 4 were within a circle with a radius of 17.5 km (online Technical Appendix Figure 3). Three of the 5 positive collection sites were located along the San Gabriel River (online Technical Appendix Figure 3). All positive flies were collected during the spring (April 22-June 4, 2013). To determine the identity of the infected flies, we amplified a portion of the black fly mitochondrial 16S rRNA gene from the remaining DNA samples. This sequence has previously been shown to be phylogenetically informative in distinguishing several North American black fly species (29). A comparison of the sequence data obtained from the amplicons with the GenBank sequence database showed that the sequences were most similar to members of the genus Simulium; however, an exact match was not obtained to any of the sequences in GenBank, which precluded identification of the infected flies to the species level (data not shown). To identify the flies to the species level, we collected black fly larvae from sites near the locations from which the infected flies were trapped (online Technical Appendix Figure 3). These larvae were bisected and preserved for molecular and cytotaxonomic analyses. The diagnostic portion of the 16S rRNA gene was then amplified from the collected larvae and compared with the sequences obtained from the infected flies. Larvae that had sequences matching those of the infected flies exactly were then identified by cytotaxonomy. Of the 6 infected flies, 5 were identified as S. tribulatum using this process (Table). Two 16S mitochondrial alleles were identified in the population of larvae and infected flies identified as S. tribulatum. These were designated S. tribulatum A and S. tribulatum B, which were 97% similar to one another (online Technical Appendix Figure 4). S. tribulatum B was identified in the infected flies from the Santa Fe Dam, whereas infected flies from the Monterey Park City Yard, Bernard Biostation, and Walnut Coop contained S. tribulatum A (Table). The sequence of the infected fly from Rainbow Canyon Ranch matched that of 1 larva in the collection; however, definitive cytotaxonomic identification of this larva was not successful because of poor fixation.
Discussion
Our data imply that O. lupi infection in dogs is ongoing in southern California. The possibility that dogs might be serving as sentinels for this infection suggests that humans and cats in the area also could be at risk for infection.
Several other Onchocerca species are endemic to North America. However, except for O. lupi, all of these are known to use ungulates as their primary hosts. Thus, isolation of these parasites from dogs, together with the phylogenetic analysis of 3 different gene sequences that all group the isolates with O. lupi, strongly support the identification of these parasites as O. lupi.
The nested PCRs detected CO1-derived amplicons with sequences 99.5%-100% identical to the published O. lupi CO1 sequence in 6 flies. Some of these sequences could be derived from other Onchocerca because Simulium spp. flies are known to be vectors for several Onchocerca species for which sequence data are not available (33,34). However, previous studies have suggested that sequence variation in the mitochondrial genome varies from 7% to 15% among Onchocerca species (9,35), and intraspecies variation within the mitochondrial genome is limited in the genus Onchocerca (9,35). Therefore, the sequences detected in the flies are unlikely to have derived from a species other than O. lupi.
Our data suggest that the black flies collected frequently fed on a host species that was infected with O. lupi, a host that remains unidentified. However, our data implicate S. tribulatum flies as the vector for O. lupi in southern California. S. tribulatum, a member of the S. vittatum species complex, is one of the most abundant and widespread species of Simulium flies in North America (36). S. tribulatum flies generally feed on large mammals (e.g., cattle or horses) and rarely bite humans (36).
The implication of S. tribulatum flies as a possible vector for O. lupi also might provide insight about the reasons that O. lupi cases have primarily been found in the southwestern United States. Cities and human settlements there typically rely on anthropogenic water sources, such as aquifers, reservoirs, and other water impoundments. The S. vittatum complex (of which S. tribulatum is a member) includes some of the few black fly species in North America that prosper in these environments (36).
The assay we used to detect O. lupi in the black flies cannot distinguish between viable and nonviable parasites or immature and infective larvae. Thus, although our data implicate S. tribulatum flies as the vector, additional studies are needed to confirm this hypothesis. Laboratory colonies of S. vittatum, a sibling species of S. tribulatum, could prove useful in confirming that flies of this species complex are actually competent vectors for this parasite (37).
The black flies that tested positive for O. lupi came from geographic locations adjacent to the San Gabriel River and its watershed. During the past 20 years, southern California has tried to restore natural watershed and wetland habitats, including those in the San Gabriel Valley area (38). Black flies rely on rivers and other bodies of water, often with aquatic vegetation for egg laying and larval development, all of which the San Gabriel River and Los Angeles River watersheds provide.
The San Gabriel Mountains are directly upstream of the sites from which we collected the larvae and positive flies. The San Gabriel River, its watershed, and its recreational areas are likely to be providing a wildlife corridor that enables an easily accessible transmission interface. Although most cases in canids have been described in domestic dogs, the relative rarity of infections in domestic animals suggests that the parasite uses a different species as its primary reservoir. The ubiquitous presence of coyotes and other nondomestic canids in the San Gabriel watershed might provide a convenient natural reservoir for the parasite. Additional studies involving sampling of the coyote population in the area, coupled with molecular identification of the blood meals taken by the local black flies (39), would be useful in resolving these questions. Prevention of O. lupi infection ultimately might rely on effective Simulium control programs, which must address black fly breeding in a variety of settings. The most effective control methods used for the past 20 years in the San Gabriel Valley have been applications of VectoBac 12AS (Bti) (K. Fujioka, San Gabriel Valley Mosquito and Vector Control District, pers. comm.) and occasionally stopping the flow of water for a minimum of 48 hours because the larvae are vulnerable to desiccation (40). The role of ivermectin, milbemycin, and other heartworm preventive medications commonly used in dogs and cats is unknown. These medications would probably kill microfilariae, but their efficacy against infective L3 larvae of O. lupi is unknown. These medications in pets may play a role in preventing infection or in preventing infected pets from serving as reservoir hosts, reducing transmission of this infection. | 4,376.4 | 2015-05-01T00:00:00.000 | [
"Biology"
] |
Gated Dehazing Network via Least Square Adversarial Learning
In a hazy environment, visibility is reduced and objects are difficult to identify. For this reason, many dehazing techniques have been proposed to remove the haze. Especially, in the case of the atmospheric scattering model estimation-based method, there is a problem of distortion when inaccurate models are estimated. We present a novel residual-based dehazing network model to overcome the performance limitation in an atmospheric scattering model-based method. More specifically, the proposed model adopted the gate fusion network that generates the dehazed results using a residual operator. To further reduce the divergence between the clean and dehazed images, the proposed discriminator distinguishes dehazed results and clean images, and then reduces the statistical difference via adversarial learning. To verify each element of the proposed model, we hierarchically performed the haze removal process in an ablation study. Experimental results show that the proposed method outperformed state-of-the-art approaches in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), international commission on illumination cie delta e 2000 (CIEDE2000), and mean squared error (MSE). It also gives subjectively high-quality images without color distortion or undesired artifacts for both synthetic and real-world hazy images.
Introduction
Outdoor images are degraded by various atmospheric particles such as haze and dust. Especially, haze reduces the visibility of the image and disturbs the clarity of distant objects because of the effect of light scattering by particles in the air. Early dehazing techniques were based on mathematical optimization. Huang et al. proposed a visibility restoration (VR) technique using color correlation based on the gray world assumption and transmission map based on depth estimation [1]. Tan et al. proposed a Markov random field-based graph cut and belief propagation method to remove haze without using geometrical information [2]. Ancuti et al. removed the haze by identifying the hazy region through hue disparity between the original image and the 'semi-inverse image' created by applying a single per pixel operation to the original image [3]. Shin et al. removed the haze using a radiance and reflectance combined model and structure-guided l 0 filter [4]. Qu et al. presented a dehazing method based on a local consistent Markov random field framework [5]. Meng et al. presented the l 1 norm-based contextual regularization and boundary constraints-based dehazing method [6]. Liang et al. proposed a generalized polarimetirc dehazing method via low-pass filtering [7]. Hajjami et al. improved the estimation of the transmission and the atmospheric light applying to Laplacian and Gaussian pyramids to combine all the relevant information [8].
In spite of the mathematical beauty, a mathematical optimization-based method cannot fully use the physical property of the haze. To solve that problem, various physical model-based dehazing methods were proposed. He et al. estimated the transmission map by defining the dark channel prior(DCP), which analyzed the relationship between the clean and the hazy images [9]. Zhu et al. proposed a method of modeling the scene depth of the hazy image using color attenuation prior (CAP) [10]. Bui et al. calculated the transmission map through the color ellipsoid prior applicable in the RGB space to maximize the contrast of the dehazed pixel without over-saturation [11]. Tang et al. proposed a learning framework [12] by combining multi-scale DCP [9], multi-scale local contrast maximization [2], local saturation maximization, and hue disparity between the original image and the semi-inverse image [3]. Dong et al. used a clean-region flag to measure the degree of clean-region in images based on DCP [13].
However, these methods can lead to undesired results when the estimated transmission map or atmospheric light is inaccurate. Recently, deep learning-based transmission map estimation methods were proposed in the literature. Cai et al. proposed a deep learning model to remove the haze by estimating the medium transmission map through the end-to-end network, and applied it to the atmosphere scattering model [14]. To estimate the transmission more accurately, Ren et al. provided a multi-scale convolutional neural network (MSCNN) [15], and trained the MSCNN using the NYU depth and image dataset [16]. Zhang et al. presented a densely connected pyramid dehazing network (DCPDN), and reduced the statistical divergence between real-radiance and estimated result using the adversarial networks [17]. If the weight parameters, which reflect the transmission map and atmospheric light, are inaccurately learned, the DCPDN results in a degraded image. Figure 1b,c show image dehazing results based on two different atmospheric scattering models [9,17], respectively. As shown in the figures, inaccurate estimation results in over-saturation or color distortion since the transmission map is related to the depth map of the image. On the other hand, the proposed method shows an improved result without saturation or color distortion as shown in Figure 1d, which is very close to the original clean image shown in Figure 1e. As previously discussed, atmospheric scattering model-based dehazing methods result in undesired artifacts when the transmission map or atmospheric light is inaccurately estimated. To solve this problem, Ren et al. proposed a dehazing method that fuses multiple images derived from hazy input [18]. Qu et al. proposed a pixel-to-pixel dehazing network that avoids the image to image translation by adding multiple-scale pyramid pooling blocks [19]. Liu et al. observed that the atmosphere scattering model is not necessary for haze removal by comparing direct and indirect estimation results [20]. Inspired by these approaches, we present a novel residual-based dehazing network without estimating the transmission map or atmospheric light. The proposed dehazing generator has an encoder-decoder structure. More specially, in the proposed encoder-decoder network, the local residual operation and gated block can maximize the receptive field without the bottleneck problem [18]. In addition, to reduce the statistical difference between the clean and dehazing result, the proposed discriminator decides if the generator's result is real or fake by comparing with the ground truth. The proposed network is trained by minimizing the l 1 -based total variation and Pearson X 2 divergence [21,22]. This paper is organized as follows. In Section 2, the related works are presented. The proposed gated dehazing network is described in Section 3 followed by experimental results in Section 4, and Section 5 concludes this paper with some discussion.
Related Works
This section briefly surveys existing dehazing methods that partly inspired the proposed method. In particular, we describe: (i) the formal definition of the atmosphere scattering model, (ii) how the gated operation is used not only in the deep learning model but also in dehazing, and (iii) how the generative adversarial network (GAN) is used in a dehazing method.
Atmosphere Scattering Model
A haze image I (x) acquired by a digital image sensor includes ambiguous color information due to the scattered atmospheric light A whose effect is proportional to the distance from the sensor d (x). To estimate the scene radiance of the haze image, the atmospheric scattering model is defined as [23] where x represents the pixel coordinates, I (x) the observed hazy image, J (x) the clean haze-free image, A the atmospheric light, e −βd(x) = t (x) the light transmission, d (x) the depths map of the image, and β the scattering coefficient.
Gated Network
Ren et al. proposed a gated fusion network (GFN) that learns a weight map to combine multiple input images into one by keeping the most significant features of them [18]. In their original work, they referred to the weight map as the confidence map. The weight map in a GFN can be represented as [18,24,25] w 1 , w 2 , . . . , where F i for i = 1, . . . , n represents the feature map of the i-th layer and w i for i = 1, . . . , n, the weight or confidence map through the gate. Using F i and w i , the final feature is computed as where represents the element-wise multiplication. Chen et al. showed that the gated operation-based haze removal method is effective through the smooth dilated convolution with a wide receptive field and the gated fusion method using elemental-wise operation [24].
Generative Adversarial Network
The GAN is learned by repeating the process of generating a realistic image in which the generator tries to confuse the discriminator. GAN is formulated as [22] arg min where x and z respectively represent the real image and random noise, and D and G respectively the discriminator and generator [22]. When the random noise z is replaced to haze image I, this adversarial learning can be effectively applied to dehazing approach [17].
Proposed Method
Since it is hard to estimate the accurate transmission in Equation (1), we propose a novel residual-based dehazing method that does not use the transmission map. Figure 2 shows the effect of structures to the network. This section describes details of the proposed method including gate network, global residual, and adversarial learning.
The Gated Dehazing Network
To remove the haze, the proposed generator takes a single haze image as an input as shown in Figure 3a. The generator goes through the processes of encoder, residual blocks, gate block, decoder, and global residual operation. In the encoding process, the resolution of the input image is reduced twice through convolution layers, while the number of output channels accordingly increases. The encoding process of proposed method can be expressed as where W k , b k , and σ k respectively represent the weight, bias, and Relu activation function [26] of the k-th convolutional layer, and * the convolution operator. The hazy image I was used as an input in the encoding process to have 3 input channels. The following convolutional layers have 64 output channels with 3 × 3 kernel. To extract feature maps, the input image is reduced by 4 times using stride 2 twice. In addition, 128 output channels and K = 4 layers are used in the encoding block. Since the encoded features have low-scale and large channels, the proposed network has large receptive fields. In other words, the proposed network computes more wider context information [27]. However, the bottleneck effect decreases the performance of the proposed network because too many parameters are required to restore the features with large receptive fields [28]. If a network is constructed without residual blocks, the bottleneck problem is as shown in Figure 2b. To solve this problem, the proposed network also uses the hierarchical information from low-level to high-level features using five residual blocks as where W e 4 , E 4 , b e 4 , and σ e 4 respectively represent the weight, feature map, bias, and Relu activation function for the last layer of the previous encoding block. R 1 represents the first residual feature map through element-wise summation with the last feature map of the encoding block. W r n−1 , R n−1 , b r n−1 and σ r n−1 respectively represent the weight, residual feature map, bias, and Relu activation function of the n − 1th block of the residual process. For the residual blocks of the proposed method, 25 layers were used. Figure 2c shows that the bottleneck problem is solved by adding residual blocks. However, the enlarged red box does not converge due to the hierarchical information generated from the residual blocks. To solve this problem, we obtain the weights from low to high level through the gating operation inspired by the GFN [18] for the feature map to contain a hierarchical information generated in the residual blocks. The gating operation to obtain the feature map through the element-wise multiplication of the acquired weights and the value generated in the residual blocks can be defined as where W gate 1 and b gate 1 respectively represent the weight and bias of gate blocks. "[·]" represents the concatenation operation of residual feature maps with hierarchical information from low to high level, and "•" the element-wise multiplication of G f 1,2,...5 and hierarchical feature maps R 1,2,...5 .
The decoding layer reconstructs the restored image based on the generated and computed features [29]. In the decoding process, the resolution is restored as where W d n , b d n , and σ d 1 respectively represent weight, bias, and Relu activation function in the decoding layer, and "↑ 2 " the up-sampling operation with a scale of 2. The proposed decoding layer repeats the 3 × 3 convolution after up-sampling using bilinear interpolation twice to decode the image to the original resolution.
The global residual operation can effectively restore degraded images, and can improve the robustness of the network [30,31]. In this context, we can formulate the relationship between the global residual operation and the input hazy image I as where σ gr 1 represents the Relu activation function in the global residual operation. Through summation of decoded D up and input hazy image I, the generator's dehazed image G (I) is acquired. We designed the network structure to solve the bottleneck problem for clean results shown in Figure 2d, where the proposed gated network can generate more hierarchical features. A list of parameters of the generator are given in Tables 1 and 2.
Although it was successful to obtain enhanced results of synthetic data using only a generator, adversarial learning was applied to obtain more robust results in real-world images. For the adversarial learning, we also propose the discriminator inspired by [32], which increases the number of filters while passing through layers and has a wider receptive field. The discriminator takes the dehazed image G (I) or clean image J as input. To classify the images, the proposed discriminator estimates the features using four convolutional layers as where W d n , b d n , BN d n , Ψ d n , and Φ respectively represent the n-th weight, bias, batch normalization, leaky Relu function [33], and sigmoid function in the discriminator. As in Equation (10), the discriminator takes G (I) or J as input, and the input channel is set to 3. As the number of channels passed through the layer increases, 192 output feature maps were extracted. In the last layer, the discriminator extracts a single output feature map to classify the image, and determines whether it is valid (1) or fake (0) applying to sigmoid function. Detailed parameters of the discriminator are given in Table 3. Output:
Overall Loss Function
The loss function of the proposed method consists of L l 1 , L vgg , L adv(D) , and L adv(G) . L l 1 can be obtained by using the mean absolute error between the dehazed image G (I) and the generated clean image J, which is formulated as where M = H × W × C represents the size of input image. L vgg used a pre-trained VGG16 network that can extract perceptual information from images and enhance contrast [21]. The equation for obtaining the VGG16 loss can be formulated as where V k (·) represents the pre-trained VGG16 network, and k = 1, . . . , 4 the number of layers containing important features. N k is the product of the height, width, and channel of the VGG16 layer corresponding to k. For stable learning and higher quality outputs, least squares generative adversarial network (LSGAN) [34], which is an improved version of the original GAN, was applied to our proposed model. The generator creates a dehazed image G (I), and the discriminator distinguishes whether the dehazed image is real or fake. The resulting adversarial loss is calculated as where hyper-parameters λ D , λ G = 0.2 were used for the discriminator and generator losses, respectively. As shown in Equation (13), the proposed adversarial loss goes through an optimization process that minimizes the Euclidean distance between the discriminator and generator. Figure 4 shows the learning process of the proposed model including: (i) computing the mean absolute error and perceptual losses using Equations (11) and (12), (ii) updating the generator after adding the previously obtained losses, and (iii) updating the discriminator loss in Equation (13). The generator loss is finally updated.
Experimental Results
To train the proposed model, Adam optimizer was used with learning rate (10) −4 for combined L l 1 and VGG16 loss (L l 1 + L vgg ) and (10) −6 for adversarial loss (L adv(G) , L adv(D) ). We used 10,000 images from indoor training set (ITS) and 18,200 images from outdoor training set (OTS) Part 1 for the training dataset [35]. We implement our model using a personal computer with a 3.70 GHz Intel Core i9-9900K processor and NVIDIA GTX 2080ti 12GB GPU. Training time took almost 30 hours and Pytorch was used as the framework. In consideration of the training speed and stability, the batch size was set to 10, and every image was resized to 256 × 256 for the input patch of the network. For testing, an input image was resized to 512 × 512 to generate the corresponding output. We used 500 synthetic objective testing set (SOTS) outdoor and indoor images, and proved the dehazing performance of the proposed model for 500 real-world hazy images provided by fog aware density evaluator (FADE) [36]. For a fair comparison, state-of-the-art dehazing methods including: DCP [9], CAP [10], radiance-reflectance optimization (RRO) [4], all-in one network (AOD) [37], DCPDN [17], and GFN [18] were tested together with the proposed method.
Performance Evaluation Using Synthetic Data
To synthesize a hazy image, depth map information is required. Li et al. synthesized hazy images using both indoor and outdoor data with depth information [35]. For example, in an outdoor environment, the depth range extends up to several kilometers, whereas in an indoor environment, it has several meters. For this reason, we used 500 SOTS outdoor and indoor images for experiments with various depths [35] and synthetically simulated haze images to evaluate the objective performance in the sense of MSE and PSNR, SSIM [38], and CIEDE2000 that calculates the color difference in the CIELAB space [39]. If two images become closer, SSIM approaches 1, while CIEDE2000 approaches 0.
Tables 4-7 show the averages of quantitative evaluation results for 500 SOTS outdoor and indoor images with consideration of statistical significance. As shown in Tables 4-7, the proposed method shows higher performance than any other methods in the sense of all metrics. Table 7. Comparative MSE results of dehazing methods on SOTS. Best score is marked in red.
Similarity Comparison Using Benchmarking Dataset
In order to evaluate the qualitative performance, we show dehazing results using different methods for both outdoor and indoor images as shown in Figures 5 and 6. Figure 5b shows an increased contrast, but the road region is over-enhanced due to an inaccurately estimated transmission map, resulting in color distortion and halo effect near strong edges. Figure 5c also shows a color distortion problem and remaining haze. Figure 5d shows completely removed haze at the cost of over-enhancement in the sky region. In Figure 5e, haze in all regions is not completely removed. Figure 5f exhibits an over-saturation in the sky region and an increased brightness. Figure 5g shows halo effect around strong edges and over-enhancement problem. On the other hand, the proposed method provides high-quality images without the aforementioned problems. Figure 5. Comparison of dehazing results using synthetic outdoor hazy images [35] for qualitative evaluation: (a) hazy input image, (b) DCP [9], (c) CAP [10], (d) RRO [4], (e) AOD [37], (f) DCPDN [17], (g) GFN [18], (h) the proposed method, and (i) clean image. Figure 6. Comparison of dehazing results using synthetic indoor hazy images [35] for qualitative evaluation: (a) hazy input image, (b) DCP [9], (c) CAP [10], (d) RRO [4], (e) AOD [37], (f) DCPDN [17], (g) GFN [18], (h) the proposed method, and (i) clean image.
The contrast of the Figure 6b,c is significantly increased, but their colors were distorted due to the incorrect transmission map estimation. In Figure 6d, the contrast is increased, but the haze is not completely removed. Figure 6e can preserve the original color, but the image is turbid. Figure 6f is brighter than normal and exhibits over-saturation around the chandelier. In Figure 6g, haze is removed well, but there are still halo artifacts near the strong edges. On the other hand, the proposed method provides a clean, high-quality image without color distortion or over-saturation even in an indoor environment.
Ablation Study
We present results of the ablation study by comparing results using: (i) only L l 1 , (ii) L l 1 + L vgg , and (iii) the complete version of the proposed method as shown in Figure 7. We also present the quantitative evaluation of the same ablation study with respect to different numbers of epochs as shown in Table 8. The experiments were performed using 500 non-reference real-world images without clean image pairs of FADE [36], and computes natural image quality evaluator (NIQE) [40]. A lower NIQE score indicates a relatively higher quality image. As shown in the enlarged purple box in the second row of Figure 7b, the dehazing result only exhibits undesired artifacts between the haze and tree regions, which results in an unnatural image. Figure 7c still has undesired artifacts and over-enhancement in the hazy region. Furthermore, color distortion occurs near the yellow line in the enlarged green box in the fourth row. On the other hand, the proposed method provides high-quality images without undesired artifacts.
Subjective Quality Comparison
Both quantitative and qualitative evaluations were conducted with the methods compared in Sections 4.1 and 4.2 to confirm that the proposed method provides high-quality images even in the real-world. To evaluate the non-reference image quality, the averages of FADE [36] 500 images were measured by NIQE and entropy and are given in Table 9.
As shown in enlarged blue boxes in Figure 8, existing methods result in over-enhanced roads, whereas the proposed method successfully removes the haze in the entire image without an over-enhanced problem. As shown in enlarged yellow boxes in Figure 8, the DCP generates color distortion in Figure 8b, and the RRO generates over-saturation in Figure 8d. On the other hand, the proposed reconstructs the shape of the vehicle to become visible in Figure 8h. As shown in enlarged green boxes in Figure 8, the haze is not removed well, whereas the proposed method makes the person clearly visible. Figure 8. Subjective comparison of different dehazing methods using FADE [36]: (a) hazy input image, (b) DCP [9], (c) CAP [10], (d) RRO [4], (e) AOD [37], (f) DCPDN [17], (g) GFN [18], and (h) the proposed method. Table 9. Comparative results of dehazing methods on FADE [36]. Best score is marked in red, and the second best score is marked in blue. In Table 9, the best score is marked in red, and the second best score is marked in blue. The proposed method gives the highest NIQE score and the second best entropy score with little difference from the best score. Based on these results, the proposed method clearly outperforms existing methods in the sense of both qualitative and quantitative evaluations. In Table 10, the execution times of the proposed method and learning-based methods are compared.The AOD method was faster, but the proposed method not only has better scores than AOD in terms of both NIQE and entropy, but also was faster than DCPDN and GFN. In addition, when the proposed method implemented under a GPU environment, the processing times can be reduced over 100 times.
Conclusions
If the atmospheric scattering model is inaccurately estimated, the resulting images become degraded. To solve this problem, we proposed a residual-based dehazing network without estimating the atmospheric scattering model, where a gate block and local residual blocks are applied to widen the receptive field without a bottleneck problem, and global residual operation is applied for robust training. The combined structures were designed to construct a robust generator while solving the problem arising from each structure. The proposed model is trained by minimizing the combined VGG16 loss, mean absolution error loss. Furthermore, LSGAN-based learning was applied to acquire robust results for a real image. The discriminator reduces the statistical difference between dehazed and clean images to reduce the statistical divergence. In order to prove the effectiveness of key elements in the proposed method, we conducted an ablation study with an in-depth analysis in Section 4.1. We compared the dehazing performance of the proposed method with state-of-the-art methods for both synthetic and real-world haze images in Section 4 to show that the proposed method performs best in the sense of metrics such as PSNR, SSIM, CIEDE2000, MSE, NIQE, and second in sense of entropy. This shows that the proposed model is a robust haze removal network without estimation of the atmospheric scattering model. In the future research, we will improve the proposed networks to remove dense haze.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,766.6 | 2020-11-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of the underlying noise. Assuming Hebbian weights for maximal synaptic conductances—that naturally balances the network with excitatory and inhibitory synapses—and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. This includes a memory phase where population of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appears and an asynchronous or noisy phase. When a weak stimulus input is applied to each neuron, increasing the level of noise in the medium we found an efficient transmission of such stimuli around the transition and critical points separating different phases for well-defined different levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different type and number of stored patterns and diverse network topologies, namely, diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex networks topologies, make very likely that it could also occur in actual neural systems as recent psycho-physical experiments suggest.
Introduction
Many physical systems present ambient and intrinsic fluctuations that often are ignored in theoretical studies to obtain simple mean-field analytical approaches. Nevertheless, these fluctuations may play a fundamental role in natural systems. For instance, they may optimize signals propagation by turning the medium into an excitable one-e.g., the case of ionic channel stochasticity in neurons that can affect the first spike latency [1,2] or enhance signal propagation through different neuronal layers [3] -, originate order at macroscopic and mesoscopic levels is too high, the network activity will be dominated by the noise preventing the input stimulus from being detected by the system.
On the other hand, activity dependent synaptic mechanisms, such as short-term depression and short-term facilitation, may be highly relevant in signal detection in noisy environments and can play a main role, for instance, in SR [15,17]. These synaptic mechanisms may modify the postsynaptic neural response in a nontrivial way. Synapses can present short-term depression when the amount of neurotransmitters that are available for release whenever an action potential arrives is limited, and consequently, the synapse may not have time to recover them if the frequency of arriving spikes is too high. On the contrary, short-term facilitation is determined by the excess of calcium ions in the presynaptic terminal which can increase the postsynaptic response under repetitive stimulation. Both synaptic processes could interact with noise and some neuron excitability and adaptive mechanisms to induce a strong influence during the processing of relevant signals or stimulus in the brain. In particular, it has been recently reported in single perceptrons and in network of binary neurons that the complex interplay among these synaptic mechanisms allow for efficient detection of weak signals at different levels of the underlying noise [15,17] and maintain coherence for a wide range of the intensity of such noise [8].
In this work, we demonstrate that these intriguing emergent phenomena appear also during the processing of weak subthreshold signals in more realistic neural media and in many different conditions. Therefore, it is highly likely that also they may appear in the actual neural systems, where different types of signals and stimulus are continuously processing in the presence of different sources of intrinsic and external noise. Moreover, the fact that the processing of weak subthreshold signals occurs at well defined different levels of noise-normally one relatively low and the other relatively high-can have strong implications concerning how the signal features are being processed. This can be clearly depicted in the case of more realistic Poissonian signals (see Results section) To demonstrate the robustness of our findings, we performed a complete analysis of the emergent phenomena changing many variables in our system. This confirms that the same interesting phenomena emerges in all these situations, including, for instance the case in which the number of neurons in the network and the number of stored patterns is increased. In addition, the phenomenon of interest also remains for non-symmetric stored patterns provided that there is a phase of transitions between a high activity state (up state) and a low activity state (down state) in the network activity. Including short-time synaptic facilitation at the synapses, competing with synaptic depression, causes also intriguing features. This includes a dependency with the level of facilitation, of the level of noise at which the subthreshold signals are processed and detected and an enhancing of the detection quality for large facilitation. Finally, we checked the robustness of our finding for more realistic network topologies, such as diluted networks, and complex scale-free and small-world topologies confirming that phenomenon is robust also in these cases.
Materials and Methods
The system under study consists of a spiking network of N integrate and fire neurons interconnected each other. The membrane potential of the i − th neuron then follows the dynamics t m dV i ðtÞ dt ¼ ÀV i ðtÞ þ R m I i ðtÞ where τ m is the cell membrane time constant, R m is the membrane resistance and V th is a voltage threshold for neuron firing. Thus, when the input current I i (t) is such that depolarizes the membrane potential until it reaches V th = 10mV an action potential is generated. Then, the membrane potential is reset to its resting value-that for simplicity we assume here to be zero -during a refractory period of τ ref = 5 ms. We can assign binary values s i = 1, 0 to the state of the neurons, depending if they have their membrane voltage above or below the voltage firing threshold V th . Furthermore, we assume that synapses between neurons are dynamic and described by the Tsodyks-Markram model introduced in [18]. Within this framework, we consider that the total input current I i (t) in the Equation (1) has four components, that is, þ DzðtÞ where I 0 is a constant input current. The second term I ext i represents an external input weak signal which encodes relevant information and for simplicity we assume to be sinusoidal, that is with frequency f s and a small amplitude d s . The fourth component of I i (t) is a noisy term that tries to mimic different sources of intrinsic or external current fluctuations, where z(t) is a Gaussian white noise of zero mean and variance σ = 1, and D is the noise intensity. Finally, the third component I syn i is the sum of all synaptic currents generated at neuron i from the arrival of presynaptic spikes on its neighbors. Following the model of dynamic synapses in [18], we describe the state of a given synapse j by variables y j (t), z j (t) and x j (t) representing, respectively, the fraction of neurotransmitters in active, inactive and recovering states. Within this framework, active neurotransmitters y j (t) are the responsible for the generation of the postsynaptic response after the incoming presynaptic spikes, and become inactive after a typical time τ in * 2 − 3ms. On the other hand, inactive neurotransmitters can recover during a typical time τ rec which is order of a half second for typical pyramidal neurons [18], a fact that induces shortterm synaptic depression. Recovered neurotransmitters become immediately active with some probability U-the so called release probability-every time a presynaptic spike arrives to the synapses. In actual synapses, U can increases in time with a typical time constant τ fac -due to some cellular biophysical processes associated to the influx of calcium ions after the arrival of presynaptic spikes-which induces the so called short-term synaptic facilitation.
The synaptic current generated at each synapse then is normally assumed to be proportional to the fraction of active neurotransmitters, namely y j (t), so the total synaptic current generated in a postsynaptic neuron i is: Here A is the maximum synaptic current that can be generated at each synapses, ij is the adjacency matrix that accounts for the connectivity matrix in the neural medium, and J ij are fixed parameters modulating the synaptic current which can be related, for instance, with maximal synaptic conductance modifications due to a slow learning process. In this way, one can choose these synaptic weights J ij following, for instance, a Hebbian learning prescription, namely: Here, J ij contains information from a set of P patterns of neural activity, namely fx m i ¼ 0; 1g, with μ = 1, . . ., P and i = 1, . . ., N that are assumed to have been previously stored or memorized by the system during the learning process. Here x m i denotes the firing (with membrane voltage above V th ) or silent (with V m below V th ) state of a given neuron in the pattern μ. The parameter a measures the excess of firing over silent neurons in these learned patterns, or more precisely a ¼ hx m i i i;m . Since jJ ij j can be in general very small and it is multiplying the single synapse currents, we have considered in (4) an amplification factor κ = 2000 to ensure a minimum significant effect of the resulting synaptic current (3) in the excitability of the postsynaptic neuron. Moreover, we also choose a mean node degree factor hki, instead of N in the denominator of J ij which is more appropriate since it gives a similar mean synaptic current per neuron for all type of network topologies considered in this study, including fully connected networks, diluted networks and complex networks such as scale-free and the classical Watts-Strogatz smallworld networks [19].
Following standard techniques from binary attractor neural networks, we can measure the degree of similarity between a state of the network and a certain stored activity pattern by means of an overlap function m μ (t) defined as: as well as describe the activity of the system through the mean firing rate: In order to visualize if our system is able to respond efficiently to some input weak stimulus, it is useful to quantify the intensity of the correlation, during a time window T, between the weak input signal and the network activity by computing, for instance, the Fourier coefficient at a given frequency f, of the network mean firing rate, that is, The relevant correlation, denoted C(D) in the following, it then is defined as the value of that is, the ration between the power spectrum computed at the frequency of the input signal f s and the amplitude of this weak signal.
Results
The effect of short-term synaptic depression As it was stated above, it is important to investigate the mechanisms involved in the processing of different stimulus by a neural system, in the presence of noise. This would determine the conditions of ambient or intrinsic noise at which the transmission of information can be more efficient mainly, when the relevant information of the stimulus is encoded in weak signals. This is particularly important in a complex neural system as it is the brain, where certain brain areas have to respond adequately to some signals, for instance, arriving from other specific brain areas or the senses, within a background of noisy activity. Following this aim, we have first studied how efficient is the processing of noisy weak signals in a network of N spiking neurons, when it stores a single pattern of neural activity, and where the synapses among neurons present short-term synaptic depression. Our study reveals the relevant signals can be processed by the systems at more than one level of the underlying noise, as it is depicted in Fig. 1. More precisely, the correlation measure C(D) presents, two well defined maxima, one at relatively low noise D 1 = 97.5 pA and the second at relatively large noise intensity D 2 = 265 pA. Model parameter values are indicated in the caption of the Fig. 1.
A full description of collective behavior of the network, by means of the temporal evolution of the mean firing rate and the overlap function compared with the weak sinusoidal input, and for increasing values of the noise parameter D along the curve C(D), is depicted in Fig. 2. Moreover, raster plots of the network activity, for the same cases shown in Fig. 2, are presented in Fig. 3.
Both figures show (more clearly illustrated in Fig. 3), that for relatively low noise the system is able to recall the stored pattern, which becomes an attractor of the system dynamics. The system, therefore, shows the associative memory property. When noise intensity is increased to some given value D 1 (around 97.5 pA in this figure), the dynamic regime of the system changes sharply to an oscillatory phase where the network activity periodically switches between a pattern and anti-pattern configurations. Around this phase-transition point D 1 , these oscillations start to be driven by the weak signal which causes the first appearing maxima in C(D). This periodically switching behavior correlated with the weak signal is clearly reflected in the overlap function and the mean firing rate (see Fig. 2), and relatively large-amplitude oscillations in these order parameters with the same frequency as the sinusoidal weak input signal appear. However, by increasing further the level of noise D, we observe that the correlation with the input signal is lost. For a further increase of noise around a given value D 2 (which in the simulations performed in the figure is about 265pA), a second peak in C(D) appears, where a strong correlation of the neural activity with the input weak signal is recovered. This noise level corresponds to the critical value of noise at which a second order phase transition between the oscillatory phase and a disordered phase emerges.
To check the influence of short term depression in the appearance of these maxima of the correlation function C(D), we have varied the neurotransmitter recovery time constant τ rec , which is a well know parameter that permits the tuning of the level of depression at the synapses. In fact, large recovering time constants are associated to stronger synaptic depression because the synapses need more time to have available neurotransmitter vesicles in the ready releasable pool. Therefore, we repeat the numerical study for several values of the time recovery constant τ rec = 250, 300, 350 ms, considering a network of N = 2000 neurons. The results are depicted in Fig. 4, where C(D) is shown for different values of τ rec . Two main intriguing effects are observed. First, the maxima of the C(D), at which there is a high correlation with the weak signal, appear at lower noise intensities when τ rec is increased. This is due to the fact that, when the level of synaptic depression is increased, the transitions between ordered and oscillatory phases and between oscillatory and disordered phases appear at lower values of noise intensity. This is due to the extra destabilizing effect over the memory attractors consequence of synaptic depression [20] and to the fact the maxima of C(D) occur precisely at these transitions points [17]. The second effect is that the correlation with the weak signal (the height of the maxima) increases with the level of depression, that is, the weak signal is processed with less noise, which is consequence of the phase transitions points-and therefore the maxima of C(D)-appear at lower noise values when synaptic depression is increased.
The effect of short-term facilitation
In general, synapses in the brain and, in particular, in the cortex can present-in addition to synaptic depression-the so called synaptic facilitation mechanism, that is an enhancement of the postsynaptic response at short time scales [18,21]. Both opposite mechanisms can interact at the same time scale during the synaptic transmission in a complex way whose computational implications still are far of being well understood. The study of the influence of both mechanisms during the processing of weak signals in a neural medium, constitutes a very suitable framework to investigate this interplay. With this motivation, we present in this section a computational study of how synaptic facilitation competing with synaptic depression influences the detection of weak stimuli in a network of spiking neurons. In the following computational study, we consider a fixed time recovery constant τ rec = 300 ms, which is within the physiological range of the actual value measured in cortical neurons with depressing synapses Fig. 1. The stored pattern in such that for i = 0, …, 799 s i = 1, that is neurons are active, and for i = 800, …, 1600 s i = 0 that is, these neurons are silent in the pattern. At the first maximum of C(D) that occurs around D 1 = 97.5pA there are oscillations of the network activity around the stored pattern and its antipattern which are correlated with the weak stimulus. At a second critical level noise D 2 = 265pA a transition between the above oscillatory phase and a disordered phase appears, and a second maximum of C(D) emerges where a very noisy mean activity in the network also is correlated with the weak stimulus. All depicted panels corresponds to the same cases shown in Fig. 2.
doi:10.1371/journal.pone.0121156.g003 [21]. Also, we take several values for the characteristic facilitation time constant, namely, τ fac = 100, 200, 500 ms, and U = 0.02. The results obtained for the correlation function C(D) for a network of N = 800 neurons are depicted in Fig. 5. In this figure, we can observe a clear dependence between the level of noise at which maxima in the correlation function appear and the characteristic facilitation time constant τ fac . In particular, the figure shows that, as τ fac increases, the maxima of C(D) emerge at lower noise intensities. Moreover, one observes that the intensity of the correlation at its low noise maximum grows whenever τ fac is increased.
A possible explanation for this phenomenology is the following: it is well known, that facilitation favors to reach the stored attractors and their subsequent destabilization, in auto-associative neural networks [22]. In other words, synaptic facilitation favors the appearance of the oscillatory phase. So then, for the same level of noise D, more facilitated synapses induce an easy recovery and posterior destabilization of the attractors, and therefore, an easy transition to the oscillatory phase from the memory phase. This, in practice, means that the transition point between the two phases, appears at lower values of the noise for more facilitated synapses, and it is precisely at this transition point, where the low noise maximum of C(D) appears. On the other hand, synaptic facilitation favors the recovery of the memory attractors with less error [22]. In this way, when the transition to the oscillatory phase occurs, attractors are periodically and transiently recovered with less error during some time so that, the coherence of the activity of the system with the weak signals is larger since it is not affected by this extra source of noise. These findings provide a simple mechanism to control the processing of relevant information by changing the level of facilitation in the system which can be done, for instance, controlling the level of calcium influx into the neuron or by the use of calcium buffers inside the cells. The effect of network size In order to verify the robustness of the results reported above and to observe the possible effects that may arise due to the finite size of the system used in our simulations, we have carried out a study of the system increasing the number of neurons in the network as N = 400, 800, 1600, 2000, but maintaining the rest of the parameters and considering spiking neurons with pure depressing synapses. The computed correlation C(D) for all these cases is depicted in Fig. 6, which reveals that the main findings of our previous study remain and are independent of the number of neurons in the system. In fact, different C(D) curves for different values of N do not present significantly changes neither in their shape and intensity, nor in the level of noise at which the different maxima of C(D) appear. These results permit us to hypothesize that our main findings here are enough general and could also appear in large populations of neurons as in cortical slices or even in some brain areas.
The effect of storing many patterns in the network
In the studies reported in the above sections, we have considered just one activity pattern of information stored in the maximal synaptic conductances. We study now, how robust are these findings when the number P of activity patterns stored in the system increases with all other parameters of the model unchanged. In our study, we have varied P from 1 to 10 in a network of N = 2000 neurons with pure depressing synapses (τ rec = 300 ms) and the corresponding correlation functions C(D) for all these cases are depicted in Fig. 7. One can appreciate that the phenomenon remains when P is increased, and while the maximum of C(D) that appears at high noise-around D 2 = 265pA-does not dramatically change with P, the number of stored patterns has a strong effect on the maximum of C(D) appearing at low noise. In fact, this maximum appears at lower level of noise and with more intensity as P is increased. neurons. This shows that as P increases the low-noise maximum of the correlation C(D) increases in intensity and appears at lower level of noise while the second correlation maximum remains unchanged. Each curve has been obtained with a single realization of the corresponding network. Other parameters values were as in Fig. 1.
A possible explanation for this intriguing behavior can be understood as follows. It is well known that in Hopfield binary neural networks the increase of the number of stored patterns induces interference among the memory attractors and therefore constitutes an additional source of noise that tries to destabilize them [23]. This also occurs in our spiking network and, in the presence of dynamic synapses, this destabilizing effect results in an early appearance of the transition between the memory phase and the oscillatory phase and therefore, in the appearance of the first low-noise maximum of C(D). Then, at relatively low levels of ambient noise D the transition from below to the oscillatory phase will occur at lower values of D in a network that stores a larger number of patterns. At relatively large values of the ambient noise, however, the main destabilizing effect is due to the ambient underlying noise, and therefore, the effect of increasing P, although is also present, it is less determinant. On the other hand, the amplitude of the maximum of C(D) increases with P because, as explained above, the transition among memory and oscillatory phases occurs at low value of the ambient noise for P larger. Then, the thermal fluctuations in the memory phase and during the phase transition to the oscillatory phase are lower. In this way, during the oscillations starting at the transition point, the attractors are recovered transiently with less error which induces the coherence with the weak signal to be larger.
The effect of the asymmetry of the stored pattern
The features of the pattern-antipattern oscillations which characterize the oscillatory phase in neural networks with dynamic synapses (including short-term facilitation and depression), are highly dependent on the particular symmetry in the number of active and silent neurons in the stored pattern. This is controlled by the parameter a (introduced in the definition of the synaptic weights (4)). In all the results reported in previous sections, we have considered a = 0.5, which causes an oscillatory phase characterized by a regime of symmetric oscillations between an activity state correlated with the stored pattern and another with the same level of activity correlated with the antipattern. If we consider a 6 ¼ 0.5, an asymmetry will be induced in the mean network activity, that is, there will be an excess of 1's over 0's or vice versa during pattern-antipattern oscillations. In fact, oscillations occurs between an high activity (Up) state and a low activity (Down) state. Moreover, this asymmetry in the activity of the stored pattern can have a strong influence in the phase diagram of the system and can cause even that the oscillatory phase does not emerge. Since the phenomenology of interest here-namely the emergence of an network activity correlated with a weak subthreshold stimulus-is highly dependent on the transition points at which the system moves over different phases, it is reasonable to think that the parameter a will have a strong influence on it.
We have performed a computational study in a network of N = 800 neurons with pure depressing synapses (τ rec = 300 ms) to investigate this particularly interesting issue, for which we have considered a single stored pattern P = 1 with a = 0.40, 0.42, 0.43, 0.45, 0.47, 0.48 and whose results are depicted in Fig. 8. We observe here a very interesting and intriguing effect in the shape of the correlation C(D) when a is varied. The maximum in the correlation between the network activity and the weak signal at low noise tends to disappear as the value of a decreases from the symmetric value a = 0.5. In fact, when a < 0.45 the correlation C(D) drops abruptly at that point, around D 1 = 100pA. As we have explained above, a large level of asymmetry in the stored pattern could impede the appearance of the Up/Down transitions characteristic of the oscillatory phase which, therefore is absent. The consequence is that, there is not a transition point between a memory phase and an oscillatory phase which impedes the emergence of the low noise maximum of C(D). On the contrary, the second maximum of C(D), which appears at high noise around D 2 = 265pA, remains invariant for all values of a studied here. The explanation to this second situation is also simple because, although asymmetry in the stored pattern is present, the phase diagram of the system still presents a phase of memory retrieval at low noise and a non-memory phase at large noise, separated by a second order phase transition point around which the second maximum of C(D) is originated.
The effects of the underlying network topology
In previous sections, we have considered for simplicity-as our system under study-a fully connected network of spiking neurons. This is far to be the situation in actual neural systems, where neurons are not all connected to each others. In fact, biological neural systems are characterized by a underlying complex network topology which is consequence of different biophysical processes during their developing, including among others, exponential growth at early stages of developing and posterior synaptic pruning processes [24]. All these processes are also influenced by limitations in energy consumption in the system. In this section, we explore if the emergence of several maxima, as a function of the underlying noise, in the correlation between the network activity and some weak subthreshold stimulus, is altered when more realistic network topologies are considered.
We have considered first the case of a random diluted network. We can configure this network topology starting, for instance, starting with a fully connected network and then removing randomly a certain fraction δ of the synaptic connections. In Fig. 9A, it is depicted the resulting correlation function C(D) for single realizations of diluted networks generated in this way with N = 800 and δ = 10%, 20%, 30% and 40%. The figure illustrates two main findings. First, the robustness of the main emergent phenomena described in the above sections also in this type of diluted networks, and second that, as the dilution grows and a higher fraction of connections is removed, both maxima of C(D) appear respectively at lower levels of noise. Moreover, if dilution is too high, it seems that the maximum appearing at low noise tends to disappear. This only can be consequence that the stable memory attractors lose stability-due to strong dilution-and disappear in the presence of ambient noise, in such a way that only an oscillatory phase and non-memory phases are present.
In our analysis with a diluted network, we performed dilution starting with a fully connected network where hki = N. To avoid the possible effect of a factor 1/N normalizing the synaptic weights (4) during dilution, we have done an additional analysis considering a diluted network with a normalizing factor in the weights hki = (1 − δ) 2 N, which is the mean connectivity degree in the resulting diluted network with δ being the probability of a link to be removed during the dilution process. The corresponding results are summarized in the Fig. 9B. One can observe also that results are similar for this second type of dilution, that is, the low noise maximum of C(D) moves toward lower values of the ambient noise and even can disappear as dilution is increased. The main difference with the first type of dilutions is, however, that the level of noise at which the high noise maximum of C(D) is not dramatically affected by dilution. In Fig. 9B, the correlation curves C(D) have been obtained after averaging over 10 realizations of a network of N = 200 neurons with pure depressing synapses (with τ rec = 200 ms).
Diluted networks, however, are homogeneous and do not introduce complex features which could induce more intriguing behavior in the system. Even more interesting and realistic concerns the case of networks with complex topology such as the so called scale-free networks, where the node degree probability distribution is p(k) * k −γ , with k being the node degree. In fact, it has been recently reported that these complex topologies can induce additional correlation with the network activity and the processed weak stimulus due to the network structural heterogeneity [13]. Fig. 10 summarizes our main results concerning the case of complex networks with scale-free topology. Correlation curves C(D) have been obtained after averaging over 10 realizations of a scale-free network of N = 200 neurons with pure depressing synapses (with τ rec = 200 ms) and all other model parameters as in Fig. 1. We see that similarly to the cases studied above, two maxima in the correlation function C(D) emerge-for two well defined values of the underlying noise-also in scale-free networks. However, we do not observe the emergence of an additional maximum of C(D) which could be induced only by the topology. As it has been reported in [13], this maximum should appear at low values of the ambient noise, for γ * 3. The existence of a robust oscillatory phase when dynamic synapses are considered and a phase transition between memory and this oscillatory phase at relatively low values of the ambient noise could hidden the appearance of the this maximum due to the existence of a low noise maximum of C(D) around this phase transition. In any case, as it is depicted in Fig. 10, the emergence of two maxima in C(D) is a robust phenomenon also in these complex scale-free network topologies for a wide range of the relevant network parameters such as the exponent of the network degree distribution (Fig. 10A) and the mean connectivity in the network (Fig. 10B). Interestingly is that the low noise maximum seem to start to emerge for values (4). Each curve has been obtained with a single realization of the corresponding network. Other parameters were as in Fig. 1. (B) In this case simulations has been performed considering diluted networks using the same procedure than in panel A, but with less level of synaptic depression, τ rec = 200 ms, and synaptic weights normalized with a factor hki = (1 − δ) 2 N. Also in this case, the corresponding C(D) curves has been obtained for N = 200 and averaging over 10 different networks.
doi:10.1371/journal.pone.0121156.g009 of γ ≳ 3 and values of the mean connectivity hki ≳ 15-20. These values corresponds to realistic ones, since, for instance, most of the actual complex networks in nature have degree distributions with γ between 2 and 3 [25]. Moreover neurons in the brain of mammals have large connectivity degrees and realistic values of the mean structural connectivity in cortical areas of mammals has been reported to be around hki % 20 [26].
Finally, we have consider in our study the case of a complex network with the small-world property. A prominent example of such type of networks is the so called Watts-Strogatz (WS) network [19]. These networks are generated starting with a regular network where each node, normally placed in a circle, has k 0 neighbors. Then, with some probability p r , know as probability of rewiring, each link among nodes in this regular configuration is rewired to a randomly chosen node in the network avoiding self-connections and multiple links among two given nodes. In this way for p r = 0, one has a regular network with p(k) = δ(k − k 0 ) and for p r = 1 one has a totally random network with p(k) being a Gaussian distribution centered around k 0 . Note that for varying p r one always has hki = k 0 . We have placed neurons defined by the dynamics (1) in such WS networks and studied as a function of the underlying noise the emergence of correlations between the network activity and some subthreshold weak signals by means C(D). The results are summarized in Fig. 11 where each C(D) curve has been obtained after averaging over 10 realizations of a network with N = 200 neurons and pure depressing synapses with τ rec = 100 ms. One can see that the appearance of several maxima for C(D) occurs for values of p r ≳ 0.5. More precisely, the low noise maximum does not emerge for low rewiring probabilities, which clearly indicates that the memory phase does not appear for such small values of p r for the whole range of noise D considered here. Also this finding suggests the positive role of long range connections-that only can emerge with high probability when p r is high-for the existence of such low noise maximum in C(D). In fact, the emergence of a memory phase can be only understood when in the network appear such long range connections since the stored memory patterns involve these type of spatial correlations among active and inactive neurons.
Use of more realistic weak signals
In actual neural systems is expected that relevant signals arrive to a particular neuron in the form of a spike train, with relevant information probably encoded in the timing among the spikes. In this sense, the use of a sinusoidal weak current to explore how the system detect it in all cases considered above could not be the most realistic assumption (it could be enough realistic if relevant information will be encoded in subthreshold oscillations instead that in the precise timing of the spikes). To investigate the ability of the system to detect and process the with more realistic weak signals in the presence of noise we have considered the input weak signal as an inhomogeneous Poisson spike train with mean firing rate λ(t) = λ 0 [1 + asin(2πf s t)], being λ 0 , a positive constants. In this way, relevant information is encoded as a sinusoidal modulation of the arrival times of the spikes in the train. Fig. 12A depicts the coherence among mean firing rate in the network and this weak signal (which is shown in the top graph of Fig. 12B). The correlation curve C(D) has been obtained after averaging over 20 realizations of a fully connected network with N = 400 neurons and pure depressing synapses with τ rec = 300 ms. The figure clearly illustrates that also in this more realistic case the system present a strong correlation with the weak signal at different levels of noise at which phase transitions among different non-equilibrium phases appear (see time series for increasing level of noise from top to bottom in Fig. 12B). These are a memory phase where active neurons in the stored memory pattern are strongly synchronized (D = 60pA) (population burst regime), a transition point characterized by signal driven high-activity (Up state)/low-activity (Down state) oscillations (D 1 = 85.4pA), a phase of intrinsic Up/down oscillations (D = 160pA), a critical point toward a non-memory phase characterized by signal driven fluctuations plus thermal fluctuations (at D c = D 2 = 261pA), and a non-memory phase, or asynchronous phase, characterized by constant firing rate with Gaussian thermal fluctuations (for instance at D = 500pA). In fact, in Fig. 12C it is depicted the difference in steady state features of the behavior for the two last cases. That is, at the critical point D 2 the stationary distribution of the resulting mean firing rate has a bias toward positive fluctuations induced at the exact arrival time of the weak signal spikes. This as evidenced by the disagreement between this distribution (red curve in Fig. 12C) and the shaded red area, which represents the best fit to a Gaussian distribution. On the other hand, during the asynchronous state for D = 500pA the same steady state distribution (green line in Fig. 12C) is clearly Gaussian (shaded blue area) without presenting a bias with the timing of signal spikes.
Discussion
We investigated in great detail by computer simulations the processing of weak subthreshold stimuli in a auto-associative network of spiking neurons-N integrate and fire neurons connected to each others by dynamic synapses using different network configurations-competing with a background of ambient noise. In particular, we studied the role of short-term synaptic depression in the efficient detection of weak periodic signals by the system as a function of the noise. Our results show the appearance of several well defined levels of noise at which there is a strong correlation between the mean activity in the network and the weak signal. More precisely, in the range of noise intensities considered in this study, the transmission of the information encoded in the weak input signal to the network activity is maximum when noise intensity reaches two certain values. The maximum or peak appearing at relatively low levels of ambient noise D 1 , corresponds to a transition point where the activity of the network switches from a memory phase-in which a stored memory pattern is retrieved-to an oscillatory phase where the system alternatively is recalling the stored pattern and its anti-pattern in a given aperiodic sequence. Thus, at this level of noise and in the presence of the weak signal this oscillatory behavior of the network activity becomes correlated with the signal oscillating at its characteristic frequency. This fact provides an efficient mechanism for processing of relevant information encoded in weak stimuli in the system since at this transition point, for instance, the system could efficiently recall different sequences of patterns of information according to predefined input signals.
On the other hand, the second maximum which appears at relatively high level of ambient noise, namely D 2 , emerges around a second order phase transition between the oscillatory phase explained above and a disordered or non-memory phase, where the system is not able to recall any information stored in the patterns. Although the resulting network activity around this maximum is highly noisy, it is strongly correlated with the weak signal, appearing a modulation of the noisy activity that follows the signal features (see the case D 2 = 265pA in Fig. 2).
We also studied in detail, the influence that the particular level of synaptic depression could have in the appearance of the two maxima in the correlation between the network activity and the weak stimulus. By changing the recovery time constant of the active neurotransmitters, for instance, we observe that the longer the synapses take to recover the neurotransmitters (larger τ rec ), the lower level of ambient noise is needed to induce to reach the different maxima of the corresponding correlation function (see the Fig. 4).
Furthermore, we have observed that short-term facilitation competing with short-term depression at the synapses induces also additional intriguing effects in the way system responds to the weak stimulus in the presence of noise. Our results reveal, that for larger values of the characteristic time constant characterizing the facilitation mechanism, namely τ fac , the maxima in the correlation C(D) between the network activity and the weak signal appear at lower levels of ambient noise than in the case that only synaptic depression is considered. In addition, we can observe that the correlation at the low noise maximum amplifies when the level of synaptic facilitation increases (see Fig. 5). Both phenomena can be understood taking into account that facilitation favors the retrieval of information in the attractors and their posterior destabilization, which is the origin of the oscillatory phase. Then, an increase of facilitation moves the transition point between the memory phase and the oscillatory phase towards lower values of the ambient noise D. Also, facilitation favors the recovery of the memory attractors with less error, which implies that, when the coherence with the weak signals emerges in the network activity, it is affected by less sources of noise, and therefore, it increases for large facilitation.
We proof as well the robustness of the results reported here, by checking that the appearance of several maxima in the correlation C(D) between the network activity and weak subthreshold stimuli, around some phase transitions points remains for larger number neurons N or number of stored patterns P. Our study reveals that our main findings are independent of the network size (see Fig. 6) and therefore could probably also obtained in actual neural media. Nevertheless, when we study the correlation C(D) for different number of stored patterns, some new effects appear. In fact, the low noise maximum of C(D) tends to disappear when the number of stored patterns is increased. The reason is, that an increase in P induces the appearance of the oscillatory phase at lower values of the ambient noise D and, consequently, the development of this maximum of C(D) occurs at lower noise (see Fig. 7). On the other hand, the appearance of the second high noise maximum of C(D) is not significantly affected by an increase of P (see also Fig. 7).
Another interesting result is obtained when we consider an asymmetric stored memory pattern, that is, a 6 ¼ 0.5, so that there is an excess of 1's over 0's in the stored pattern or vice versa.
In Fig. 8, we can see that by increasing the pattern asymmetry (reducing a), the low noise maximum of C(D) decreases. This is mainly due to the fact that oscillations between the high-activity and low activity states, characterizing the oscillatory phase, tend to be less visible or even disappear-the network activity is quasi clamped in the memory attractor-for more asymmetric stored patterns. In fact, if we consider a < 0.45, the low noise maximum of C(D) drops abruptly and the system is not able anymore to process the information encoded in the weak signal at this level of ambient noise (see Fig. 8).
We have performed also a complete study of how our main findings are influenced by the given network topology. As a first step in this research line, we have considered the case of diluted networks. We built different types of diluted networks by erasing a fraction of links at random in a fully-connected network. In all cases that we have considered, the main results still emerge, that is the existence of several maxima in the correlation C(D) between the network activity and the weak stimulus at some precise levels of noise around non-equilibrium phase transitions. If the fraction of erased synaptic connections is larger, these maxima of C(C) appear at lower level of ambient noise. Moreover, there is a tendency that leads to the complete disappearance of the low noise maximum for strong dilution (see Fig. 9). These findings are due to the fact that, dilution of synaptic links diminishes the memorization and recall abilities of the network (since memory pattern features are stored in these links). The consequence is that, in more diluted networks, the memory phase appears at lower values of ambient noise, and can even disappear in absence of noise for very diluted networks. Secondly, we have consider also the case of complex networks with scale-free properties in the degree distribution and with the small-world property. In all cases there is a wide range of the relevant networks parameters, such as the exponent of the scale-free degree distribution, the mean connectivity in the network and the rewiring probability in the the case of the WS small world network, for which C(D) shows also similar maxima at given noise levels where the system efficiently process the weak stimulus. Moreover this range of parameters is consistent with those measured in actual neural systems.
Note that the consideration here of some complex networks with scale-free topology, introduces node-degree heterogeneity in the system. This fact induces also neuron heterogeneity in the sense that more connected neurons can be more excitable than less connected neurons, so one can have different levels of neuron excitability in the system. However, we have seen that even in this case there is a wide range of model parameters where the relevant phenomenology here still emerges. This tell us that in a more general scenario where different types of neuron heterogeneity are considered, the phenomena reported here also will emerge.
After our analysis in the present work, we conclude that the efficient detection or processing of weak subthreshold stimuli by a neural system in the presence of noise can occur at different levels of this noise intensity. This fact seems to be related to the existence of phase transitions in the system precisely at this levels of noise, a suspicion which is presently been analyzed in greater detail. In the cases studied here, within the range of noise considered, there is a maximum in the correlation C(D) of the network activity with the stimulus which seems to correspond to a discontinuous phase transition (the maximum at relatively low ambient noise) as well as another maximum appearing around a continuous phase transition (the maximum at relatively high noise). The difference in the type of the emerging phase transition determines the way the weak subthreshold stimulus is processed by the neural medium.
We hope to study next the computational implications that each one of theses maxima induces and their possible relation with high level brain functions. In particular, in some preliminary simulations reported in the present work with more realistic Poissonian signals, a detailed inspection of the network activity temporal behavior, compared with the weak signal time series around the low noise maximum, show a strong resemblance with working memory tasks, where relevant information encoded in the input signal is maintained in the network activity during some time, even when the input signal has disappeared. Also in the case of several memory pattern stored in the system and around this low noise maximum, the system could process a given sequence of patterns encoded in the stimulus. Precisely at this level of noise, a nonequilibrium phase emerges characterized by continuous sequence of jumps of the network activity between different memories, which could be correlated with a particular sequence of memories in the presence of an appropriate stimulus. On the other hand, at the second resonance peak is the precise timing between input spikes which are detected and processed by the network activity.
Finally, we mention that it would be interesting to investigate if the relevant phenomenology reported in this work could emerge naturally in actual systems. In fact, recent data from a psycho-technical experiment in the human brain [8] can be better interpreted, using different theoretical approaches and dynamic synapses, considering the existence of several levels of noise at which relevant information can be processed [17,27]. In Fig. 13 it is shown how these experimental data can be also interpreted in terms of the correlation function C(D) obtained within the more realistic model approach reported in this paper, that is, a complex network of spiking neurons. This should serve as motivation to study in depth how neural systems process weak subthreshold stimuli in a more biological and realistic scenario. For instance, one could consider conductance based neuron models instead of the simplified integrate and fire model used here or conceive more realistic stimuli, and other complex network topologies. The last could include, for instance, different type of node degree-degree correlations [28][29][30] or network-network correlations constituting a multiplex structure as a recent work suggests to occur in the Fig 13. Fitting experimental data in the auditory cortex to our model. The experimental data (symbols with the corresponding error bars) reported in [8] are compared as a function of noise with the correlation function C(D) described in Fig. 1 corresponding to a single realization of a spiking network of N = 2000 integrate and fire neurons (red solid line) and dynamic synapses. The experimental data C (in arbitrary units) has been multiplied by a factor 10 4 , and the noise amplitude M (in dB) has been transformed in our noise parameter D using the nonlinear relationship with D 0 = 0.1pA, η = 0.71(pA/dB), M 0 = 50dB and σ M = 140dB.
doi:10.1371/journal.pone.0121156.g013 brain [31]. All these additional considerations could provide some new insights in order to design a possible experiment easily reproducible by biologists to investigate the emergence the phenomena reported here in actual neural systems. On the other hand, the relation of the relevant phenomenology reported in this study with the existence of different phase transitions in our system could be of interest for neuroscientists to investigate the existence of phase transitions in the brain. | 11,677.4 | 2014-10-15T00:00:00.000 | [
"Physics"
] |
Agreement between visual inspection and objective analysis methods: A replication and extension
Behavior analysts typically rely on visual inspection of single‐case experimental designs to make treatment decisions. However, visual inspection is subjective, which has led to the development of supplemental objective methods such as the conservative dual‐criteria method. To replicate and extend a study conducted by Wolfe et al. (2018) on the topic, we examined agreement between the visual inspection of five raters, the conservative dual‐criteria method, and a machine‐learning algorithm (i.e., the support vector classifier) on 198 AB graphs extracted from clinical data. The results indicated that average agreement between the 3 methods was generally consistent. Mean interrater agreement was 84%, whereas raters agreed with the conservative dual‐criteria method and the support vector classifier on 84% and 85% of graphs, respectively. Our results indicate that both objective methods produce results consistent with visual inspection, which may support their future use.
Visual inspection is commonly used to evaluate the results of single-case experimental designs. Although some researchers have reported positive findings (Ford et al., 2020;Kahng et al., 2010;Novotny et al., 2014), many studies have questioned the reliability of visual inspection for identifying behavior change in single-case graphs (Dart & Radley, 2017;DeProspero & Cohen, 1979;Ninci et al., 2015;Wilbert et al., 2021;Wolfe et al., 2016Wolfe et al., , 2018. Therefore, researchers have proposed different supplemental methods to analyze single-case data more objectively (Fisher et al., 2003;Krueger et al., 2013;Lanovaz et al., 2020;Manolov & Vannest, 2019). Objective methods aim to complement rather than replace visual inspection to improve reliability, validity, and decision making; decrease errors; assist with training efficiency; improve communicability of results; and provide quantitative data on the treatment effect. Notably, Fisher et al. (2003) developed the dual-criteria and conservative dual-criteria methods, which have been the topic of a growing number of studies examining their validity (Falligant et al., 2020;Lanovaz et al., 2020;Lanovaz et al., 2017;Wolfe et al., 2018). Although researchers have shown that these methods can adequately control Type I error rates (e.g., Falligant et al., 2020;Lanovaz et al., 2017), studies have also noted that their power could be improved (Fisher et al., 2003;Manolov & Vannest, 2019).
In a recent study on the topic, Wolfe et al. (2018) evaluated agreement between visual inspection and the conservative dual-criteria method with 31 multiple baseline graphs published in peer-reviewed journals. The visual inspection involved 52 expert raters who had authored at least five studies that relied on single-case methodology. Expert raters had to categorize whether there was a change in the dependent variable for each panel and whether the graph showed a functional relation. The researchers found a mean agreement between expert raters of 83% and an agreement of 84% between the raters and the conservative dual-criteria method. However, one of the limitations of the study was that the analyses relied on published data. Published data may differ considerably from clinical data (e.g., less stability), which is why replication may be important (Dowdy et al., 2020;Sham & Smith, 2014).
Recently, some researchers have proposed using machine learning as an objective supplement to visual inspection to analyze single-case graphs . Given that most behavior analysts do not have prior knowledge or training on machine-learning algorithms, we will introduce the topic before explaining the purpose of our study. At its broadest, machine learning involves the use of computer algorithms 1 to detect and use patterns in data. These problems can take on many forms, but Lanovaz et al. (2020) focused on binary classification problems. A binary classification problem has only two possible outcomes-true or false. In Lanovaz et al., this value represented whether a graph showed a clear change (true) or not (false). To conduct classification, the algorithms also need input data on which to make their decisions. Lanovaz et al. used the means, standard deviations, intercepts, and slopes of Phases A and B as input data.
In supervised machine learning, the experimenter provides the input and outcome data to algorithms, which train the models to make predictions. Specifically, the algorithms transform the input data to develop a model that can predict the outcome data. Each algorithm attempts to optimize correct predictions by transforming the data in a different way. The ultimate test of the appropriateness of responding by a model involves comparing the predictions with the correct labels on data that have never been used during training (i.e., generalization). Many different types of algorithms exist to train these models. The paragraph below briefly describes the algorithm that we will test as part of the current study-the support vector classifier.
The support vector classifier uses a function to project the data in a higher dimension and optimizes the split between the two categories. Figure 1 shows a simple example of how a support vector classifier may split data. The upper panel shows a binary outcome (represented by opened and closed data points) that cannot be separated using traditional linear regression. The lower panel shows how projecting the data in a higher dimension allows the data to be separated by a plane (referred to as a hyperplane in higher dimensions). The support vector classifier can make predictions by examining where a novel data point falls within this higher dimension. In a comparison to visual inspection, Lanovaz and Hranchuk (2021) trained models including the previous algorithm to identify changes in single-case AB graphs. Their results showed that the support vector classifier produced lower Type I error rates (fewer false positives) and higher power (fewer false negatives) than the conservative dual-criteria method and visual raters on simulated data. Moreover, the support vector classifier generally agreed more often with visual raters than the conservative dual-criteria method. However, extensions of Lanovaz and Hranchuk remain necessary as the study focused on simulated data, which may differ from their nonsimulated counterparts.
Examining correspondence with visual raters on actual, nonsimulated graphs appears important.
The purpose of our study was to replicate and extend Wolfe et al. (2018) by examining agreement between visual inspection and the conservative dual-criteria method. We extended Wolfe et al. by using a larger number of graphs to examine within-subject error, blinded normalized graphs, and modified reversal/withdrawal designs.
A secondary purpose was to replicate and extend Lanovaz and Hranchuk (2021) by examining correspondence between visual inspection, the conservative dual-criteria method, and the machine-learning algorithms on a novel dataset. We extended both studies by using clinical data and adding a 10-point scale with definitional criteria to visual-inspection ratings.
Data Acquisition
The sample consisted of all clinical cases from an intensive, in-home pediatric feeding treatment program in Australia. The data were not previously published in a prior consecutive controlled case series and each case included a treatment evaluation conducted via single-case experimental design (N = 6). All treatment evaluations used modified withdrawal/reversal designs: ABCAC (n = 2), ABCDEAE (n = 3), and ABCDEFAF (n = 1). The parents consented to their child's data being used for research and this research project was approved by the second author's university research ethics board.
The six dependent variables were: clean mouth percentage (percentage of trials in which the mouth was clean of food/liquid; permanent product measure of swallowing/consumption), latency to clean mouth, latency to acceptance (bite/drink enters the mouth), inappropriate mealtime behavior per minute (head turn, mouth cover, pushing feeder away), expulsion per minute (bite/drink exits mouth), and negative-vocalizations percentage (percentage of session duration with crying or negative statements about the meal). The expected direction of the treatment effect was an increase in clean mouth (i.e., food consumption/ swallowing) and a decrease in the remaining variables. We used data from the complete phases and compared each adjacent phase. For example, one ABCAC treatment evaluation yielded four phase comparisons (A1B1, B1C1, C1A2, A2C2) for each of the six variables, producing 24 total Note. The upper panel shows a two-dimensional graph representing two features: x1 and x2. Closed points represent one category and opened points represent a different category. The lower panel depicts the addition of a higher dimension (z) and a linear plane that separates the two categories. Reprinted with permission from "Machine Learning to Analyze Single-Case Data: A Proof of Concept" by M. J. Lanovaz, A. R. Giannakakos, and O. Destras, 2020, Perspectives on Behavior Science (https:// doi.org/10.1007/s40614-020-00244-0). CC BY 4.0. phase comparisons for this participant. These phase comparisons were then graphed separately (i.e., distinct AB graphs for each variable). This process yielded 198 AB phase comparisons for the six participants (i.e., 33 adjacent phases with six distinct variables per comparison). The data and code are available in online repository at: https://osf.io/2wgtu/.
To construct the 198 graphs, we used Python (Version 3.7.7) to standardize the process. The graph title provided only the expected direction of the change (i.e., -1.0 for decrease and 1.0 for increase), and the vertical axis was labelled generically as "Behavior" with unlabeled tick marks. We removed the values beside the tick marks to standardize the presentation of the graphs and to control for the effects of differing axis values on the analysis of visual raters. As such, the raters had to focus on the relative change from one phase to another to categorize each graph. A phase line separated Phases A and B, and the horizontal axis was labelled, "Session" with the numerical values depicted on the tick marks. These graphs were blinded in that they did not provide the variable label on the vertical axis, the scale of the vertical axis, phase labels or design sequence letters (e.g., ABCAC), or participant information (see graphsforanalysis.pdf in online repository).
Visual Inspection and Interrater Agreement
For visual inspection of each phase comparison (N = 198), raters responded "yes" or "no" to the following question: "Would the change observed from one phase to the next be indicative of functional control of the behaviour in the planned direction (i.e., increase or decrease) if it were reversed and replicated?" Raters also provided a continuous value from 0 (certainty of no effect in the planned direction) to 10 (certainty of an effect in the planned direction), with 0 to 4 corresponding to "No" and 5 to 10 corresponding to "Yes" (Taylor & Lanovaz, 2021). Five doctoral-level behavior analysts with PhDs in psychology (one professor, four practitioners) made ratings based on the blinded AB graphs (see ExpertA.xlsx, ExpertB.xlsx, ExpertC.xlsx, ExpertD.xlsx, and ExpertE.xlsx for complete analyses). The raters had trained at three different universities and were over 10 years postgraduation. Four raters completed pre-and/or postdoctoral training at the same internship and fellowship program and were licensed psychologists. All raters had authored peer-reviewed research publications, and three raters had authored at least five single-case experimental design publications. Four raters were board-certified behavior analysts, and one was the first author.
Conservative Dual-Criteria Method
To remain consistent with Wolfe et al. (2018), we used the conservative dual-criteria method (Fisher et al., 2003). This method involves projecting trend and mean lines from baseline onto the next phase, adjusting the lines by 0.25 standard deviations from the baseline data, counting the number of data points above or below both lines depending on the expected direction of change, and comparing the results to a cut-off value based on the binomial distribution. Our python code repeatedly applied this analysis to each AB comparison (see CDC_analysis.py for code and CDC_Results.csv for results of the analysis).
Machine Learning
For each phase comparison, our code applied a model derived from machine learning to determine whether procedures produced a clear change in the expected direction (see ML_analysis.py for code and ML_Results.py for results of the analysis). Specifically, our analyses involved applying the support vector classifier previously described and developed by Lanovaz and Hranchuk (2021). We selected the support vector classifier because it produced the fewest errors during the analyses. Furthermore, the support vector classifier agreed more frequently with expert behavior analysts than the behavior analysts amongst themselves in a study with simulated data (Lanovaz & Hranchuk, 2021). The support vector classifier used the eight features extracted from the standardized data (mean, standard deviation, intercept, and slope of each phase) and provided output decisions based on the probability of a clear change in the expected direction. Each of these features represented important characteristics of the data used during visual inspection: Mean is related to level change, standard deviation to variability, intercept to immediacy of change, and slope to trend (Lanovaz & Hranchuk, 2021). Probabilities above, or equal to, 0.5 were categorized as a clear change.
As an example of applying this method, assume that we want to categorize a graph with five points in Phase A and seven points in Phase B using a support vector classifier. The first step involves extracting the eight features from our graph. To do so, we transform each data point to a z score to normalize the data by subtracting the mean of the graph from the value of each point and dividing this difference by the standard deviation for the graph. If the purpose of the interventions is to reduce behavior, the z scores must also be multiplied by -1. Once the points have been standardized, the code uses the z scores to extract the mean, standard deviation, intercept, and slope for each phase (eight features). The second step involves providing these eight features to the model previously developed by Lanovaz and Hranchuk (2021). The model transforms the features by adding an extra dimension and places the data for our graph in a multidimensional space (as depicted in three dimensions in Figure 1). The model also has a hyperplane, which separates the multidimensional space in two (no change vs. change). The categorization then depends on the position of the multidimensional value relative to this hyperplane. If we take the exemplar presented in Figure 1 (bottom panel), each graph would produce a single multidimensional point that falls either above or below the plane (determining its category).
Analyses
For each rater and method, we calculated a pairwise percentage of agreement by dividing the number of agreements (i.e., change vs. no change) by the total number of graphs (see Comparison.py). We also calculated Cohen's kappa (Cohen, 1960). Kappa values range from -1 (perfect disagreement) to 1 (perfect agreement), with 0 indicating completely random agreement. Landis and Koch (1977) proposed interpretive guidelines of slight agreement (0-0.20), fair agreement (0.21-0.40), moderate agreement (0.41-0.60), substantial agreement (0.61-0.80) and almost perfect agreement (0.81-1.0). For continuous outcomes, our code computed a Spearman's rho pairwise correlation between visual-inspection ratings and machine-learning probability values for each graph. The conservative dual-criteria method was excluded from the prior analysis because it does not produce a probability value. For each rater and method, we compared average agreement separately based on whether the conservative dual-criteria method and the support vector classifier indicated an effect or no effect. The next step involved a more in-depth analysis of patterns of disagreements between the visual raters, the conservative dual-criteria method, and the support vector classifier (see dis-agreement_analysis.py). First, we created four groups of graphs. The first group of graphs, agreement on visual inspection, included only the graphs for which at least four of five raters agreed on the outcome (n = 175). The second group of graphs, disagreement on visual inspection, involved the remaining graphs for which only three raters agreed (n = 23). The next two groups were subsets of the graphs showing agreement on visual inspection. The third group included graphs on which the conservative dual-criteria method disagreed with the visual raters in cases where four or five raters agreed (n = 15). Similarly, the final group involved graphs with visual agreement with which the support vector classifier disagreed (n = 13). Finally, we compared the agreements and disagreement across different lengths of Phases A and B.
Results
Proportion of correspondence for the binary outcomes are presented in Table 1. Interrater agreement using visual inspection averaged 84.3% (range, 72%-91%) across all raters with kappa of .66 (range, .43-.79). The support vector classifier matched the ratings of the behavior analysts on 85.0% (range, 78%-89%) of graphs with kappa of .67 (range, .55-.75), on average. The conservative dual-criteria method averaged 83.6% (range, 79%-87%) correspondence with visual-inspection ratings with kappa of .64 (range, .58-.72). The support vector classifier corresponded 81.0% with the conservative dualcriteria method with kappa of .59. Table 2 presents the correlations for the continuous outcomes. The correlation coefficient for visual inspection averaged .78 (range, .66-.90) across all raters. In comparison, the support vector classifier had an average correlation coefficient of .74 (range, .60-.79) with the visual raters. Figure 2 presents average agreement based on results from the conservative dual-criteria (top panel) or support vector classifier (bottom panel) indicating an effect or no effect. The support vector classifier found an effect in 35.4% of graphs, and the conservative dualcriteria found an effect in 33.8% of graphs. Agreement was marginally lower when the objective methods indicated effects (conservative dual-criteria: M = 78%; support vector classifier: M = 79%) compared to no effect (conservative dual-criteria: M = 87%; support vector classifier: M = 87%). Agreement was lower when the methods indicated an effect compared to no effect for 13 of the 14 comparisons, with the exception of Rater A with the support vector classifier. Table 3 shows the proportion of graphs with different phase lengths for each of the four groups. When visual raters agreed, most graphs had only three points in Phase A. In contrast, graphs on which visual raters disagreed amongst themselves or with the conservative dual-criteria method were more likely to have 10 or more points in Phase A. For Phase B, more graphs with three points were present when visual raters agreed, but this difference was offset by the higher proportion of graphs with four or five points in Phase B when visual raters disagreed amongst themselves or with the conservative dual-criteria method. A more in-depth analysis of the patterns of agreement and disagreement is available in Supporting Information. Note. For each pair, the proportion of correspondence is on the left of the slash and the kappa value on the right. CDC: conservative dual-criteria, SVC: support vector classifier.
Discussion
Our results showed that average agreement was generally consistent across different methods of analysis when analyzing clinical data. That is, the raters, the conservative dualcriteria method, and the support vector classifier agreed with each other on similar proportions of graphs. Moreover, correlations were high when we asked that the raters provide a score varying from 0 to 10, suggesting that the confidence in their ratings remains generally consistent. This result extends prior research, which has been mostly limited to examining binary classification (i.e., change vs. no change) although there are exceptions (e.g., 100-point scale, DeProspero & Cohen, 1979). A further analysis indicated that agreement was marginally higher when the objective methods suggested that there was no change in a graph. One potential explanation is that our graphs showing no effect depicted more stable patterns than those showing an effect. Notably, some of the graphs showing no effect showed no change from one phase to the next (i.e., two flat lines of equal level), which facilitated agreement. In general, our findings suggest that agreement between the two objective methods and visual raters is no different than agreement between raters themselves.
The agreements observed in the current study were similar to those reported by Wolfe et al. (2018) using published datasets. This result is promising as it suggests that their published graphs served as an acceptable approximation of clinical graphs, or at least that both types of graphs produce similar ratings. In contrast, raters performed better on the clinical graphs than previously reported by Lanovaz and Hranchuk (2021) on simulated ones. One potential explanation is that simulated graphs may exaggerate patterns (e.g., trend) that are infrequent in clinical graphs, which can be difficult to analyze. Alternatively, clinical Note. SVC: support vector classifier.
Figure 2
Average Agreement of Each Analysis When the Conservative Dual-Criteria (CDC) and Support Vector Classifier (SVC) Indicated an Effect or No Effect graphs may show larger, less ambiguous effects that facilitate analysis and make them easier to rate consistently. Regardless of the cause, this result underlines the importance of replicating studies conducted using simulated data with nonsimulated data. The support vector classifier did not match visual inspection more closely than the conservative dual-criteria method, which is inconsistent with results reported by Lanovaz et al. (2020), who examined thesis and dissertation data. This discrepancy may be the result of the Lanovaz et al. procedures that included only two raters. Additionally, having only three points in Phase A was associated with fewer disagreements, which is inconsistent with prior research with simulated data (e.g., Fisher et al., 2003;Lanovaz & Hranchuk, 2021). One potential explanation is that the decisions of behavior analysts are typically response guided. Hence, behavior analysts may stop collecting data early when they observe stability, making the graphs easier to agree on (i.e., less variable).
Prior research has used simulated or published datasets with professors and researchers as raters (e.g., Ford et al., 2020;Lanovaz & Hranchuk, 2021;Wolfe et al., 2016Wolfe et al., , 2018. We used clinical data with varying phase lengths and trends, containing both effective and ineffective interventions, and with varying degrees of variability, and most raters were practitioners, all of which may extend prior research on the topic. We also used a wide range of variables and metrics (i.e., percentage, latency, responses per minute) from pediatric feeding data with varying characteristics, including extinction bursts, extinction-induced variability, as well as delayed effects. In pediatric feeding, consistency of baseline replication data may be lower because of graduated exposure and skill development. Clinical cut-offs and goals must be considered, for example, with latency to swallowing and negative vocalizations. It is important to acknowledge that it is unclear whether and the degree to which our clinical datasets differed from published or simulated datasets in other studies. Given these considerations, visual inspection and objective methods remained comparable with these clinical datasets.
Researchers, professors, and supervisors may use objective-analysis methods to train visual raters more efficiently and to improve reliability and decrease decision-making errors. In clinical practice, decisions are made in real time along with data collection. We used machine learning post hoc on clinical data, but future studies could examine its use in real time during the treatment evaluation to aid in decision making and decrease errors. Similar to entering and graphing data into Excel to perform visual inspection during treatment evaluation, practitioners could enter or paste data into an internet-based application and immediately receive output results inclusive of graphs to assist with decision making in real time. For more advanced applications of machine learning by researchers and professors, tutorials employing free software are already available (e.g., Turgeon & Lanovaz, 2020). As with most research involving nonsimulated data, the main limitation of our study is that agreement does not equate validity. Two methods may agree, but still produce an incorrect rating. That said, it remains important to examine how agreement varies under naturalistic conditions (i.e., with nonsimulated clinical data). A second limitation is that our dataset was too small (few exemplars) to fully examine dataset characteristics (e.g., phase length, variability, trend, effect size) that may impact results. In the future, researchers should conduct replications with larger datasets to isolate the effects of these variables.
An additional limitation is that participants rated blinded graphs without clinically relevant information (e.g., phase and axis labels indicating the behavior, measurement system, scale, and condition/intervention), reducing the ecological validity of our study. This information can be highly important when making decisions about clinical data. Prior research on the impact of such clinically relevant information on interrater agreement has been shown to be minimal. Ninci et al. (2015) did not identify an association between providing clinically relevant information and agreement. Ford et al. (2020) also did not find that providing such information impacted agreement on published research data, but all variables were equally scaled percentages aimed to increase behavior. An important area of future research is to compare agreement for a variety of graphs rated with and without relevant clinical information. Additional research could examine the impact of specific types of clinical information on interrater agreement. Another extension for future research would be to compare agreement to ratings of graphs made for consecutive phases in sequential order for the entire treatment evaluation, approximating the conditions under which behavior analysts typically analyze singlecase data.
Finally, the study relied on quasiexperimental AB designs for analyses, which limits the applicability of the results. AB designs are insufficient to demonstrate experimental control. Our instructions to the raters asked them to imagine the rest of the graph had the AB graph been replicated, producing an ABAB design. The hypothetical instruction (i.e., to decide on functional control as if there were a reversal and replication) removed the real-life variables one may observe in practice. Visual raters analyzing a graph as a whole (rather than as independent AB graphs) may produce different conclusions regarding the presence of functional control. We do not yet know if the machine-learning outcomes would have differed if ABAB graphs were used. We took this approach because AB comparisons serve as the basic unit of many other types of graphs (i.e., multiple baseline, reversal, and changing-criterion designs), and we did not have enough complete ABAB graphs to conduct our analyses. Additionally, the use of AB comparisons is not unique to machine learning and is also used by all other proposed objective methods designed to supplement visual inspection (e.g., Fisher et al., 2003;Manolov & Vannest, 2019). However, it is possible to develop machine learning models to analyze the full range of single-case experimental designs, which is an important future research direction. As Wolfe et al. (2018) focused on multiple baseline designs, future research should address these issues carefully by replicating their study with other types of experimental designs. | 5,986.4 | 2022-04-27T00:00:00.000 | [
"Computer Science"
] |
Spatial Dynamics and Ecosystem Functioning
Ecosystem functioning is dependent upon the way species become distributed across space.
Classical theory of species dynamics in ecosystems is built on the concept of homogeneous, reciprocal interaction. The concept is borrowed from that branch of physics and chemistry dealing with reaction kinetics of molecules in well-mixed gases and liquids. It idealizes individual entities-no longer molecules but now individuals of a species-as interacting with each other or with their predators or competitors in such a way that each individual has an equal likelihood of interacting with every other individual in the system. There is no spatial structure in the system; in fact, space is assumed to be immaterial to system dynamics.
But, any keen observer of nature may cry foul. Unlike the simplified theoretical conception, natural ecosystems are characterized by complex and heterogeneous spatial structure. Plants are clustered into patches. Accordingly, herbivores that eat them and the predators that eat the herbivores become similarly arranged in space [1]. This observation was not lost on ecological theorists who in the 1980s and 1990s began to address spatial heterogeneity more explicitly [1,2]. This new ecological theory, essentially built on additional concepts from physics and chemistry, (e.g., [3]), partitions system dynamics effectively into two phases: a reaction phase in which individuals of species interact locally and a diffusion phase in which individuals disperse after local interactions take place. Dispersal is activated (a positive feedback) in the reaction phase by factors like intense competition or predation risk that causes individuals to move to less competitive or safer locations. Dispersal becomes inhibited (a negative feedback) whenever individuals' efforts to relocate are rebuffed by individuals already occupying the new locations.
This core reaction-diffusion mechanism has been used to develop two distinct classes of theory for species and ecosystem dynamics. The theories differ fundamentally in assumptions about spatial structure and in the way activation and inhibition feedbacks operate on a landscape. One kind of theory (known now as theory of meta-populations and meta-communities) extends classical theory by imposing spatial patch structure as a physical condition of the system ( Figure 1) and recasts parameters describing population birth and death processes in terms of spatial movement processes [4]. It then examines the consequences of that structure on system dynamics through analyses of within-patch species interactions and inter-patch species dispersal [4]. The other kind of theory (known now as theory of self-organized systems) starts with a clean slate and examines how spatial structure emerges as a consequence of species interactions and movements [5]. In the meta-system theory, diffusion across a landscape is inhibited by local, within-patch negative feedback ( Figure 1). That is, positive and negative feedbacks operate within local patches [4]. In the self-organized systems theory, the positive feedback is local, and negative feedback manifests itself as tension among local clusters of species that prevents further dispersal ( Figure 1). Landscape-scale patch structure thus emerges in self-organized systems theory as a consequence of local positive feedback and landscape-scale inhibitive or negative feedback [5].
Meta-systems theory has gained considerable traction in ecology because it resonates with our intuitive understanding of the current state of many ecosystems [6,7]. For example, small ponds represent natural, discrete patches within terrestrial landscapes, leading to characteristic patterns of local and landscape-scale species abundances and ecosystem functioning [8]. Human activities have also artificially imposed spatial structure onto many ecosystems by fragmenting formerly continuous landscapes into discrete habitat patches. This has led to predictable transformation of species assemblages and their associated functioning owing to differential abilities of species to reside within patches of a particular size and to disperse among them [9]. Meta-systems theory has clear and profound implications for the conservation of biodiversity [6,7,10].
The applicability of self-organized systems theory tends to be less clear because it is a more abstract construct than meta-systems theory. Moreover, there is divided opinion as to whether or not the predicted emergent dynamics based on fairly simple mathematical rules of species engagement are robust to changes in assumptions that reflect real-world ecological conditions [11]. This debate, however, continues to be largely academic because the ultimate arbiter-a rich body of empirical evidence from explicit tests of the theory-has not yet been amassed [5]. There certainly are many putative examples of self-organized, large-scale patterns, owing in good part to advances in satellite imagery [5]. And, there have been efforts to resolve mechanisms driving self-organized pattern formation in species populations [12,13]. But, evidence that such population-level spatial organization influences whole-ecosystem functioning remains a missing piece of the puzzle.
Testing self-organized systems theory in a whole ecosystem context in nature is not an enterprise for those given to do research yielding quick and simple answers. Unlike meta-systems theory there is no easy and fast way to delineate system structure. Patch boundaries of self-organized systems tend to be fuzzy [5], requiring sophisticated statistical techniques to resolve spatial patterning. The success of this kind of analysis is predicated on obtaining an extensive yet finely resolved data set. Before doing that, however, one must decide what a patch is and what drives the patch structure. For example, does patch structure arise from spatial gradients in soil nutrient concentrations that then cause spatial clumping of plants and the build-up of food chains? Or, does the spatial structure emerge from predator-prey interactions that cause species to emanate away from a local point source? More likely, it is a combination of the two, and so, their relative importance must be resolved through strategic experimentation and sampling of biota and physical conditions. Finally, one must find the points of spatial tension and resolve the mechanisms that delineate patch boundaries. The complexity can be perplexing, leading to the ultimate a priori question: where does one begin?
In this issue of PLoS Biology, Pringle et al. [14] answer these questions while undertaking a herculean effort to explain the spatial patterning of an African savanna ecosystem. Breakthroughs in our understanding of ecological systems often come from having good understanding of the natural history of the system in question and paying attention to the clues that nature provides [15]. Indeed, Pringle et al. [14] capitalize on important prior natural history clues that there is a tendency for termites to exhibit locally non-overlapping foraging territories around their colonies [16]. Termite movement away from the colony seems to be activated by the need to find food, and movement is eventually inhibited when individual termites encounter and compete with members of another colony [16]. Amazingly, this behavior may lead to quite regular spacing of termite colonies across the landscape [14].
More important to the structure and functioning of the entire savanna ecosystem is that the grass-covered mounds created by termite colonies are sandier than surrounding soils. This allows greater water infiltration, aeration, and nutrient build-up on the mounds relative to surrounding soils [14]. Termite mounds are effectively moisture and nutrient ''oases'' within a dryland matrix.
Concentrated moisture and nutrients accordingly promotes tree species growth at the colony margins, with the thickest trees at the immediate mound perimeter and gradually thinner trees emanating away from the perimeter and intergrading with thin trees emanating from other mounds [14]. The nutrient supplied by the mounds to the trees also fosters the build-up of food chains comprised of insect herbivores and spider and lizard predators of the insects.
This emergent structure also leads to parallel activation and inhibition dynamics among species in the food chain [14]. The herbivorous insects are highly concentrated on thicker trees near the mounds and decrease in abundances on thinner, distant trees. Lizards and spiders are likewise dispersed, and field experimentation showed that this was partly because thicker trees offered better hunting sites and partly because prey density was highest on thick trees that tended to be closets to the mounds. A related study [17] shows that nitrogen content of plants is higher near termite mounds, too, meaning that both food quantity and quality is higher near mounds, which likely contributes to all of these patterns.
The combination of nutrient supply for primary plant production, and the translation of plant nutrients into herbivore and predator secondary production mean that termite mounds also become hotspots of ecosystem productivity. These hotspots are preserved through the interplay between activation and inhibition of spatial movement of all of the components of the ecosystem. Thus, the landscape displays a regular pattern of high and low productivity that mirrors the regular patterning of termite mounds. Further statistical modeling suggests that this form of heterogeneity results in greater net productivity than would be expected if the termite mounds were irregularly clustered across the landscape [14]. This derives from the statistical property that when patches are regularly spaced, no single point is very far from a mound, so the productivity of all points when averaged is greater than would be the case when patches are highly clustered or randomly dispersed [14]. Of course, it would be exceedingly difficult to execute the definitive experimental test of this assertion, which would require rearranging the spatial configuration of the termite mounds. This is perhaps the biggest Achilles heel of any empirical effort to test self-organized systems theory within a realworld ecosystem. Nonetheless, the study [14] is exemplary in that it comes the closest yet to satisfying empirical conditions needed to demonstrate the existence of a self-organized ecosystem [5]. By amassing animal behavioral, animal population, and ecosystem data, the authors thus provide a reasonably coherent picture of the spatial mechanisms driving ecosystem structure and functioning.
The intriguing thing is that if, instead of focusing down on a neighborhood of termite mounds, we took a bird's-eye aerial view of the landscape, we could be fooled into concluding that a savannah is a fairly homogeneous landscape. And indeed it is quite plausible to draw such a view of system structure given increased and widespread application of modern satellite imagery to study patterning of savannas and other ecosystems [5,18,19]. Then again, if we focused too closely on a termite mound and just its immediate surroundings, our perspective might become so overwhelmed by highly resolved local species interactions that we risk not seeing the spatial patterning at all. The art in empirically resolving the structure and dynamics of self-organized ecosystems is deciding on the appropriate scale of resolution for study [1,5]. This is not a trivial exercise because it requires years of intrepid field research aimed at understanding both the natural history of a system, and measuring spatial pattern and dynamical processes at many different but complementary spatial perspec- tives. This may well be the single most important reason why more field evidence for self-organized systems is not yet available.
The study by Pringle et al. [14] nicely shows that theory of selforganized systems is not merely a virtual computer-world phenomenon. There is indeed a basis in ''robust reality'' [11]. Like meta-systems theory, the implications of self-organized systems theory for conservation, as demonstrated by the study [14], are profound. In this particular case, a very non-charismatic species of fungus-cultivating termite that lives predominantly below-ground seems to create biophysical and biotic conditions that lead to the evolution of aboveground trophic structure and parallel self-organized dynamics in the higher trophic levels. This would, in turn, suggest that the loss of any one of the parts would cause the parallel dynamics sustaining overall ecosystem functioning to quickly collapse. This reinforces the need to consider how the nature of species interactions link to whole-ecosystem functioning when developing strategies to conserve biodiversity [7,15,20]. | 2,660 | 2010-05-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |