Strengthening of Steel Shear Wall by Glass Fiber Reinforced Polymer

The steel shear wall (SSW) is a relatively new seismic system that has performed well against lateral loads in past earthquakes and in numerical and experimental studies. Although elastic plate buckling provides post-buckling capacity, it reduces energy absorption, and strengthening with polymer fibers has therefore been proposed in recent years as a new method. In this paper, the performance of steel shear walls strengthened with glass fiber reinforced polymers (GFRP) is evaluated, and several models are analyzed nonlinearly in the ANSYS software. The results show that the polymer cover increases stiffness, shear capacity, and energy absorption, but slightly decreases ductility. Regarding fiber orientation, the results show that at an angle of 60 degrees energy absorption increases slightly, at 90 degrees it decreases by about 15 percent, and at 30 and 45 degrees it varies with steel material and span length.

Introduction

In recent years, with urban development and the growth of construction, and with attention to the earthquake problem, many seismic systems, each with its own features, have been used to control lateral loads. The selection of a lateral load resisting system depends on the load combination, the structural behavior, the path of gravity loads, the geographic area, the construction method, the structural geometry, code limitations, the maximum displacement, and so on. Many structural systems have been proposed and applied, for example moment frames, various braced systems, shear walls (concrete, steel, and composite), active and passive control systems using dampers, and resisting systems in unreinforced masonry buildings. Stiffness, strength, ductility, energy absorption, and proper system behavior during an earthquake are among the most important parameters in selecting a structural seismic system. Researchers are always looking for an ideal system to resist lateral loads that, in addition to high stiffness and strength, also offers high ductility and energy dissipation. The steel shear wall is a lateral load resisting system that has recently attracted the attention of many researchers and engineers. The system consists of a vertical steel infill plate connected to the surrounding beams and columns and installed in one or more bays along the full height of the structure to form a cantilevered wall (Figure 1). This system has been used in steel buildings for about four decades because of its high stiffness and ductility, suitable energy dissipation, speed of construction, and lower structural weight compared with other systems, and its use continues to grow. Compared with moment frames, it yields about 50% savings in building steel (Astaneh-Asl, 2001). Steel plate shear walls are simple to construct, and there is no particular complexity in the system; engineers, technicians, and technical workers can therefore build it with their existing technical knowledge, without having to learn new skills. The required workmanship precision is at the level of conventional steel construction, which results in a much higher margin of constructional safety compared with other systems. Because of its simplicity and the possibility of shop fabrication and site installation, erection is fast and construction costs are reduced significantly.
In the steel plate shear wall system, because the material and the connections are spread over the panel, stress redistribution is much better than in other seismic systems such as frames and braces, whose materials are discrete and whose connections are concentrated; the system behavior is therefore more appropriate, especially in the plastic zone.

Structures built with steel plate shear walls and studies in this field

Academic and laboratory research on the steel plate shear wall system began in the seventies, and the system has been used in important buildings in several developed countries. Steel shear walls were first used in Japan in 1970 in new buildings, and they were later used for seismic retrofit of existing buildings in the United States (Astaneh-Asl, 2001). In some cases the steel shear walls were covered with concrete to form a kind of composite shear wall. Important buildings constructed with steel shear walls, and the studies associated with them, include the following.

Japan

The 20-story Nippon office building (1970) and the 51-story Shinjuku Nomura high-rise in Tokyo, as well as the 35-story City Hall tower in Kobe, were built using steel shear walls (Astaneh-Asl, 2001). In 1973, Takanashi performed the first major research on a one-story steel shear wall. The experimental results indicated that the samples behaved in a ductile manner, and the test results were in good agreement with the von Mises yield criterion in pure shear. He also tested two samples of two-story steel shear walls, and the results were used in the design of a tower in Japan; in these cases, too, the test results were very close to the results of the von Mises theoretical formulas. The researchers concluded that plate girder rules and theoretical formulas can be used to obtain the stiffness and strength of steel shear walls (Takanashi et al., 1973). In 1996, Torii et al. in Japan studied the performance of steel shear walls with low-yield-point steel for use in several towers in Japan; the results of this research were used in the design and construction of the Yamaguchi tower (1998) (Torii et al., 1996; Yamaguchi et al., 1998).

United States

The 30-story Hyatt Regency in Dallas, Texas, the 6-story Sylmar hospital in Los Angeles, California, the 52-story Century residential building in San Francisco, California, the 23-story federal courthouse in Seattle, Washington, the strengthening of the Oregon State Library concrete building, a health care building in Charleston, and the 16-story H. C. Moffit hospital are among the buildings that used steel shear walls in their construction (Astaneh-Asl, 2001). In 1993, Elgaaly and Caccese conducted extensive tests under monotonic and cyclic loading. They reported that if a relatively thick steel plate is used in the wall, the system resistance is governed by instability caused by buckling of the adjacent column, and the resistance does not change much with increasing plate thickness (Elgaaly et al., 1993). From 1998 to 2002, Professor Astaneh-Asl and Zhao at Berkeley studied the seismic behavior of steel and composite shear walls under cyclic loading in a series of laboratory investigations and proposed design coefficients (Astaneh-Asl, 2001). In 2003, Behbahanifard performed numerical and experimental studies on a three-story model in which the upper flange of the first-floor beam was damaged. The specimen showed high elastic stiffness, ductility, energy dissipation capability, and hysteresis loop stability.
Numerical modeling showed that the initial plate imperfection has an important effect on the stiffness of the shear panel but a negligible effect on its shear capacity. He also found that increasing the gravity loads decreases the stiffness, overturning moment, shear capacity, and ductility (Rezai et al., 1999; Behbahanifard et al., 2003).

Canada

The steel shear wall system was used in the construction of the ING and Canam Manac Group buildings in Quebec and a 25-story building in Edmonton. In 1983, Timler and Kulak tested two-story steel shear wall specimens without stiffeners (Timler et al., 1983). The results showed the high ductility and strength of this system, and in the same year Thorburn et al., based on these test results, proposed an equation to determine the inclination angle of the tension field and verified its precision with several experiments (Timler et al., 1983). In 1996 and 1998, Driver et al. tested a 4-story steel shear wall specimen under cyclic loads. Although the specimen failed not by yielding of the steel plate but because of stress concentration at the column base, the hysteresis response curves of the wall imply an overstrength factor of about 1.3 and a ductility factor of more than 6 (Driver et al., 1998).

Europe

Among the buildings built with this system, the 32-story Bayer-Hochhaus in Leverkusen, Germany, can be mentioned. In 1992, researchers including Sabouri-Ghomi and Roberts in Britain studied built-up steel shear walls with and without openings; an interesting result of this series of experiments is the investigation of the effect of openings on the stiffness and strength of the shear wall (Sabouri-Ghomi et al., 1992). During 2001-2012, Iranian researchers at Amirkabir University carried out numerical and experimental investigations of steel and composite shear wall behavior. In this research, a new system of composite shear walls with carbon fiber reinforced polymers was studied extensively, both numerically and experimentally, and the behavior of steel shear walls composited with a reinforced concrete layer and fiber polymers was investigated with 224 numerical models. In addition, a new system of steel plate shear walls with doubled plates was introduced and reviewed (Hatami et al., 2005).

Fiber Reinforced Polymers

Among the innovations and techniques in structural strengthening, FRP (fiber reinforced polymers/plastics) materials play a special role; in the opinion of some experts, FRP should be called the construction material of the third millennium. In addition to their use in strengthening concrete structures, their use in strengthening steel structures has been seriously proposed over the past decade. Polymer composite plates are formed from two separate constituents: polymer fibers and resin. The ultimate strength of polymer fibers along their length is very high; typical fiber types include carbon, glass, and Kevlar. The resin binds the fibers, transfers load, and prevents cracking in the structure; it is usually made of epoxy, polyester, or similar materials. Together, the fibers and resin form a plate called a lamina (Jones, 1999).
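Because a lamina is far stiffer along the fibers than across them, fiber orientation strongly affects the behavior of a strengthened panel. The following is a minimal Python sketch of this anisotropy using the standard rule-of-mixtures estimates; the E-glass and epoxy moduli and the volume fraction are typical assumed values, not data from the paper.

# Rule-of-mixtures estimate for a unidirectional GFRP lamina.
# Illustrative values (assumed): typical E-glass fiber and epoxy moduli.
E_fiber, E_resin = 72.0, 3.5    # GPa
v_fiber = 0.55                  # fiber volume fraction (assumed)

E1 = v_fiber * E_fiber + (1 - v_fiber) * E_resin            # along the fibers
E2 = 1.0 / (v_fiber / E_fiber + (1 - v_fiber) / E_resin)    # across the fibers

print(f"E1 = {E1:.1f} GPa, E2 = {E2:.1f} GPa")  # about 41 GPa vs. 7 GPa

The roughly fivefold difference between the longitudinal and transverse moduli is why the fiber angle relative to the panel's tension field matters so much in the orientation study below.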
The use of these composites as external reinforcement has attracted particular attention and opened a new window for engineers, owing to their unique characteristics: high strength-to-weight ratio, lightness (about 20% of the weight of steel), high chemical resistance against corrosion compared with steel and concrete, insulation from electric and magnetic fields, ease of transport and storage, and no disruption to the use of the structure during installation. These techniques are also attractive because of their rapid implementation and low costs. These properties can be effective in improving the characteristics of a composited steel shear wall, such as strength, stiffness, ductility, and energy absorption. To investigate these issues, a numerical study was performed with the ANSYS finite element software, discussed in more detail below.

Modeling with the software

In this paper, ANSYS finite element software is used to analyze the samples nonlinearly. To verify the modeling accuracy for steel shear walls composited with fiber polymers, an experimental test, whose geometry is shown in Figure 2, was modeled and analyzed with the software. The beam and column profile is 2IPE200 + 2PL150x12. The test was conducted at Amirkabir University and belongs to the series of investigations of composited steel shear wall behavior conducted by Farzad Hatami (Hatami et al., 2005). Comparison of the load-displacement diagrams shows the high accuracy of the modeling. The models were then built with actual dimensions and with different fiber orientations (Table 1). In all models, the steel plate thickness is 7 mm. The beam and column specifications of the samples, in mm, are shown in Figure 3.

Analysis of the modeling results

Comparison of shear walls with the reinforced ones: To compare the behavior of steel shear walls with the reinforced ones, two shear walls of ordinary steel were modeled and analyzed; polymer fiber covers were then attached to both sides of the steel plates and the models were re-analyzed. Figure 4 shows the load-displacement diagrams of the steel shear walls compared with those reinforced on both sides with 2 mm thick glass fiber polymer covers. Valuable information can be inferred from these figures. Examining the diagrams shows that, if there is no lateral displacement limit, the capacity of the reinforced shear wall keeps rising, whereas the bare steel shear walls behave in an essentially elastic-perfectly-plastic way. Figures 5 to 8 plot the seismic parameters of the samples as bar graphs, comparing the steel shear wall parameters with those of the GFRP (2 mm thick) reinforced ones. Comparison of the graphs shows that the ductility of the reinforced steel shear walls (by an almost equal ratio for all samples) is always less than that of the unreinforced ones. The glass polymer covers increase stiffness, shear capacity, and energy absorption. The covers increase the shear capacity of all samples, but the effect is larger for the system with ordinary steel than for the system with mild steel. The polymer cover has little effect on the resistance coefficient of the reinforced steel shear wall. Sample stiffness increases with the polymer cover, but this increase depends on the span length: the longer the span, the higher its rate of increase.
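The seismic parameters compared in Figures 5 to 8 (stiffness, energy absorption, ductility) are all derived from load-displacement curves. A minimal sketch of how such parameters can be extracted from a curve, with entirely illustrative numbers rather than data from the paper:

import numpy as np

# Hypothetical load-displacement curve of a shear wall specimen
# (displacement in mm, load in kN); values are illustrative only.
disp = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 30.0, 50.0])
load = np.array([0.0, 300.0, 550.0, 780.0, 900.0, 950.0, 960.0])

# Initial (elastic) stiffness: slope of the first branch of the curve.
stiffness = load[1] / disp[1]          # kN/mm

# Energy absorption: area under the load-displacement curve.
energy = np.trapz(load, disp)          # kN*mm

# Ductility: ultimate displacement over an (assumed) yield displacement.
disp_yield = 4.0                       # mm, read off the curve (assumed)
ductility = disp[-1] / disp_yield

print(f"K = {stiffness:.1f} kN/mm, E = {energy:.0f} kN*mm, mu = {ductility:.1f}")

Energy absorption is the area under the curve, which is why a reinforced wall whose capacity keeps rising can absorb more energy even when its ductility ratio is lower.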
Investigation of span length on sample behavior: Increasing the panel span increases the panel slenderness. The behavior of both the steel shear wall and the reinforced one is influenced by the slenderness ratio. The load-displacement diagrams of steel shear walls with different spans are drawn in Figure 9, from which the changes in stiffness, energy absorption, and other parameters can be observed. Bar graphs are drawn to compare numerically how the seismic parameters change with span length.

Load-displacement diagram of the reinforced steel shear wall

Examination of the results and diagrams in Figure 9 shows that increasing the span length has no serious effect on the strength coefficient or ductility of the steel shear wall, but it increases the ductility of the reinforced shear wall by 12%. The largest effects of span length are on energy absorption and shear capacity, which increase by almost 50% and 30%, respectively.

Investigation of fiber orientation: No experimental studies have yet been conducted on the effect of fiber orientation on the behavior of reinforced shear walls. The seismic parameters of reinforced steel shear walls with fibers in the longitudinal and transverse directions are plotted in Figure 10; these diagrams are drawn after averaging over the samples. At an angle of 60 degrees, energy absorption increases slightly (2-4 percent); at 90 degrees it decreases by about 15 percent; and at 30 and 45 degrees it varies with the steel material and the span. At 30 degrees, as the span increases from 5 to 6 meters, the gain in energy absorption grows from 7 to 13 percent, whereas at 45 degrees the gain is 13 percent for a 5-meter span and 7 percent for a 6-meter span. As Figure 11 shows, the stiffness of a shear wall reinforced on both sides with ordinary fibers is influenced by fiber orientation, span length, and steel type. The ductility diagram of the reinforced steel shear wall is given in Figure 12.

Calculation of the force modification factor: After averaging the reinforced and ordinary steel shear wall models, the force modification factors are given in Table 2.

Conclusions

In this paper, the seismic behavior of steel shear walls reinforced with fiber polymers at different angles was investigated, and several models were analyzed nonlinearly to estimate the seismic parameters. The results are as follows:
• If there is no lateral displacement limit, the capacity of the reinforced shear wall keeps rising, whereas the bare steel shear walls behave in an essentially elastic-perfectly-plastic way; this behavior demonstrates the benefit of the fibers in improving the seismic parameters and behavior of the system. The polymer cover thus increases stiffness, shear capacity, and energy absorption.
• Although the polymer fibers increase stiffness, shear capacity, and energy absorption, the ductility of reinforced shear walls is always less than that of the unreinforced ones.
• The polymer cover has little effect on the resistance coefficient of the reinforced steel shear wall. Sample stiffness increases with the polymer cover, but this increase depends on span length: the longer the span, the higher its rate of increase.
• Increasing the span length has no serious effect on the strength coefficient or ductility of the steel shear wall, but it increases the ductility of the reinforced shear wall by 12%. The largest effects of span length are on energy absorption and shear capacity, which increase by almost 50% and 30%, respectively.
• At an angle of 60 degrees, energy absorption increases slightly (2-4 percent); at 90 degrees it decreases by about 15 percent; and at 30 and 45 degrees it varies with the steel material and the span. At 30 degrees, as the span increases from 5 to 6 meters, the gain in energy absorption grows from 7 to 13 percent, whereas at 45 degrees the gain is 13 percent for a 5-meter span and 7 percent for a 6-meter span.
• The shear capacity of the shear wall reinforced with fibers on both sides is 10% higher than that obtained with fibers at the other angles.
Fuzzy-Based Dynamic Time Slot Allocation for Wireless Body Area Networks

With the advancement of networking, information, and communication technologies, wireless body area networks (WBANs) are becoming more popular for medical and non-medical applications. Real-time patient monitoring applications generate periodic data at short time intervals, and in life-critical applications the data may be bursty. The system therefore needs a reliable, energy-efficient communication technique with bounded delay. In such cases, the fixed time slot assignment in medium access control standards results in low system performance. This paper presents a dynamic time slot allocation scheme in a fog-assisted network for a real-time remote patient monitoring system. Fog computing is an extension of the cloud computing paradigm that is suitable for reliable, delay-sensitive, life-critical applications. In addition, to enhance network performance, an energy-efficient minimum cost parent selection algorithm is proposed for routing data packets. The dynamic time slot allocation uses fuzzy logic with energy ratio, buffer ratio, and packet arrival rate as input variables. Dynamic slot allocation eliminates time slot wastage and excess delay in the network and provides a high level of reliability with maximum channel utilization. The efficacy of the proposed scheme is demonstrated in terms of packet delivery ratio, average end-to-end delay, and average energy consumption, compared with the conventional IEEE 802.15.4 standard and the tele-medicine protocol.

Introduction

Wireless body area networks (WBANs) are growing rapidly due to recent advances in electronics, intelligent sensors, and wireless communication technologies [1]. A WBAN is a type of wireless sensor network [2] in which a number of nodes are worn on or implanted within the human body to collect vital health signs. It can also be considered a subclass of wireless sensor networks (WSNs) [3,4] with certain specific characteristics that make the research challenges more exigent [5]. The sensors collect data periodically or aperiodically and route them through different body controller nodes using various routing protocols. A geographic delay tolerant network (DTN) routing protocol is presented in [6], whose primary objective is to improve routing efficiency and reduce the chance of selecting inappropriate nodes for routing; greedy forwarding, perimeter forwarding, and DTN forwarding modes are used for efficient routing towards the destination. The paper [7] explained the need for programming frameworks and middleware for collaborative body sensor networks (CBSNs), whose system requirements are more complex than those of star-topology body sensor networks (BSNs), and presented a novel collaborative signal processing in node environment (C-SPINE) framework for CBSNs. It was developed as an extension of the Signal Processing In Node Environment (SPINE) middleware discussed in [8], which was designed to meet the high-level software abstraction and hardware constraints of single BSNs. The medical applications of WBANs include daily monitoring of human vital signs and detection of chronic diseases, so that treatment can benefit the patient at an early stage. The challenging tasks in patient monitoring systems are achieving high throughput, bounded delay, and low energy consumption.
However, the existing protocols are not efficient enough to meet these challenges. Body sensors must be low-power devices with guaranteed reliability, since battery replacement or recharging is difficult; this necessitates an energy-efficient and reliable MAC protocol. The IEEE 802.15.4 MAC is a low-power standard with minimum delay requirements that is widely used in WBANs. However, it is less efficient in terms of delay, throughput, and energy consumption for periodic patient monitoring applications, and in the case of unexpected events or life-critical applications its channel and bandwidth utilization are poor. The two major channel access methods used in WBANs are carrier sense multiple access with collision avoidance (CSMA/CA) and time division multiple access (TDMA). In CSMA/CA, the nodes compete for the channel before data transmission. In TDMA, each node transmits during its assigned time slot: the total time is divided into equal time slots organized as superframes, and within a superframe a node can transmit data in its own slot. In the IEEE 802.15.4 standard, the contention access period (CAP) uses CSMA/CA, and the contention free period (CFP) uses guaranteed time slot (GTS) allocation based on TDMA [9]. Equal time slots have several shortcomings for life-critical WBAN applications. The first is bandwidth under-utilization: nodes may use only a small portion of the assigned slot, leading to slot wastage, that is, empty slots in the CFP. The second is the limited number of GTS slots, which affects medical scenarios where several life-critical events occur simultaneously; the standard provides only seven GTS slots, which cannot accommodate multiple emergency events in time. Another limitation is the fixed time slots in the superframe, which fail during urgent situations.
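To make the fixed-slot limitation concrete, the beacon-enabled superframe timing of IEEE 802.15.4 can be computed from the standard's constants, as in the short Python sketch below; the beacon order (BO) and superframe order (SO) values chosen here are arbitrary examples, not values from the paper.

# Superframe timing in beacon-enabled IEEE 802.15.4 (2.4 GHz O-QPSK PHY).
# Constants are from the standard; the BO/SO values are example inputs.
A_BASE_SLOT_DURATION = 60           # symbols
A_NUM_SUPERFRAME_SLOTS = 16         # equal slots per superframe
A_BASE_SUPERFRAME_DURATION = A_BASE_SLOT_DURATION * A_NUM_SUPERFRAME_SLOTS  # 960 symbols
SYMBOL_TIME = 16e-6                 # seconds per symbol at 2.4 GHz

def superframe_timing(BO: int, SO: int):
    """Return (beacon interval, superframe duration, slot duration) in seconds."""
    assert 0 <= SO <= BO <= 14
    BI = A_BASE_SUPERFRAME_DURATION * (2 ** BO) * SYMBOL_TIME
    SD = A_BASE_SUPERFRAME_DURATION * (2 ** SO) * SYMBOL_TIME
    slot = SD / A_NUM_SUPERFRAME_SLOTS   # the CFP may hold at most 7 GTSs
    return BI, SD, slot

BI, SD, slot = superframe_timing(BO=6, SO=4)
print(f"BI={BI*1e3:.2f} ms, SD={SD*1e3:.2f} ms, slot={slot*1e3:.2f} ms")

With 16 equal slots per superframe and at most seven of them assignable as GTSs, any burst of emergency traffic beyond that budget must wait for a later superframe, which motivates the dynamic allocation studied in this paper.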
With the introduction of the internet of things (IoT) and cloud computing [10,11] paradigms into the field of medical services, a number of healthcare systems have been developed to provide fast and reliable treatment to patients, including the sharing of medical information among medical institutions, family members, and related personnel [12]. IoT-based health applications are not sufficient for pervasive monitoring, which requires additional analysis and decision-making capabilities. To overcome this shortcoming, IoT-enabled cloud-assisted monitoring services emerged; however, these also suffered from discontinuities in network connectivity [13]. Hence, an extension of cloud computing called fog computing [14], or fogging, is now used, in which computation can be done in any node, called a fog node, at the edge of the personal area network (PAN). Like cloud nodes, fog nodes are prone to failures; however, the impact of a failure is smaller and easier to handle for fogging than for the cloud [15]: a cloud failure affects the entire hospital, whereas a fog failure is restrained to a smaller area such as a hospital ward or block. In short, fog computing can overcome the limitations of cloud computing, including high bandwidth requirements, dependency on network infrastructure, and unpredictable cloud response times in emergency cases. Fogging has a shorter response time, as data processing is carried out at the edge of the network, close to the source, while the data are secured within the network.

Figure 1 shows an example of an in-hospital, block-wise health monitoring setup that uses the fog computing concept [16]. Each block has a number of patients and a central coordinator. The central coordinator (a battery-operated node) acts as an edge computing device, or fog node: it classifies the sensor signal into urgent, semi-urgent, and normal data using simple mathematical models and a threshold value, decides accordingly whether to send the data to the base station immediately, and then sends the data directly to the base station (BS). The health monitoring system usually consists of a number of sensor nodes worn on the patient's body, such as electrocardiogram (ECG), electroencephalogram (EEG), temperature, pressure, and glucose sensors, and a body controller. These sensor devices collect data from the body and send them to the body controller, which is placed at an appropriate position on the body. The body controller aggregates the collected information and sends it towards the central coordinator through tree-based routes. The advantage of fog computing is that the central coordinator, or fog node, forwards only the valid information and drops unnecessary sensor information, thereby reducing the complexity of data storage and computation, and it makes decisions quickly. For example, in speech analysis for Parkinson's disease, the audio recordings are not merely forwarded; instead, the recordings are analyzed locally and only the necessary metrics are transmitted [15]. The fog architecture thus minimizes delay, which makes it suitable for various medical applications.

In this paper, a fog-based architecture and dynamic slot allocation are considered to address the discussed challenges of WBANs. The performance of an in-hospital patient monitoring system is enhanced by a QoS-efficient next-hop selection algorithm and a fuzzy-based dynamic slot allocation scheme. The proposed methods are designed without modifying the superframe structure of the IEEE 802.15.4 MAC standard. The main contributions of the paper are:
• A fog-based WBAN for a real-time patient monitoring system, consisting of a sensor layer, a body controller layer, and a central coordinator layer.
• A minimum cost parent selection (MCPS) algorithm for best parent selection and a link cost function for efficient routing. The best parent node for the tree formation is selected by comparing the link cost function, the number of hops, and the distance between nodes.
• A dynamic time slot (DTS) allocation technique based on fuzzy logic that enhances packet delivery and reduces end-to-end delay. The time slot of each node is allocated dynamically based on parameters such as the available energy of the node, the buffer availability, and the packet arrival rate.
The remainder of the paper is structured as follows: Section 2 summarizes some existing medium access control (MAC) layer protocols. Section 3 explains the system model for the in-hospital health management application. Section 4 illustrates the tree formation and the cost function evaluation for energy-efficient routing. Section 5 presents the design of an energy-efficient dynamic time slot allocation for each sensor node. Section 6 presents the performance results and analysis of the MCPS and DTS algorithms. Finally, Section 7 concludes the paper.
Related Works

The mechanisms commonly utilized in the MAC layer are time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA). Both mechanisms have their own advantages and disadvantages [17] in terms of power consumption, bandwidth utilization, network dynamics, synchronization, and so on. A number of MAC layer protocols have been proposed that combine the advantages of CSMA/CA and TDMA to meet various demands, such as reducing collisions and energy consumption and enhancing network reliability. In [18], MAC protocols with a quality of service (QoS) control scheme were developed; however, they are not optimized for handling emergency data in medical applications. For an energy-efficient network, MAC protocols in WBANs use duty-cycling mechanisms, which are an effective solution to the overhearing and idle-listening problems. The beacon mode in IEEE 802.15.4 provides a good duty-cycling mechanism for using the available energy resources efficiently [19]. At the same time, this standard faces several challenges, such as unfair channel access, extended backoff periods, and a lack of dynamic adaptive capabilities; these issues result in inferior WBAN performance when the application demands low delay, accurate throughput, efficient energy utilization, and reliability at a specific time. A new MAC protocol proposed in [20] reduces the energy consumption of the guard band and extends the lifetime of the WBAN system by using a self-adaptive guard band in each time slot. An enhanced packet scheduling algorithm (EPSA) is proposed in [21] to minimize slot wastage and to accommodate more waiting nodes in the available time slots: the vacant time slots are identified and divided into equal time slots based on the number of waiting nodes, so that the nodes can transmit with minimum delay in the given time frame; the scheme depends on the availability of vacant time slots. The iQueue-MAC [22] is a hybrid CSMA/TDMA protocol specifically designed for variable or bursty traffic: it uses CSMA during low traffic and switches to TDMA when traffic increases, using a piggybacked indicator to request time slots, which are allocated when a queue is detected. An energy-preserving MAC protocol called the Q-learning medium access control (QL-MAC) protocol was derived in [23], with the aim of converging to a low-energy state; it eliminated the need for a predetermined system model to solve the minimization problem in WSNs and is designed to be self-adaptive against topological and other external changes. In [24], time slot allocation is modeled and a slot allocation scheme based on a utility function is proposed; the function is designed from sensor priority, sampling rate, and the available energy of the node, with the main objective of maximizing the data transmission of each node in the network. A priority-based adaptive MAC (PA-MAC) protocol [25] for WBANs dynamically allocates time slots to nodes based on traffic priority; separate channels are used for beacons and data, and a priority-guaranteed CSMA/CA is used to prioritize the data. In [26], a traffic class prioritization based CSMA/CA (TCP-CSMA/CA) is proposed for prioritized channel access in intra-WBAN communication.
Its aim is to reduce delay, minimize packet loss, and enhance network lifetime and throughput; the traffic is categorized into different classes, and a backoff period range is assigned to each class. To overcome the first-come-first-served (FCFS) guaranteed time slot (GTS) policy of IEEE 802.15.4-based networks, an adaptive and real-time GTS allocation scheme (ART-GAS) is proposed in [27], which improves the bandwidth utilization of the IEEE 802.15.4 MAC for time-critical applications. It uses a two-stage approach: the first stage dynamically assigns priorities to all devices, and the second stage allocates GTSs to the nodes according to the assigned priorities. An analysis of the GTS allocation mechanism for time-critical applications based on the IEEE 802.15.4 standard was carried out in [28], where a Markov chain was used to model GTS allocation for designing various efficient allocation schemes. In [29], real-time applications with periodic data are guaranteed service with a reduced packet drop rate; the algorithm applies only to GTS allocation and has no effect on data packets in the contention access period (CAP). The tele-medicine protocol (TMP) defined in [30] is a MAC protocol suitable for patient monitoring applications that need bounded delay and reasonable reliability. Its duty cycle is varied with respect to three parameters: the delay-reliability factor, the traffic load, and the superframe duration. The protocol is designed around three computations: network traffic estimation, channel access and collision probabilities, and the delay-reliability factor. It shows efficacy in terms of delay, reliability, and efficient energy consumption. A number of routing protocols have been proposed and studied for routing packets from source to sink based on a tree structure. In [31], the routing protocol for low-power and lossy networks (RPL) is introduced, targeting networks whose routers and interconnecting devices are resource-constrained. It is based on the IPv6 protocol and supports multipoint-to-point and point-to-point traffic within lossy networks, covering topologies such as destination-oriented directed acyclic graphs (DODAGs), their upward and downward routes, security mechanisms, and fault management. A velocity energy-efficient and link-aware cluster-tree (VELCT) scheme is proposed in [32], which provides reliable data collection in sensor networks: the cluster head location is used to construct the data collection tree (DCT), minimizing the energy consumption of the cluster head through less frequent cluster formation, and it is well suited to mobility-based sensor networks. In [33], a cluster-based routing protocol is introduced to extend the network lifetime of sensor networks: the energy of all nodes is balanced to prolong the network lifetime, and a spanning tree is used to send heterogeneous data to the base station. A tree-based routing protocol (TBRP) for mobile sensor networks is discussed in [34]; it extends node lifetime by considering different energy levels in the tree, where nodes at the lowest energy level consume the most energy and those at the highest level consume less. Whenever a node reaches a critical energy level, it conserves energy by moving to the next energy level. Tree formation and packet routing are influenced by link reliability and by coexistence issues in the network; a context-aware WBAN has to coexist with a number of wireless networks.
The paper [35] discussed the characteristics of the physical layer in a smart environment; the experiments characterized on-body and off-body channels, and the authors raised several concerns for physical layer protocol design. In [36], co-channel interference in WBANs is addressed for the case where they must coexist within smart environments operating in the same frequency band; the fading characteristics of mobile WBANs are discussed, and measurements of inter-body interference between two WBANs are explained. Reliability, fault-tolerance, and interference mitigation schemes are presented in [37], where reliability is expressed in terms of link quality and communication efficiency, with a detailed explanation of the different types of interference and coexistence. A decentralized time-synchronized channel swapping (DT-SCS) protocol is presented in [38] to overcome the shortcomings of time-synchronized channel hopping (TSCH) in ad hoc networks; both protocols were designed for collision-free, interference-avoiding communication. TSCH and its variants need a centralized coordination technique for time-frequency slotting, which results in slow convergence to the steady state under mobility; DT-SCS was therefore introduced as a decentralized concept based on coupling distributed synchronization and desynchronization mechanisms. All of the aforementioned approaches mainly concentrate on one QoS aspect at a time, whereas a combined optimization of QoS parameters is necessary for WBAN medical applications. Additionally, most MAC protocols based on the IEEE 802.15.4 standard concentrate on a single MAC aspect in their design; most schemes use data traffic and traffic priority for the analysis, and the developed protocols attain their objectives by adjusting the CAP/CFP in the superframe structure, which has its own limitations in terms of bandwidth and the number of devices. A comparative survey of different routing protocols for WBAN medical applications is given in [39].

Network Model

An in-hospital real-time patient monitoring network is assumed in order to evaluate the performance of the proposed methods. A patient monitoring block with 15 patients is considered, where each patient is a WBAN with five sensor nodes and a body controller. The sensor nodes collect body vital signs such as blood glucose, blood pressure, body temperature, ECG, and EEG. The measured data are passed to the body controller deployed on the human body. The patient monitoring system thus consists of 15 body controllers, which form the tree structure of the proposed model. Each body controller transfers its data to the fog node (central coordinator) using the proposed algorithms. The fog node assigns priority to the data and sends the prioritized data to the physician through the cloud server to meet emergency situations. Data processing and computation are done within the fog node, and only the consolidated report is sent to the physician through the cloud server. The local server in the proposed network is referred to here as the cloud server; it is assigned mainly to connect to the external network. The fog node avoids congestion in the network and reduces computation time by performing all operations locally.
It also minimizes storage size and redundant data packaging (only important data are sent to the server) and decreases the time delay between source and destination. The designed MCPS algorithm is used to transfer data towards the central coordinator, and the developed fuzzy-based dynamic time slot allocation is used to improve reliability and network lifetime.

Block Diagram of a Fog-Based WBAN

The functional block diagram of the proposed fog-assisted architecture for the real-time health monitoring system is shown in Figure 2. The three layers in the monitoring framework are:
1. Sensor layer
2. Body controller layer
3. Central coordinator layer
The sensor layer collects the body vitals and processes the signals to be transmitted to the next layer. The body controller layer stores the data and transmits them to the fog (central coordinator) layer. Here, simple mathematical modeling is used to decide the priority of the data, and from this layer the prioritized data are transmitted to the physician through the cloud server. The roles of a fog node in the proposed model are:
1. Collecting the human vital signs from sensor nodes
2. Computing and analyzing the sensed data using simple modeling techniques
3. Sending the consolidated report to the cloud server
4. Assigning the priority of the sensed data
5. Coordinating the operations of the body sensor nodes
The patient vitals are transmitted to the base station through the body controllers, using a trusted tree formed from the n body controllers within each block. The fog nodes determine the priority of the data with the help of the prioritization scheme and send the data towards the destination through a cloud server. The back-end part of the system is the cloud server, whose functions include storing, processing, and transmitting data, along with back-end services for real-time data interpretation and visualization. The tree formation between the body controllers, the next-hop node selection, and the dynamic time slot assignment are explained in the following sections.

Tree Formation

The first step in the initialization phase is tree formation with the available set of sensor nodes and the central coordinator (CC), or root node. Initially, the root node broadcasts the CC announcement to all neighboring nodes. The CC announcement includes a sequence number, the number of visited devices, the available energy, the queue length, and all other parameters needed to select the parent node; it is broadcast based on a sink timer. The one-hop neighboring nodes receive the announcement from the root node first, and based on the received sequence number and hop count, the tree is formed with selected parents and children. The detailed pseudocode for the tree formation is as follows (a code sketch follows the list):
1. The root node broadcasts a CC announcement using a sink timer
2. One-hop connected devices receive the message
3. If the received sequence number is new, add the previous-hop forwarder to the tentative parent list
4. If the received sequence number is not new but its hop count is less than the previous one, add it to the tentative parent list
5. Execute the MCPS algorithm (Algorithm 1) to select the best parent node
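A minimal sketch of this announcement handling is given below. The class and field names are illustrative assumptions, not identifiers from the paper; the sketch mirrors only steps 3 and 4 of the pseudocode above.

from dataclasses import dataclass, field

@dataclass
class Announcement:
    seq: int        # sequence number of the CC announcement
    hops: int       # hop count accumulated so far
    sender: int     # previous-hop forwarder (candidate parent)

@dataclass
class Node:
    node_id: int
    best_seq: int = -1
    best_hops: int = 10**9
    tentative_parents: list = field(default_factory=list)

    def on_announcement(self, ann: Announcement):
        if ann.seq > self.best_seq:
            # New announcement round: restart the candidate list (step 3).
            self.best_seq, self.best_hops = ann.seq, ann.hops
            self.tentative_parents = [ann.sender]
        elif ann.seq == self.best_seq and ann.hops < self.best_hops:
            # Same round but a shorter route: keep this candidate too (step 4).
            self.best_hops = ann.hops
            self.tentative_parents.append(ann.sender)
        # The MCPS algorithm (next section) then picks the best parent
        # from self.tentative_parents (step 5).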
Link Cost Function for Next-Hop Selection

The objective of the link cost function is to select the node with the minimum link cost as the best parent node. The link cost [40] is based on parameters such as residual energy, queue size, link reliability, the distance between the nodes, and the available bandwidth. Consider the variable x, a weighted combination of these quantities (Equation (1)), where e_r, e_i, q_i, and q_a are the residual energy, initial energy, initial queue size, and currently available queue size of node j, respectively; R_ij(n) is the current-round link reliability between nodes i and j, estimated from Equation (3); the metrics d, c, b_a, and b_r represent the distance between the two nodes, the coverage of a node, the required bandwidth, and the residual bandwidth, respectively; and w_1, w_2, w_3, w_4, and w_5 are weighting coefficients that sum to 1. The link reliability between any two sensor nodes, R_ij, is estimated from an exponentially weighted moving average, R_ij(n) = γ R_ij(n−1) + (1 − γ) N_t/τ_tr, where N_t is the total number of successful packet transmission attempts over the link between nodes i and j, n is the index of the round, τ_tr is the total number of transmission and re-transmission attempts of all data packets, and γ is the averaging weight. The distance between two nodes with coordinates (x_i, y_i) and (x_j, y_j) is calculated as d = sqrt((x_i − x_j)^2 + (y_i − y_j)^2) (Equation (4)). The link cost function LC_ij is then expressed as an exponential function of x (Equation (5)), with range (0.367, 1). The link cost considers five factors in order to enhance the QoS performance of the network: the energy metric aims to balance energy between nodes, the queue size metric attempts to reduce queuing delay, the link reliability improves the reliability of the network, the node coverage and inter-node distance decrease the number of re-transmission attempts, and the residual bandwidth increases the packet delivery ratio by exploiting the available bandwidth of the network.

Minimum Cost Parent Selection Algorithm

The proposed minimum cost parent selection (MCPS) algorithm finds the best parent node whenever a node receives an announcement from its neighbors. According to this algorithm, the best parent node is the one with the minimum hop count, the minimum link cost, and the shortest distance from the child node; since it combines these three criteria, it satisfies the QoS required for WBANs. The selection of the best parent from the tentative parent list is depicted in Algorithm 1.

Algorithm 1: Best parent node selection.
Initialization: LC_ij is the link cost between sensor nodes i and j; C_m is the current minimum link cost (initially the maximum value, 1); N_id = −1; h_n is the smallest hop count seen so far; nid is the node identifier of node j and C_nid its link cost; NN_i = {s_1, s_2, ..., s_m} is the set of neighboring nodes of node i, 1 ≤ i ≤ N, 1 ≤ m ≤ N; BNH_i is the best parent node of node i; N_md is the node with minimum distance from the child node.
1: for each node in the list NN_i do
2:   compute the link cost LC_ij
3:   if the node's hop count is less than h_n, record it as the current best candidate
4:   if h_n == h_nid then
5:     if C_m > C_nid then
6:       C_m = C_nid; BNH_i = N_id
7:   if hop count and cost are both tied, prefer N_md, the node nearest the child
8: end for
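The paper's Equations (1) and (5) are not reproduced in this extract; the sketch below therefore assumes that x is a convex combination of normalized "goodness" ratios and that LC_ij = exp(−x), which is consistent with the stated range (0.367, 1) for x in (0, 1). All names and example values are illustrative.

import math

def link_cost(er, ei, qa, qi, rel, d, c, br, ba, w=(0.2,) * 5):
    # Assumed form of Equations (1) and (5): x is a weighted combination of
    # normalized metrics, and LC = exp(-x), which lies in (0.367, 1).
    x = (w[0] * (er / ei)        # residual energy ratio
         + w[1] * (qa / qi)      # available queue ratio
         + w[2] * rel            # link reliability R_ij(n)
         + w[3] * (1 - d / c)    # closeness within coverage radius c (assumed)
         + w[4] * (br / ba))     # residual bandwidth ratio
    return math.exp(-x)

def select_parent(candidates):
    # candidates: list of dicts with 'id', 'hops', 'cost', 'dist'.
    # MCPS order: fewest hops, then lowest link cost, then shortest distance.
    return min(candidates, key=lambda n: (n['hops'], n['cost'], n['dist']))['id']

neighbors = [
    {'id': 4, 'hops': 2, 'cost': link_cost(0.9, 1, 8, 10, 0.95, 3, 10, 6, 8), 'dist': 3},
    {'id': 7, 'hops': 1, 'cost': link_cost(0.6, 1, 5, 10, 0.80, 6, 10, 4, 8), 'dist': 6},
]
print(select_parent(neighbors))  # -> 7 (fewest hops wins first)

Sorting by the tuple (hops, cost, distance) implements the MCPS tie-breaking order directly: hop count dominates, link cost breaks hop ties, and distance breaks cost ties.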
Fuzzy-Based Dynamic Time Slot Allocation

Once traffic is generated, the initial equal slot assignment may fail due to dynamic network conditions such as traffic flow, buffer availability, and the energy consumed by each node; each of these parameters is highly unpredictable. The solution proposed for this situation is the dynamic time slot allocation (DTS) technique, in which slots are allocated to nodes depending on the packet interval, the buffer availability, and the remaining energy of each node. To improve the reliability and efficiency of packet transmission, a fuzzy-based dynamic slot allocation is proposed. Fuzzy logic [41] can provide an appropriate solution by integrating many factors into a single evaluation problem; here it is used to find a dynamic time slot for each node based on the mentioned factors.

Fuzzification

In the first step of fuzzification, the crisp inputs are converted into their corresponding linguistic values, represented by fuzzy sets [42]. Each fuzzy set is associated with a membership function that describes how each crisp input maps into the fuzzy set. The fuzzy model is shown in Figure 3. For the slot allocation of each node, the three fuzzy input variables are the energy ratio (ER), the packet arrival rate (PAR), and the buffer memory ratio. The fuzzy model uses three linguistic terms (low, medium, and high) to partition each input variable, defined by different membership functions such as Gaussian, S, and Z functions.

Energy Ratio

The energy ratio (ER) is the ratio of the available energy E_r to the initial energy E_i at each node, ER = E_r / E_i. Equations (7)-(9) define the partitioning of the energy ratio variable. The fuzzy rules for available energy and slot allocation are:
1. If ER is high, then the slot allocated value is high.
2. If ER is medium, then the slot allocated value is medium.
3. If ER is low, then the slot allocated value is low.

Buffer Memory Ratio

The second input variable is the buffer memory ratio (BMR), computed according to Equation (10) as BMR = m_a / m_max, where m_a is the available memory in a node and m_max is the maximum memory allotted to that node. Equations (11)-(13) define the partitioning of the BMR variable. The fuzzy rules for BMR and slot allocation are:
1. If BMR is high, then the slot allocated value is high.
2. If BMR is medium, then the slot allocated value is medium.
3. If BMR is low, then the slot allocated value is low.

Packet Arrival Rate

The packet arrival rate (PAR) in the network is estimated with the exponentially weighted moving average (EWMA) method, PAR = α_1 pr_cur + (1 − α_1) pr_avg, where α_1 is a weighting factor in the range 0.1 to 0.9, pr_avg is the average of the previously arrived packet rates, and pr_cur is the current packet arrival rate. Equations (15)-(17) define the partitioning of the PAR variable. The fuzzy rules for PAR and slot allocation are:
1. If PAR is high, then the slot allocated value is high.
2. If PAR is medium, then the slot allocated value is medium.
3. If PAR is low, then the slot allocated value is low.
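A compact sketch of this fuzzy step is given below. It is illustrative only: the paper uses Gaussian, S, and Z membership functions and the full rule table of Table 1, whereas the sketch uses triangular memberships and just the three diagonal rules listed above.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def level(x):
    """Fuzzify a normalized input into (low, medium, high) degrees."""
    return tri(x, -0.5, 0.0, 0.5), tri(x, 0.0, 0.5, 1.0), tri(x, 0.5, 1.0, 1.5)

def slot_chance(er, bmr, par):
    lows, meds, highs = zip(level(er), level(bmr), level(par))
    # Rule strengths: AND over the three inputs is taken as the minimum.
    w_low, w_med, w_high = min(lows), min(meds), min(highs)
    u = np.linspace(0.0, 1.0, 101)               # output universe
    agg = np.maximum.reduce([                    # clip each output set, then max
        np.minimum(w_low,  [tri(v, -0.5, 0.0, 0.5) for v in u]),
        np.minimum(w_med,  [tri(v,  0.0, 0.5, 1.0) for v in u]),
        np.minimum(w_high, [tri(v,  0.5, 1.0, 1.5) for v in u]),
    ])
    return float((u * agg).sum() / (agg.sum() + 1e-9))  # centroid defuzzification

print(slot_chance(er=0.8, bmr=0.7, par=0.9))  # -> chance skewed toward "high"

Mamdani inference clips each output set by its rule strength (min), aggregates the clipped sets (max), and defuzzifies by centroid, as described in the next subsection.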
The membership function for the output slot value is defined by Equations (19)-(23). Table 1 defines the fuzzy inference rules for selecting the optimal slot value for each node. The fuzzy rules consist of a series of conditional "if-then" statements, with ratings "low", "rather low", "medium", "rather high", and "high". If the normalized input variables ER, BMR, and PAR are all low, then the chance value for the required number of slots for that node is expected to be low; similarly, if the normalized input variables are all high, then the chance value is expected to be high. The remaining cases fall between these two extremes. The inference system used to find the chance value is the Mamdani fuzzy inference system. The results of all fuzzy rules are fuzzy values, which are converted into crisp values based on the centroid of area, U_CoA = (Σ_{x=1..n_r} u_x Z(u_x)) / (Σ_{x=1..n_r} Z(u_x)), where Z(u_x) is the membership value of the aggregated output, u_x is the centroid of area x, and n_r is the number of fuzzy rules.

Comparison of Time Slot Allocation

Consider the tree structure shown in Figure 4, which has 15 nodes and a root node; the nodes are arranged randomly. The root node has three direct children: nodes 1, 2, and 3. The whole tree can be divided into three branches: branch I includes nodes 1, 4, 9, and 10; branch II has nodes 2, 5, 6, and 11; and nodes 3, 7, 8, 12, 15, 13, and 14 constitute branch III. Any node in the tree can be a child node, a relay node, or a leaf node (one with no children). For example, node 3 has direct children 7 and 8; the leaf nodes are 7, 13, 14, and 15; and the relay nodes are 3, 8, and 12. Assume that the total transmission time for all nodes is 1 s. With 15 nodes in total, the equal slot duration for each node is 0.0666 s (1/15), so the total slot durations of the branches rooted at nodes 1, 2, and 3 are 0.266 s, 0.266 s, and 0.466 s, respectively. This equal slot allocation is used in conventional sensor networks. The proposed DTS method instead uses dynamic slot allocation to enhance network performance, with the slot allocated to each node depending on the relay nodes, child nodes, and leaf nodes. In the conventional equal slot allocation, each parent node must be active during the entire slot duration of all its descendant and leaf nodes, which leads to higher energy consumption. In the DTS scheme, the parent needs to be active only during its own slot and the slots of its direct children. This is represented in Figures 5 and 6, which compare the two methods with respect to the active-state duration of the parent nodes of branch III (nodes 3, 7, 8, 12, 13, 14, and 15). In the conventional method, node 3 has to be active for 0.4662 s and node 8 for 0.333 s; the DTS method reduces the active duration of nodes 3 and 8 to 0.1998 s, which also results in reduced energy consumption.
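The figures quoted above follow directly from the slot arithmetic; the short sketch below reproduces them for branch III (the small differences, 0.4667 s versus the reported 0.4662 s, come from the paper rounding the slot to 0.0666 s).

# Active-time comparison for branch III of the example tree (15 nodes,
# 1 s total TDMA frame, equal slots of 1/15 s each).
slot = 1.0 / 15                            # 0.0666... s per node

subtree = {3: [3, 7, 8, 12, 13, 14, 15],   # node plus all its descendants
           8: [8, 12, 13, 14, 15]}
own_and_children = {3: [3, 7, 8],          # own slot plus direct children only
                    8: [8, 12, 15]}

for node in (3, 8):
    equal = len(subtree[node]) * slot          # conventional: awake for whole subtree
    dts = len(own_and_children[node]) * slot   # DTS: awake for own + direct children
    print(f"node {node}: equal={equal:.4f} s, DTS={dts:.4f} s")
# node 3: equal=0.4667 s, DTS=0.2000 s   (paper: 0.4662 s and 0.1998 s)
# node 8: equal=0.3333 s, DTS=0.2000 s   (paper: 0.333 s and 0.1998 s)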
Simulation Setup

The performance was evaluated using the network simulator version 2 (NS-2). NS-2 is an object-oriented discrete event simulator for research in wired and wireless networks that can simulate newly designed network protocols, and it supports a number of wireless platforms and protocols for detailed study of the simulated results. A random WBAN network with 15 sensor nodes was considered, using IEEE 802.15.4 as the MAC protocol. The simulation time was set to 200 s, and the packet interval was varied from 0.1 s to 3 s in steps of 0.5 s. Table 2 summarizes the simulation parameters used.

Performance Metrics and Results

The performance of the proposed technique was validated using key metrics: packet delivery ratio (PDR), average end-to-end delay, and average energy consumption. The experiments were conducted in two sets, based on the selected simulation time and the packet interval. The proposed DTS mechanism was compared with the basic IEEE 802.15.4 standard and the TMP protocol [30]. In the first set of experiments, the simulation time was varied as 50, 75, 100, 125, 150, 175, and 200 s. Figure 7 depicts the PDR from a source node to the root node, measured as the percentage of packets transmitted from the source node that were successfully received at the root node. The figure shows that the PDR was highest for the DTS method, owing to the link reliability term used in the best next-hop selection algorithm, which ensures the best path between source and root by reducing packet loss. The TMP protocol, by contrast, concentrates mainly on time slot allocation to minimize slot wastage, while in the IEEE 802.15.4 standard data transmission is based on the CAP and CFP. Overall, DTS outperformed TMP and the IEEE 802.15.4 standard by 12% and 15%, respectively. Figure 8 shows the average end-to-end delay for varied simulation times, that is, the average time taken by a packet to reach the root node from the source node. The DTS scheme allocates time slots dynamically based on the available energy, buffer memory, and packet arrival rate; hence slot wastage is reduced and unnecessary waiting time for queued nodes is minimized. In TMP, computational methods are used for MAC parameter tuning and duty cycle adjustment, which contributes less than the DTS method. There was a 47% and 59% reduction in average end-to-end delay compared with TMP and IEEE 802.15.4, respectively. The average energy consumption is depicted in Figure 9, where the DTS scheme has the lowest energy consumption. The MCPS next-hop selection algorithm is based on the available energy of each node, and the fuzzy-based dynamic slot allocation uses the energy ratio to exploit the available energy resources effectively; TMP gives less consideration to the energy ratio than DTS. The reductions in average energy consumption achieved by DTS are 22% and 31%, respectively. The second set of experiments was based on different packet intervals of 2-7 s. Figures 10-12 show the comparative results of PDR, average end-to-end delay, and average energy consumption for IEEE 802.15.4, TMP, and the proposed DTS protocol. Figure 10 shows the PDR for different packet intervals. As the packet interval decreases, the traffic load increases: more packets are injected into the network, causing congestion and collisions, so the PDR decreases. At high packet intervals the traffic is light and the data packets reach the root node easily, so the PDR is higher. DTS performed better than TMP and the IEEE 802.15.4 standard in terms of PDR by 5% and 17%, respectively. Figure 11 shows that as the packet interval increases, the average end-to-end delay decreases: traffic is light at longer packet intervals and heavy at shorter ones. During high traffic, more packets are injected into the network, causing congestion and filling the buffers, so packets cannot reach the root node easily and the end-to-end delay increases.
At high packet intervals, the traffic load decreases and the packets reach the root node easily, so the average end-to-end delay decreases. DTS allocates the time slot by considering the available buffer memory in each node; the dynamic time slot selection therefore reduces the average end-to-end delay in the network. The reduction compared with TMP and IEEE 802.15.4 was 41% and 43%, respectively. Figure 12 shows that as the packet interval increases, the energy consumption also increases. This is due to the decrease in traffic load with increasing packet interval: a larger packet interval increases the idle listening time and the time spent transmitting control overheads, which raises the energy consumption. When the packet interval was at its minimum, the listening time and the control overhead transmission time were also at a minimum, and hence so was the energy consumption. In addition, the ER term in the fuzzy rules and the remaining-energy term in the link cost function helped control the rise in energy consumption relative to the other protocols. The figure shows that the DTS scheme had the lowest energy consumption among the compared protocols, with decreases of 25% and 39% compared with TMP and the IEEE 802.15.4 standard. From the two sets of simulation results, it is evident that there is a considerable improvement in packet delivery ratio with respect to the compared protocols: the packet dropping ratio was low, resulting in a better packet delivery ratio. Similarly, the average delay and energy consumption were also reduced considerably, thanks to the energy-efficient link cost function used in the routing layer and the energy ratio considered in the time slot allocation method.

Conclusions

The major challenges identified in real-time patient monitoring WBANs are high response time, low reliability, and high energy consumption. These shortcomings can be addressed in the MAC layer by using dynamic time slot allocation instead of fixed slot allocation. In this paper, a fog-assisted network is used for a real-time patient monitoring setup. The fog layer (central coordinator) is deployed at the edge of the network to reduce the response time and transmission errors, which makes it suitable for emergency medical applications that carry bursty data. An energy-efficient, cost-based objective function and the MCPS algorithm are designed for routing data packets to the coordinator node. A new dynamic time slot allocation method, DTS, is proposed for allocating dynamic slots to the sensor nodes; it minimizes unnecessary slot wastage and the waiting time of packets in the queue. The slot allocation is based on fuzzy logic with the energy ratio, buffer memory ratio, and packet arrival rate as input variables, and the chance value for the number of allocated slots is determined with the help of the fuzzy inference rules. The results reveal that DTS achieves a relevant enhancement in packet delivery ratio (12% and 15%), a significant reduction in average end-to-end delay (47% and 59%), and lower average energy consumption (22% and 31%) in comparison with TMP and IEEE 802.15.4, respectively. Future work will include an enhanced version of the proposed model for prediction of a specific disease based on patient vitals with different data rates; the fog-assisted network can also be made more secure by implementing new data encryption and authentication methods.
Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
9,506
2019-05-01T00:00:00.000
[ "Computer Science", "Business" ]
AN ALGORITHM FOR COASTLINE DETECTION USING SAR IMAGES Coastal management requires rapid, up-to-date, and correct information; thus, coastal movements have primary importance for coastal managers. For monitoring the change of shorelines, remote sensing data are some of the most important information sources and are utilized for detecting any changes on shorelines. It is possible to monitor coastal changes by extracting the coastline from satellite images. Most of the algorithms discussed in detail in the literature have been developed for optical images. In this study, an algorithm that extracts coastlines efficiently and automatically by processing SAR (Synthetic Aperture Radar) satellite images has been developed. A data set of ALOS PALSAR Fine Beam Double (FBD) HH-HV polarized images has been used. The PALSAR image is L-band data with a 14 MHz bandwidth and a 34.3 degree look angle. The data were acquired in ascending geometry, and the ground resolution of the PALSAR amplitude image was resampled to 15 m. Zonguldak city, which lies on the northwest coast of Turkey, has been selected as the test area. An algorithm was developed for automatic coastline extraction from SAR images; it is encoded in a C++ environment. To verify the results, the algorithm was applied on two PALSAR images gathered on two different dates, 2007 and 2010. The results of the automatic coastline extraction obtained from the SAR images were compared with the results derived from manual digitizing: random control points visible on each image were used, and the average differences of the selected points were calculated. INTRODUCTION As a peninsula country, Turkey has a coastline of more than 8300 km, of which 1700 km is surrounded by the Black Sea. Coastline detection is important for the study area because the region has mining operations located at the coast and hosts international ports, and because the area suffers from flooding and erosion problems, coastline information and morphological changes need to be updated in response to any natural disaster. It is crucial to provide rapid and updatable output for decision makers in coastal management. Remote sensing data cover large surfaces, which allows changes in features to be extracted. Satellite images, both optical and SAR, are widely used to extract coastlines. Wang et al. (2010) presented a class association rule algorithm and designed a method to separate land and sea from each other. Karsli et al. (2011) developed a thresholding method using top-of-atmosphere reflectance and the normalized difference water index (NDWI) of a Landsat image. Moreover, the ISODATA classification method has been used to observe long-term shoreline changes (Yu et al., 2011); that study indicated the significance of using archived data for the effective assessment of coastline changes. A wavelet-based edge detection method was used for coastline detection from ERS data by Niedermeier et al. (2000). Liu and Jezek (2004) used a thresholding technique on Landsat and Radarsat data. Wang and Ellen (2008) calculated the backscatter coefficient (dB) values of HH-polarized L-band SAR data and applied an edge-filtering model with a Sobel filter. Two enhanced Level Set Algorithms (LSA), based on active contours or snakes, were applied to Radarsat imagery by Ouyang et al. (2010). Another wavelet-based edge detection algorithm was developed for coastal change detection from ERS data by Chen et al. (2011), with a morphological filter applied afterwards to refine the boundaries.
In this study we extended an automatic coastline extraction algorithm, previously applied successfully to optical data such as CORINA, IRS-1D and Landsat (Bayram et al., 2008), to extract the coastline from ALOS/PALSAR data. STUDY AREA The study area covers the Zonguldak and Bartin cities, which are located on the coastline of the Black Sea in the north-west part of Turkey. The adjacent provinces are Kastamonu to the east, Karabuk and Bolu to the south, and Duzce to the southwest. In the area there are two main streams which drain into the Black Sea: the Bartın stream discharges at the coast of Bartın, and the Filyos stream discharges in Caycuma, Zonguldak (Figure 1). In these cities the main economic support is underground coal mining; it is the biggest, and the only, hard coal mining operation in Turkey. Industrial raw materials such as limestone, marble and quartzite are also main products of the study area. An oceanic (maritime) climate is dominant in Zonguldak, and precipitation is distributed almost evenly throughout the whole year. The mean humidity reaches up to 70%. Along the coastline the mountains align parallel to the coast, and the area has a rough topography which rises up to 1000 m. Table 1. Specifications of ALOS data. METHODOLOGY In this study, the radiometric resolution of both SAR images is 32 bit. As the image histogram describes the statistical distribution of the image pixels in terms of the number of pixels at each DN, both image scenes of the study area have a bimodal histogram, which indicates two dominant materials in the area: water and land. In the process of land (i.e., coast) and water separation, the radiometric resolution of the images was reduced to 8 bit in order to suppress noise as well as to increase the processing speed (Figure 2). In the last step of the process the final images were converted to binary images. To make the sharp radiometric difference between the coast (land) and the sea more apparent, a histogram equalization process was applied on the images (Figure 3). Two types of images came out of the histogram equalization process, described as "too noisy" and "less noisy" images. Since the amount of noise affects the extraction of the coastline, different algorithms needed to be applied. In order to describe an image as too noisy or less noisy, the land and water parts of the images were separated as the first step. For this purpose, a 100 x 100 search window was applied on the entire image scene to split the completely-land and completely-water portions of the image. In this process, 128 was selected as the threshold value for the windows of 10000 pixels. If the number of pixels with 0 (black) values was 200% greater than the number of pixels with 255 (white) values, then the pixel group was assigned as water (Figure 4). The percentage value (200%) was defined empirically.
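As a minimal sketch of the window-based split just described, assuming the scene has first been binarized at the DN threshold of 128, the code below reads the empirical "200%" rule as "at least twice as many pixels of one class as the other"; this reading, the function names, and the "mixed" label are our own interpretation, not the paper's implementation (which is in C++).

```python
import numpy as np

def binarize(img, threshold=128):
    """Reduce the 8-bit scene to two classes: 0 = dark (sea), 255 = bright (land)."""
    return np.where(img >= threshold, 255, 0).astype(np.uint8)

def classify_windows(binary, win=100, ratio=2.0):
    """Label each win x win window as water, land, or mixed.

    ratio=2.0 reads '200% greater' as 'at least twice as many';
    set ratio=3.0 if the phrase is taken literally as a 200% increase.
    """
    h, w = binary.shape
    labels = {}
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            block = binary[i:i + win, j:j + win]
            black = np.count_nonzero(block == 0)
            white = np.count_nonzero(block == 255)
            if black >= ratio * max(white, 1):
                labels[(i, j)] = "water"
            elif white >= ratio * max(black, 1):
                labels[(i, j)] = "land"
            else:
                labels[(i, j)] = "mixed"  # the coastline likely crosses this window
    return labels
```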
In all images a large portion of the noise was found to have the same gray value, so one more check was done on each window assigned as water: the histogram of every 100x100 search window was examined, and the gray value with the maximum number of pixels, excluding zeros, was assigned as noise (Figure 4). As seen from the histogram, in an area consisting of 10000 pixels, the gray value of 4477 pixels was 105, the gray value of 5461 pixels was 0, and the gray value of 162 pixels was 141 (Figure 5); hence the gray value of the noise was assigned as 105. If the number of noise pixels was 30% greater than the number of 0-valued pixels, the image was accepted as noisy. If the image was too noisy, the gray values of the noisy pixels were removed; if the image was less noisy, the next step was processed directly. Figure 5. A histogram of a 100x100 pixel window from the sea part of the image. After the elimination of pixels having the regular noise value in the image scene, mathematical morphology was used to eliminate pixels having random noise values (Acar, U., 2011). In the water part of the image, a 3x3 circle structuring element was used to replace the gray values of noise pixels with the dominant 0 (zero) pixel values. The 3x3 circle structuring element was preferred because the area of the noise pixels was not greater than 1 pixel, and mathematical morphology applied with this structuring element was sufficient to eliminate the noise (Figure 6). The applied image processing techniques caused some gaps and distortions on the coastline. In order to reduce these gaps and corruption, mathematical morphology was applied again, this time as a closing operation with a 5x5 circle structuring element, which is able to eliminate distortions and gaps of up to 5 pixels; selecting a larger circle structuring element would result in the loss of small bays and recesses (Figure 7). After the mathematical morphology application, since some noise could still remain in the image, one more filter was applied; its task was to eliminate the noise gray value in a group of pixels whenever that group contained only zero gray values and noise gray values (Figure 8). To the resulting image, a fit-coast algorithm (Bayram, B., 2008) was applied (Figure 9); it is a region-growing algorithm using image processing techniques. Figure 9. The result of the Fit-Coast algorithm application. A Sobel operator was applied on the binary image generated by the fit-coast algorithm, and the image was then converted to vector data (Figure 10). Figure 10. Automatically extracted coastline converted to vector data.
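A minimal sketch of the described cleanup chain, using scikit-image; disk(1) and disk(2) approximate the 3x3 and 5x5 circular structuring elements mentioned above, and the function names are our own rather than the paper's.

```python
import numpy as np
from skimage.morphology import binary_opening, binary_closing, disk
from skimage.filters import sobel

def clean_coast_mask(mask):
    """Clean a binary land/water mask (land = True) as described in the text."""
    mask = binary_opening(mask, disk(1))   # remove isolated noise up to ~1 pixel
    mask = binary_closing(mask, disk(2))   # bridge gaps/distortions up to ~5 pixels
    return mask

def coastline_edges(mask):
    """Highlight the land/water boundary of the cleaned mask with a Sobel operator.

    The resulting edge raster is what a raster-to-vector step would trace.
    """
    return sobel(mask.astype(float)) > 0
```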
RESULTS Since coastal management requires rapid, up-to-date, and correct information, coastal movements have primary importance for coastal managers. Remote sensing is an important technique for detecting and monitoring coastlines using satellite images. Most of the algorithms discussed in detail in the literature have been developed for optical images; in this study, the use of SAR images, specifically PALSAR images, was investigated for automatic coastline detection. The algorithm was applied on 4 images gathered in 2007 and 2010 in two polarizations, HH and HV. In order to test the accuracy of the coastline detection, all images were digitized manually. To calculate the total difference between the automatic coastline detection and the manual digitizing, the land side of each image was converted to a closed polygon and its area was calculated; the calculated areas of the manual and automatic extractions were then compared (Table 2). Since digitizing the coastline from PALSAR images is not an easy and simple task due to the nature of SAR images, for each pair the manual digitization thought to be the best coastline extraction was chosen. The overall length of the automatically extracted coastline was compared with the manually digitized coastline (Table 3). For each image scene the difference is less than 0.9%. Materials We used Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) data for the coastline detection. The SAR data were gathered in raw format, first converted to Single Look Complex (SLC) data and then converted to 4 multi-look data. The Fine Beam Dual (FBD) mode HH- and HV-polarized ALOS data have a 14 MHz bandwidth and a 34.3 degree look angle. PALSAR, which uses L-band to illuminate the Earth's surface, acquires data in ascending geometry, and the ground resolution of the amplitude images was resampled to 15 m. To indicate the effect of polarization, the HH- and HV-polarized images were processed with the algorithm separately and compared. To verify the results, PALSAR images dated 2007 and 2010 were acquired, and the algorithm was applied on four images in total. Figure 1. Study area. Figure 2. Part of the image scene which was reduced to 8-bit radiometric resolution. Figure 4. Defining the threshold value for the land and sea parts of the images using a 100x100 search window. Figure 7. Image after application of mathematical morphology closing. Table 2. Comparison between automatic coastline detection and manual digitizing. Table 3. Comparison between the lengths of the automatically extracted and manually digitized coastlines.
2,601.2
2012-08-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Nanoparticle Emulsions Enhance the Inhibition of NLRP3 Antibacterial delivery emulsions are potential materials for treating bacterial infections, yet few studies have focused on the role and mechanism of emulsions in inflammation relief. Therefore, building on our previous analysis, in which novel, natural Pickering emulsions stabilized by antimicrobial peptide nanoparticles were prepared, the regulatory effect of the emulsions on the inflammasome was explored in silico, in vitro and in vivo. Firstly, the interactions between inflammasome components and parasin I or the Pickering emulsion were predicted by molecular docking. Then, inflammasome stimulation by different doses of the emulsion was tested in RAW 264.7 and THP-1 cells. Finally, in Kunming mice with peritonitis, NLRP3 and IL-1β expression in the peritoneum were evaluated. The results showed that the Pickering emulsion could combine with ALK, casp-1, NEK7, or NLRP3 to affect the assembly of the NLRP3 inflammasome and further relieve inflammation. LPNE showed a dose-dependent inhibition effect on the release of IL-1β and casp-1. As the concentration of parasin I increased from 1.5 mg/mL to 3 mg/mL, the LDH activity decreased in the chitosan peptide-embedded nanoparticles emulsion (CPENE) and lipid/peptide nanoparticles emulsion (LPNE) groups. However, from 1.5 to 6 mg/mL, LPNE had a dose-dependent effect on the release of casp-1. The CPENE and parasin I-conjugated chitosan nanoparticles emulsion (PCNE) may decrease the release of potassium and chloride ions. Therefore, it can be concluded that LPNE may inhibit the activation of the inflammasome by decreasing LDH activity and potassium and chloride ion release through binding with components of NLRP3. Introduction The fatality rate of peritonitis caused by severe E. coli infection is high: infection can further cause sepsis and multiple organ failure with a fatality rate of more than 30%. Therefore, antibacterial drugs play a pivotal role in treating abdominal infections. However, antibiotic resistance has become a rapidly emerging global health problem that brings significant challenges to the treatment of microbial infection [1][2][3]. It was recently estimated that millions of people worldwide might die from sepsis every year due to antibiotic-resistant infections [4]. There are multiple strategies to overcome problems with the resistance of microorganisms, such as decreasing the usage of antibiotics, using antimicrobial peptides, and exploring novel antimicrobial reagents. Antimicrobial peptides have aroused widespread concern because drug resistance against them develops only with difficulty and they have solid antibacterial and immunomodulatory ability [5][6][7]. Some promising antimicrobial peptides, such as pexiganan acetate, Omiganan, and Dulaglutide, can replace classic antibiotics for drug-resistant infections. They exhibit high activity against Gram-positive and Gram-negative bacteria, and they can target bacterial, fungal, parasitic, and eukaryotic cells indiscriminately [8,9]. However, issues that have limited the clinical application of these peptides urgently need to be resolved, such as high hemolysis toward human cells, short circulating plasma half-life, easy disturbance of the homeostasis of the intestinal microflora, poor in vivo stability, and so on [10].
In our previous study, three kinds of oil-in-water Pickering emulsions stabilized with solid particles (chitosan peptide-embedded nanoparticles Pickering emulsion (CPENE), parasin I-conjugated chitosan nanoparticles Pickering emulsion (PCNE), and lipid/peptide nanoparticles Pickering emulsion (LPNE)) successfully improved the poor stability, high hemolysis, and high toxicity in a mouse model of peritonitis. However, how Pickering emulsions decrease the symptoms of inflammation is unclear, and more in-depth exploration of the anti-inflammatory mechanisms is needed. The aberrant activation of the NLR family, pyrin domain-containing 3 (NLRP3) inflammasome, a protein complex assembled from NLRP3, apoptosis-associated speck-like protein (ASC), and cysteinyl aspartate-specific proteinase (casp-1), contributes to the development of peritonitis. The NLRP3 inflammasome can activate casp-1, produce functional interleukin-1β (IL-1β), and further induce cell apoptosis. It has been reported that many compounds have high anti-inflammatory activity and are beneficial in NLRP3-related diseases [11][12][13][14][15][16][17][18]. However, many antimicrobial peptides from food show low anti-inflammatory activity and destroy the colonization of beneficial bacteria [19]. The Pickering emulsion may change the interactions between inflammasome components and peptides through the interfacial effect of the emulsion and the interference effect of biomacromolecules; however, little research has addressed the function of Pickering emulsions in inflammasome inhibition. Kunming mice have been widely used as a bacterial peritonitis mouse model in research due to their advantages: good reproductive performance, fast growth, strong disease resistance, and biological characteristics similar to humans and other mammals in the natural state [20,21]. Therefore, this study predicted the interactions between inflammasome components and parasin I or the Pickering emulsion, and the mechanism of Pickering emulsion inhibition of inflammasome activation was further explored in inflammation cell models and in Kunming mice. This research may provide knowledge about the function of Pickering emulsions in NLRP3 inhibition in inflammation-related diseases. Results and Discussions The material characterization, including composition, size, surface properties, degradation properties, etc., was provided in our previous publication [22]. In LPN, parasin I is embedded in the lecithin; in CPEN, parasin I is encapsulated by thiolated chitosan; in PCN, parasin I is conjugated with chitosan through a C−N bond. The three nanoparticles are formed by self-assembly or ion cross-linking. Three kinds of emulsion were prepared with a fish oil/aqueous phase ratio of 92/8 (w/w), stabilized by LPN, PCN and CPEN, respectively. The size distributions of the emulsion droplets in CPENE, LPNE and PCNE were 0.90-0.98, 1.03-1.08 and 1.0-1.25 µm, respectively. The components of the Pickering emulsions (parasin I, chitosan, lecithin and fish oil) show high mucoadhesive properties and can be derived from food; therefore, these three emulsions are natural food-grade Pickering emulsions.
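As a quick worked example of the 92/8 (w/w) formulation above, the following computes the phase masses for a given batch; the 50 g batch size is an arbitrary assumption for illustration, not a value from the paper.

```python
def phase_masses(batch_g, oil_frac=0.92):
    """Split a batch mass into oil and aqueous phases at a 92/8 (w/w) ratio."""
    oil = batch_g * oil_frac
    aqueous = batch_g - oil          # the aqueous phase carries the nanoparticles
    return oil, aqueous

oil_g, water_g = phase_masses(50.0)  # hypothetical 50 g batch
print(f"fish oil: {oil_g:.1f} g, aqueous phase: {water_g:.1f} g")  # 46.0 g / 4.0 g
```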
Molecular Docking The NLRP3 inflammasome is a complex including NLRP3, ASC, and casp-1, which plays an essential role in the innate immune defense system and can cause cell apoptosis and tissue damage. Many compounds have been found to be essential to the assembly of the inflammasome [23]. As shown in Scheme 1, ALK is required for NLRP3 inflammasome activation in macrophages [24]. NEK7 is an essential mediator of NLRP3 activation downstream of potassium efflux [25]; macrophages stimulated by LPS and ATP can trigger the assembly of NEK7 and NLRP3. Casp-1 is a crucial indicator for detecting cell pyroptosis [26]. The activated inflammasome can process casp-1, which promotes the maturation and secretion of cytokines in the process of natural immune defense. Therefore, tissue damage will be relieved in disease if these signals are blocked. In this analysis, three kinds of Pickering emulsions (CPENE, LPNE, and PCNE) were prepared to inhibit the activation of the inflammasome and decrease the inflammatory damage of tissue in the peritonitis mouse model. Firstly, to judge whether the three kinds of Pickering emulsion can inhibit the activation of the inflammasome, the peptide-conjugated chitosan complex, parasin I, chitosan, thiolated chitosan, and lecithin underwent in silico molecular docking with four inflammasome components: ALK, NEK7, casp-1, and NLRP3. The computer simulation aimed to exclude improper inflammasome inhibitors and discover potentially effective inhibitors. Our previous publications verified that parasin I conjugates to chitosan through an amido bond, and the ratio of chitosan to parasin I in the parasin I-conjugated chitosan matrices was 1:1 [22]. The 3D diagrams of the successful docking results are shown in Figure 1. Several donor atoms of NEK7 surrounded the chitosan and lecithin, while 3-glutathione-chitosan and L-α-lecithin showed stronger binding with one end of casp-1 and ALK. Furthermore, the thiolated chitosan and parasin I can both dock with NLRP3. The lowest binding affinities and the binding force types of all dockings are shown in Table 1. There were no hydrogen bonds in any binding. The interaction between NLRP3 and 3-glutathione-chitosan-parasin I (−167 kcal/mol) was predicted to be stronger than that between NLRP3 and parasin I (−138 kcal/mol). The lipid/peptide adduct will increase the interaction between NLRP3 and parasin I.
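A simple way to use such docking output as a screening step is to rank ligand-target pairs so that the strongest predicted binders (most negative scores) come first; the sketch below pools the four scores quoted in the text purely for illustration, and note that the binding affinities and CDOCKER interaction energies are different score types, so a real screen would rank within one score type at a time.

```python
# Illustrative screening step over the docking scores quoted in the text.
dockings = [
    ("3-glutathione-chitosan-parasin I", "NLRP3",  -167.0),  # kcal/mol
    ("parasin I",                        "NLRP3",  -138.0),  # kcal/mol
    ("L-alpha-lecithin",                 "NEK7",    -79.0),  # CDOCKER interaction energy
    ("L-alpha-lecithin",                 "casp-1",  -70.0),  # CDOCKER interaction energy
]

# More negative = stronger predicted binding, so sort ascending.
for ligand, target, score in sorted(dockings, key=lambda d: d[2]):
    print(f"{ligand:34s} -> {target:7s} score {score:8.1f}")
```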
Compared with L-α-lecithin, which had CDOCKER_INTERACTION_ENERGY values of −79 and −70 with NEK7 and casp-1 respectively, chitosan and chitosan-glutathione-3 had higher CDOCKER_INTERACTION_ENERGY values, which indicated that the peptide-embedded chitosan matrices and peptide-conjugated chitosan matrices had a more robust interaction with the components of inflammation. Although molecular docking was not carried out directly with the nanoparticles or the Pickering emulsion, the soft materials and convergent configurations of the nanoparticle and Pickering emulsion carriers will enhance the interaction between the inflammasome components and 3-glutathione-chitosan-parasin I, chitosan, or lecithin. Yuan et al. [27] also reported that the reconfigurable assembly of colloidal materials may produce more adaptive and interactive functions. These bindings with ALK, NEK7, casp-1, and NLRP3 will hinder the assembly of the NLRP3 inflammasome. The interaction domains and atoms of the components of the NLRP3 inflammasome and the nanoparticles are shown in Figure 2 (ligands in which the number of atoms exceeds the maximum specified in the preferences are not shown). These results showed that the NLRP3 components could interact with GLY, ALA, ARG, ASP, GLU, LEU and HIS residues, forming alkyl, carbon-hydrogen bond, attractive charge, van der Waals, conventional hydrogen bond, and pi-donor hydrogen bond interactions with components of the nanoparticles. Therefore, the following hypothesis can be made: parasin I can dock with NLRP3, competing with NEK7 for its locus (Scheme 2), and can further decrease the activity of pro-caspase 1. CPENE affected NF-κB activation, and LPNE affected the function of casp-1 and decreased the secretion of IL-1β. PCNE can dock with NEK7, which hinders the binding of NEK7 and NLRP3. All in all, the stronger binding between the Pickering emulsions and the components of NLRP3 can affect the assembly of the NLRP3 inflammasome and further relieve inflammation. The NLRP3-Inflammasome Inhibition in Macrophages Molecular docking results were used as an early screening tool in many studies [28][29][30]. After the prediction of the interactions, to further verify that the Pickering emulsions can block NLRP3 activation, NLRP3-inflammasome inhibition was tested in macrophages, and the mechanism of the Pickering emulsions' effects on casp-1 activation and IL-1β release was also explored. It is generally recognized that the activation of the NLRP3 inflammasome needs pre-stimulation and activation processes [31]; activation of the NF-κB signaling pathway and up-regulated expression of NLRP3 and pro-IL-1β are included in the pre-stimulation process (Scheme 1) [24]. Three concentrations (1.5 mg/mL, 3 mg/mL, and 6 mg/mL parasin I) of parasin I solution, CPENE, and LPNE Pickering emulsions were incubated with RAW264.7 cells. As shown in Figure 3A,B, when the concentration of parasin I in LPNE increased from 1.5 to 6 mg/mL, the concentrations of IL-1β and NLRP3 decreased significantly, which indicated that LPNE had a noticeable inhibition effect on IL-1β and NLRP3 secretion. Therefore, LPNE showed a dose-dependent inhibition effect on the release of IL-1β and casp-1, indicating that LPNE might hinder the maturation of IL-1β and casp-1. Compared with the control group, the parasin I solution and CPENE showed lower concentrations of IL-1β and casp-1, which indicated that they could inhibit the casp-1 activation and IL-1β release stimulated by LPS and nigericin.
Parasin I showed a higher inhibition effect than CPENE on IL-1β, which illustrated that the Pickering emulsion did not increase IL-1β release in RAW 264.7 cells. These results may be because L-α-lecithin can dock with NEK7, casp-1 and ALK, and 3-glutathione-chitosan can dock with casp-1, ALK and NLRP3. The structure and the interactions with mediators of inflammation are crucial for proinflammation [32]; therefore, the more interactions between the Pickering emulsion and key inflammation compounds, the higher the anti-inflammatory effect of the Pickering emulsion. Compared with parasin I alone, the Pickering emulsion increased IL-1β release, mainly because antimicrobial peptide precipitating from an unstable emulsion promotes the cells' inflammatory response. Besides IL-1β and casp-1 release, cell damage also needs to be analyzed. LDH is an enzyme in the plasma of living cells: when a cell is damaged and the permeability of the cell membrane changes, LDH is released into the culture medium [33], so the enzyme activity in the medium is proportional to the number of lysed cells. In this analysis, the LDH activity was tested to evaluate cell membrane damage in the different groups. As shown in Figure 3C, the LDH activity of the parasin I group was higher than that of CPENE and LPNE. As the concentration of parasin I increased from 1.5 mg/mL to 6 mg/mL, the LDH activity decreased. When the concentration of parasin I in CPENE and LPNE increased from 1.5 to 3 mg/mL, the LDH activity decreased; however, when the concentration increased from 3 to 6 mg/mL, the LDH activity increased. These results indicated that LPNE and CPENE had no dose-dependent effect on the LDH activity. The Pickering emulsions showed lower LDH activity than parasin I, which indicated that they could decrease cell damage, membrane permeability change, and cell toxicity compared with parasin I. This may be because the carrier of the Pickering emulsion and the extra chitosan or lecithin enhanced the anti-inflammatory activity and further relieved the cell damage. The RAW264.7 cells are mouse peritoneal macrophages; to evaluate whether there is also inhibition of inflammation in human cells, the three kinds of Pickering emulsion were tested in the human acute monocytic leukemia cell line (THP-1). In Figure 4A,B, when the concentration increased from 1.5 to 6 mg/mL, parasin I showed a dose-dependent effect on the release of casp-1 and IL-1β, while LPNE had a dose-dependent effect on the release of casp-1.
As for parasin I, with the increase in parasin I concentration, the casp-1 in the parasin I and LPNE groups showed a lower concentration in the cell supernatant. However, the casp-1 concentration increased in CPENE when the concentration of parasin I increased from 3 to 6 mg/mL. This may be due to the instability of CPENE: when the nanoparticle concentration increased, the stability of the oil in water became poor and caused the nanoparticles to precipitate, and the CPENE then showed a poor anti-inflammatory ability. The CPENE Pickering emulsion showed a higher inflammasome inhibition effect in LPS- and nigericin-stimulated RAW264.7 and THP-1 cells, which suggested that the Pickering emulsion had higher inhibition activity on the NLRP3 inflammasome. Udayana Ranatunga et al. [34] showed that the interfacial tension at the oil-water interface can affect the interactions of nanoparticles with other components; therefore, the nanoparticles distributed on the oil-water interface can avoid direct contact between nanoparticles and cells. Moreover, the chitosan nanoparticles may increase the anti-inflammatory activity by binding to inflammasome components. After the anti-inflammatory activity was evaluated, the mechanism of inflammasome inhibition by the Pickering emulsions and parasin I needed to be explored. The activation of the inflammasome needs the stimulus of a cascading signal. The release of potassium is generally recognized to be able to induce the activation of the inflammasome [35,36], and chloride efflux is the downstream pathway of potassium efflux, which can cause the interaction between NEK7 and NLRP3 and activate the inflammasome (Scheme 1) [35]. After inflammasome induction, the inhibitory mechanisms were explored by analyzing the potassium and chloride ion release of THP-1 cells.
As shown in Figure 4E, LPS and nigericin significantly promoted IL-1β release compared with untreated THP-1 cells and THP-1 cells activated by phorbol-12-myristate-13-acetate (PMA). Parasin I, CPENE and PCNE can decrease the release of potassium and chloride ions and further decrease the assembly of NLRP3 and the secretion of IL-1β, which indicates that CPENE and PCNE can enhance the anti-inflammatory activity by decreasing the release of potassium and chloride ions. There was a high content of 3-glutathione-chitosan in CPENE and a high content of chitosan in PCNE. This result is consistent with the molecular docking results, which showed a higher bonding force between 3-glutathione-chitosan and casp-1, ALK and NLRP3. Therefore, the Pickering emulsion may also preserve the membrane potential and osmotic pressure of the macrophage and stabilize the concentrations of potassium and chloride. Immunofluorescence Analysis in Visceral Peritoneum The survival of the different groups is shown in Table 2: survival in the blank control group was 100%, compared with 43.75% in the infected model group 72 h after intraperitoneal injection. The survival ratios of the parasin I and CPENE groups were both 93.75%. The mechanism by which the Pickering emulsions improve the symptoms of inflammation was investigated by monitoring NLRP3 and IL-1β expression. In order to visualize the expression of NLRP3 and IL-1β, an immunofluorescence assay was carried out. In response to the stimulation of E. coli, the peritoneum increased the expression of IL-1β and NLRP3, which were tracked with fluorescently labeled antibodies; the intensity of the fluorescently labeled antibodies was lower in the control group. The morphology of the peritoneum showed NLRP3 expression in mononuclear cells infiltrating the peritoneal membrane of Kunming mice with peritonitis. The IL-1β receptor (IL-1R1) was located on CD11B+ peritoneal mononuclear inflammatory cells. As shown in Figure 5, at 24 h there were fewer CD11B+ peritoneal mononuclear inflammatory cells; however, IL-1β had a higher expression level in the model group than in the PCNE group. NLRP3 also had a higher expression in the model group at 24 h than at 12 h. At 72 h, there were fewer peritoneal mononuclear inflammatory cells (Figure 6). NLRP3 and IL-1β both had higher expression except in the control and PCNE groups, and NLRP3 had a relatively higher expression in the model group than in the parasin I and CPENE groups. These results indicated that PCNE and LPNE decreased inflammation during the treatment of peritonitis from 24 h to 72 h; they can improve the function of the visceral peritoneum by decreasing the IL-1β and NLRP3 expression. Different letters ("a", "b" or "A"-"C") indicate significant differences (p < 0.05). Pickering Emulsion Preparation and Quantification The methods of thiolated chitosan preparation and quantification, peptide-embedded chitosan matrices preparation, peptide-conjugated chitosan matrices preparation, chitosan peptide-embedded nanoparticles (CPEN), parasin I-conjugated chitosan nanoparticles (PCN) and lipid/peptide adduct and lipid/peptide nanoparticles (LPN) preparation, and Pickering emulsions preparation were described in detail in our previous study [22]. Molecular Docking Discovery Studio 2017 was used to perform molecular docking. The 3D structures of never in mitosis gene A-related kinase 7 (NEK7), casp-1, anaplastic lymphoma kinase (ALK), NLRP3, L-α-lecithin, and chitosan were downloaded from PubChem (https://pubchem.ncbi.nlm.nih.gov/ (accessed on 1 March 2021)).
The 3D structures of chitosan-GSH and chitosan-parasin I were drawn with ChemBioDraw Ultra 14.0 (ChemBioOffice Ultra 14.0 suite, PerkinElmer Inc., Akron, OH, USA). Parasin I, chitosan, chitosan-parasin I, and chitosan-GSH were docked as ligands with NEK7, ALK, casp-1, or NLRP3 using the Dock Ligands (CDOCKER) protocol, and the lowest energies and the docking sites in the proteins were calculated. The parameters in the molecular docking were set to the default values. Inflammatory Cell Induction To stimulate the NLRP3 inflammasome, frozen mouse mononuclear macrophage leukemia (RAW 264.7) cells or human acute monocytic leukemia (THP-1) cells were resuscitated and passaged, and the cells in the logarithmic growth phase were inoculated into six-well cell culture plates and cultured overnight in a 37 °C, 5% CO2 incubator. The culture media for RAW 264.7 and monocyte-derived macrophage THP-1 cells were DMEM + 10% FBS + 1% penicillin-streptomycin solution and RPMI 1640 + 10% FBS + 1% penicillin-streptomycin solution, respectively. The cells were incubated with LPS (50 µg/mL) for 3 h, subsequently with CPENE, PCNE, or parasin I for 0.5 h, and then with nigericin (10 µM) for 0.5 h. The blank group was regarded as the control group (RAW 264.7 or THP-1 treated with LPS (50 µg/mL) for 3 h and nigericin (10 µM) for 0.5 h). The cell supernatant was obtained after centrifuging for 10 min at 3000 rpm and was prepared for the following measurements. The activity of lactate dehydrogenase (LDH) was analyzed with an LDH assay kit using the cell supernatant. Chloride Ion Concentration Detection The chloride ion concentration in the supernatant of RAW 264.7 and THP-1 cells was measured with a chloride ion assay kit (Nanjing Jiancheng Bioengineering Institute (Nanjing, China), C003-2). A volume of 10 µL of deionized water, chloride standard solution, or cell supernatant was added to 250 µL of mercury thiocyanate working solution. After 5 min, the OD value was recorded at 480 nm using a microplate reader. The chloride ion concentration of the supernatant was calculated from the standard curve. Potassium Ion Concentration Detection The potassium ion concentration in the supernatant of RAW 264.7 and THP-1 cells was measured with a potassium ion assay kit (Nanjing Jiancheng Bioengineering Institute (Nanjing, China), C001-2). A volume of 50 µL of deionized water, standard potassium solution, or cell supernatant was added to 200 µL of NA-TPB working solution. After 5 min, the OD value was recorded at 450 nm using a microplate reader. The potassium ion concentration of the supernatant was calculated from the standard curve.
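Both ion assays above convert an OD reading to a concentration through a standard curve; the sketch below shows that calculation with a simple linear fit, where the standard concentrations and OD values are made-up examples rather than kit data.

```python
import numpy as np

# Hypothetical standards for the chloride assay (OD read at 480 nm).
std_conc = np.array([0.0, 25.0, 50.0, 100.0])   # e.g. mmol/L
std_od   = np.array([0.02, 0.18, 0.35, 0.71])   # example OD readings

# Fit the calibration line OD = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_od, 1)

def concentration(od):
    """Invert the calibration line to estimate concentration from a sample OD."""
    return (od - intercept) / slope

print(round(concentration(0.40), 1))  # sample OD -> estimated mmol/L
```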
Therapy of Mice Peritonitis Model The therapy and peritonitis modelling were carried out according to the methods of our previous studies. The 112 Kunming mice (7 weeks old, 7 groups each containing 8 male and 8 female animals, 18-22 g) were purchased from the laboratory animal center of China Three Gorges University. After being bred under 12 h light/dark cycles with access to standard meals and water, the mice were intraperitoneally injected with E. coli ATCC 25922 (1 × 10^5 CFU/mL, 10 mL/kg), except for the blank control group. The dosage of E. coli was chosen following a preliminary experiment. After 1 h, the parasin I solution, ciprofloxacin (CPFX) solution, and the three Pickering emulsions (100 mg/mL) were injected intraperitoneally. Mice injected with saline served as the negative control group. The survival of the mice was recorded after 6, 12, 24, and 72 h. The experiments were conducted following the National Research Council Guide for the Care and Use of Laboratory Animals and were approved by the Hubei Academy of Preventive Medicine Ethics Committee. The assigned accreditation number of the laboratory was 20191825. Tissue Staining and Immunohistochemistry Tissue samples were first fixed in 4% paraformaldehyde phosphate tissue fixative. After fixation, the samples were washed with phosphate buffer solution and then immersed in a series of ethanol solutions of increasing concentration for dehydration. The samples were then further dehydrated overnight using an automatic tissue dehydrator. Subsequently, the tissue samples were embedded in paraffin on a paraffin embedding machine and cooled on the cold table. The paraffin sections of the peritoneum were dewaxed and washed with water, and antigen retrieval was performed with EDTA antigen repair buffer (pH 8.0) in a microwave oven. The sections were circled with a histochemical pen and blocked with hydrogen peroxide and BSA. The primary antibodies (CD11B + NLRP3 or CD11B + IL-1R, Servicebio, bs-20697r) and the HRP-labeled secondary antibodies (HRP RAB CY3TSA 488 goat anti-rabbit, Servicebio, Gb25303, Wuhan, China) were added to the sections in sequence and incubated for 30 min and overnight, respectively. After incubation, the sections were heated in EDTA antigen repair buffer (pH 8.0) in the microwave to remove the primary and secondary antibodies. Then the second set of primary and secondary antibodies was added to the slices, and DAPI was used to stain the nuclei. After autofluorescence quenching and sealing with anti-fluorescence mounting medium, the sections were analyzed using a fluorescence microscope. Conclusions The food-derived Pickering emulsions stabilized by antimicrobial peptide nanoparticles were tested as therapeutic agents for bacterial infectious diseases in silico, in vitro and in vivo. CPENE significantly inhibited the NLRP3 activation and the maturation of IL-1β in RAW 264.7 and THP-1 cells. LPNE showed a dose-dependent inhibition effect on the release of IL-1β and casp-1. When the concentration of parasin I increased from 1.5 mg/mL to 3 mg/mL, the LDH activity decreased in the CPENE and LPNE groups; however, from 1.5 to 6 mg/mL, LPNE had a dose-dependent effect on the release of casp-1. CPENE and PCNE can decrease the release of potassium and chloride ions. At 24 h, there were fewer CD11B+ peritoneal mononuclear inflammatory cells in all peritoneum groups. Therefore, LPNE may inhibit the activation of the inflammasome by decreasing LDH activity and potassium and chloride ion release through binding with components of NLRP3.
6,283.2
2022-09-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Sedum pallidum (Crassulaceae) – alien species of the flora of plain part of Ukraine The objective of this study was to evaluate the current distribution of Sedum pallidum in Ukraine and to analyze its status in the alien flora of Ukraine. Material and methods. The studies were conducted in 2008–2019 in the plain part of Ukraine and the Crimean Mountains. Literature information, several national herbarium collections, and other sources were analyzed. Special attention was paid to the delimitation of synanthropic locations of S. pallidum from cultivated ones. Results. S. pallidum is a sub-euxine species whose range occupies the Crimean Mountains. It is widely cultivated throughout Ukraine and is prone to naturalization thanks to its vegetative and generative reproduction. In general, about 30 synanthropic locations of S. pallidum have been recorded, mainly in the Middle Prydniprovia and Western Ukraine. Urban lawns and roadsides on light substrates are favorable ecological niches for S. pallidum. Conclusions. S. pallidum is an alien species in the flora of the plain part of Ukraine and an ergasiophyte in its origin. The potential secondary synanthropic range of this species occupies the whole country except the Carpathian highlands. It has been established that S. pallidum in the culture of the Forest-Steppe is a perennial herb. The two races identified in its composition (var. pallidum and var. bithynicum) are probably ecads and have no systematic importance. In culture, S. pallidum is characterized by successful vegetative and generative reproduction, which contributes to its naturalization. S. pallidum is often confused with other species of the genus, which does not facilitate its study in adventive floras; a key for S. pallidum determination has therefore been proposed. Introduction The timely detection of new alien plant species is a contemporary pressing issue. During acclimatization and naturalization, many alien plant species escape beyond the places of cultivation and replenish the synanthropic flora. The number of escaped plants has increased rapidly in recent decades, causing concern (Protopopova & Shevera, 2012, 2013; Shynder, 2019b). The family Crassulaceae J. St.-Hil. is characterized by a great diversity of cultivated species, and many species of the family naturalize and form secondary synanthropic ranges (Byalt, 2011). Sedum pallidum M. Bieb. belongs to such species: it is widely handled as an ornamental plant throughout the plain part of Ukraine and is prone to naturalization. There is no reliable information about the naturalization of S. pallidum in Ukraine, so the establishment of the synanthropic range of this species is relevant. Hence, the objective of this study was to clarify the current distribution of S. pallidum in Ukraine and to analyze its status in the alien flora of Ukraine. Material and methods The floristic and comparative-morphological studies of S. pallidum in natural, introduction, and synanthropic habitats were conducted in 2008-2019 in the plain part of Ukraine and the Crimean Mountains. Some plants were introduced to the M.M. Gryshko National Botanical Garden of the National Academy of Sciences of Ukraine for exact determination. Herbaria of KW, KWHA, KWU, LWS, and LWKS (see Index Herbariorum (Thiers, 2020) for abbreviations), literature, and other sources were analyzed to summarize the chorological information (Wulf, 1960; Shynder et al., 2018; Shynder, 2019b; UkrBIN, 2020).
Classification of groups of alien plants follows published sources (Protopopova & Shevera, 2012; Shynder, 2019a). An important task was the separation of the synanthropic habitats of S. pallidum from the places of its cultivation. As we showed before (Shynder, 2019a), artificially planted plants that persist after care of them is abandoned (so-called "relicts of culture") should not be attributed to spontaneous habitats. Sometimes plantations of S. pallidum sprawl and form a carpet cover over a large area, similar to a spontaneous population, even where the flowerbed itself no longer exists; but the cultural origin of such colonies can be identified by the remains of old beds and other representatives of the cultural flora. As synanthropic (adventive), we refer to those populations located at some distance from flowerbeds and composed of many individuals and their groups, often covering a large area (sometimes several dozen ares). Results and discussion Sedum pallidum is a sub-euxine species with a range covering the Crimean Mountains, the southeastern part of the Balkan Peninsula, Asia Minor, the South Caucasus, North-Western Iran and Cyprus (Wulf, 1960; Chamberlain & Muirhead, 1972; Byalt, 2001; Hart & Eggli, 2003). In the Crimean Mountains, S. pallidum is at the northern limit of its natural range. Here, according to herbarium data, literature sources (Wulf, 1960), and our own field observations, we have identified 70 locations of the species that outline the Crimean part of its range (Fig. 1). Most of the Crimean locations of S. pallidum are concentrated in the western part of the main mountain range; in other parts of the Crimean Mountains the species occurs only sporadically. The natural habitats of S. pallidum are rocky substrates with moderate moisture (rocks and scree), and the species sometimes occurs in light forests. Compared with the close species S. hispanicum L., the habitats of S. pallidum are moister (Wulf, 1960; Byalt, 2001). The species is widely cultivated in many countries of the Holarctic as a ground-covering perennial plant and is prone to naturalization. S. pallidum often occurs wild in Southern Scandinavia and Central Europe (Byalt, 2011; Bomble & Wolgarten, 2012; Hohla, 2018). In Eastern Europe, this species sometimes escapes beyond the places of cultivation (Byalt, 2001). It is also naturalized in China and Japan (Byalt, 2011). Probably one of the adaptations of S. pallidum to successful growth in conditions of high humidity is guttation: under the conditions of Kyiv, guttation in S. pallidum is quite pronounced in comparison with other species of the family, occurring even when it is not observed in the other species. The biomorphological characteristics of the species differ among published sources. Wulf (1960) attributed S. pallidum to annuals and biennials. A number of authors distinguish two biomorphological races within S. pallidum s.l.: a) the annual typical var. pallidum (= subsp. pallidum) with few or no shoots, which never overwinter; b) the perennial var. bithynicum (Boiss.) D.F. Chamb. (= subsp. bithynicum (Boiss.) V.V. Byalt) with numerous sterile overwintering shoots. Only annual plants have been described for the Crimean Mountains (Chamberlain & Muirhead, 1972; Byalt, 2001). Byalt (2001) attributed S. pallidum to annuals or biennials; he described both races as cultivated plants in Eastern Europe and noted that subsp. bithynicum is cultivated more often.
Hart & Eggli (2003) placed S. bithynicum in synonymy with S. pallidum and characterized the species as perennial. These authors also noted that all sterile shoots can periodically form inflorescences, in which case the plants become monocarpic. We tend to regard S. pallidum as a vegetatively mobile herbaceous perennial that forms monocarpic generative shoots and is prone to particulation. In 2007 and 2008, we introduced living plants of S. pallidum from different locations on the Crimean yaylas to the M.M. Gryshko National Botanical Garden of the NAS of Ukraine. All plants took root well and over several years formed a thick carpet in which sterile overwintering shoots prevailed numerically; in the following year, some of them formed generative shoots (Fig. 2). Generative shoots wither after flowering, often together with the tillering nodes (especially under drought conditions), which leads to the particulation of the maternal individual. However, with sufficient moisture, the lower part of the withered stems can persist and become a short-lived rhizome. Probably, in the dry habitats of the Crimean Mountains, plants cannot always form a sufficient number of sterile shoots: in the Crimean habitats that we observed, S. pallidum plants formed mainly generative shoots and were not prone to intense vegetative overgrowth. Thus, the plants of the Crimean geographical population of S. pallidum are only facultative annuals, if annuals at all. Hence, the races identified within S. pallidum are not independent taxonomic units but simply ecads. Under the conditions of Kyiv, cultivated plants successfully produced self-seeding: seedlings appear in September and October, and the rosettes increase in size before winter. Thus, under Forest-Steppe conditions, S. pallidum is characterized by successful vegetative and generative reproduction. The rapid vegetative sprawl and the attractive greenish-glaucous color of the vegetative parts have contributed to the popularity of S. pallidum for landscaping large areas, primarily municipal sites. In places of cultivation, S. pallidum is characterized by durability and often extends into adjacent areas, where it successfully establishes itself. Although S. pallidum has become a trivial species in urban landscapes, until recently it was not usually cited in the lists of the alien flora of cities and regions of Ukraine. However, after revision of herbaria and published material, it was found that S. pallidum had sometimes been reported, but under other names (Kagalo et al., 2004; Kuzyarin, 2012; Doyko & Katrevych, 2015). The list of secondary synanthropic locations of S. pallidum in the plain part of Ukraine is given in the Appendix. We have also repeatedly registered persistent colonies of cultural origin near abandoned or existing places of S. pallidum cultivation and flowerbeds; for example, in Rudnytsia (Vinnytsia region), Rzhyshchiv (Kyiv region), Kaniv, Smila, Talne, and Uman (Cherkasy region, in all cases in parks), and Dzhankoy (Crimea). It should be noted that locations of S. pallidum of obviously introduced origin were also found in the seaside part of the Southern coast of Crimea (according to O.F. Levon); still, the secondary range of this species can be separated from the natural one only in the Steppe Crimea. Thus, there are currently about 30 synanthropic locations of S. pallidum, most of which are recorded from the Middle Prydniprovia and Western Ukraine (Fig. 3).
Urban lawns and roadsides on sandy and other light substrates are favorable ecological niches for S. pallidum. Their regular mowing promotes the vegetative dispersal of the species (especially in regions with sufficient moisture) and eliminates competition from tall herbaceous plants. Most of the synanthropic S. pallidum locations were recorded in such habitats, where the species was at the beginning of an expansion. In some other ecotopes, for example on railway embankments, sandy roadsides of forest roads, and forest edges, S. pallidum was recorded rarely and was only at the stage of initial expansion and establishment. In all cases, curtains and colonies of S. pallidum developed successfully on well-aerated substrates. Today, there is every reason to believe that S. pallidum participates in the floras of all large and many medium-sized cities of Ukraine as an adventive element, so the potential secondary synanthropic range of this species covers the whole of Ukraine except the Carpathian highlands. But outside of urbanized landscapes, not all areas of Ukraine may be suitable for S. pallidum naturalization. For example, in the southern belt of the Right-bank Forest-Steppe, S. pallidum occurs in culture quite rarely, and we have not recorded any of its synanthropic habitats there. The rarity of S. pallidum in culture in this terrain is explained by the absence of large cities and the prevalence of traditional landscaping methods in agrarian areas (without rockeries and carpet flowerbeds). In addition, the ecological conditions of the southern belt of the Forest-Steppe (soils of heavy texture, lack of atmospheric moisture, frequent dominance of turfy steppe and tall adventive plant species in ruderal ecotopes) are probably much less favorable for S. pallidum naturalization than those of the northern belt of the Middle Prydniprovia and Western Ukraine. Thus, S. pallidum is an ergasiophyte in its origin, which under current conditions has become a part of the adventive flora of Ukraine. The first synanthropic finding of this species dates to 2001, so by time of immigration it is a eukenophyte, i.e., an alien species that has arrived since the end of the XX century (Protopopova & Shevera, 2012). In synanthropic habitats, by degree of naturalization, S. pallidum behaves as a typical colonophyte (a species that forms stable local populations in synanthropic habitats). In some urban floras, such as those of Kyiv and Lviv, it demonstrates a tendency to expand further and is gradually passing into the group of epekophytes (species fully naturalized in anthropogenic habitats). At the moment, S. pallidum should be considered an unstable element of the flora; but in some urban areas this species is already sufficiently entrenched in many secondary ecotopes and is a potentially expansive species. Due to its current distribution and ecological features, S. pallidum may spread widely in the ruderal and semi-natural ecotopes of Polissya and Western Ukraine in the future. In more arid regions with fertile soils (Chernozems), its distribution will probably remain localized and correlated with urban areas. To clarify the species composition of regional adventive floras, florists should pay attention to S. pallidum. In many cases, it is also necessary to check the correctness of determination of Sedum specimens preserved in collections.
As noted above, this species is widely represented as an escaped plant in our flora, but it is often misidentified as S. album, S. hispanicum, or S. lydium. After acquaintance with many private and scientific collections in Ukraine, we concluded that S. pallidum is one of the most common species of the genus Sedum in cultivation, yet in such collections it is mostly unnamed or provided under wrong names. S. pallidum is officially listed for only two collections of arboretums and botanical gardens, while S. lydium is mentioned for seven collections (Mashkovska, 2015). Taking into account that we have never (!) met living plants of S. lydium (at least at the flowering stage), we can assume that this is the result of numerous misidentifications. Identification of Sedum s.l. can be problematic: the Asian S. lydium is mostly not included in published floras and identification keys in Europe, and Byalt (2001) attributes S. pallidum to annuals or biennials, which can also lead to doubts during identification. As noted above, under the ecological conditions of the plain part of Ukraine, S. pallidum develops as a perennial plant. Since S. pallidum has so far been confused with S. album, S. hispanicum, and S. lydium, we outline the main features distinguishing these species (Fig. 4). The key for identification of Sedum pallidum and frequently confused species in the flora of Ukraine:
1. Plants annual, glandular-pubescent; sterile shoots absent, or single and not rooted; flowers (5) 6-7 (9)-merous; rare in culture …………………. S. hispanicum
-. Plants with numerous sterile shoots that are rooted; flowers 5-merous ……………………… 2
2. Plants bright greenish-glaucous, sometimes with anthocyanin lower leaves; flowering stems low, glandular-pubescent; inflorescence branching at the level of the sterile shoots, diffuse, consisting of several divergent branches; widespread …………………… S. pallidum
-. Plants dark green or green (not glaucous), often burgundy or, rarely, with red lower leaves; flowering stems rising high above the sterile shoots; inflorescences compact, rounded; plants completely glabrous ……………………………………. 3
3. Turfs loose; sterile shoots of various lengths; plants dark green, often burgundy; inflorescences of 20-50 flowers, loose; petals 3-4 times longer than sepals; widespread ………………. S. album
-. Turfs dense; plants green, but the lower leaves often red, or the tips of the leaves red; inflorescences of 5-20 flowers, dense; petals 1.5-2 times longer than sepals …………………... S. lydium
During the flowering period, distinguishing these species is generally not a problem. In general, S. pallidum is a widespread ground-covering plant that forms light greenish-glaucous curtains, sometimes with anthocyanin (purple) spots, and its inflorescences are always glandular-pubescent. S. hispanicum is glaucous-green, often with a gray and anthocyanin-burgundy tinge; it is a glandular-pubescent annual that does not sprawl and is rare in collections. S. album is a widespread perennial that forms dark green carpets, often with a burgundy tinge (or entirely burgundy, but not purple). S. lydium is close to S. album and is characterized by dense, brighter green curtains and often red lower leaves or reddish tips on the upper leaves. Researchers should note that S. album and S. lydium are entirely glabrous plants. The distribution of S. lydium in culture in Ukraine requires further critical study.
According to literature sources, S. lydium is a rather widespread species in culture (Byalt, 2001; Mashkovska, 2015), but the sedums that we have observed in private collections under the names S. lydium and S. lydium 'Glaucum' were found to belong to S. pallidum.

Conclusions. Sedum pallidum is an alien species and an unstable element of the flora of the plain part of Ukraine, an ergasiophyte in its origin. It is widely cultivated throughout the country. It has been established that S. pallidum in culture in the Forest-Steppe is a perennial herb, and the two races identified within it (var. pallidum and var. bithynicum) are probably ecads of no systematic importance. In culture, S. pallidum is characterized by successful vegetative and generative reproduction, which significantly supports its naturalization. In the urban floras of Kyiv and Lviv, this species tends to expand and is becoming an epekophyte, but in most other regions it remains at the colonophyte stage. In collections and floristic lists, the name S. pallidum is widely misapplied, which leads to confusion; hence, we developed a key for the identification of S. pallidum. The potential secondary synanthropic range of this species occupies the whole country except the Carpathian highlands.
Improving Energy Adaptivity of Constructive Interference-Based Flooding for WSN-AF. Constructive interference (CI) is a synchronous transmission technique in which multiple senders transmit the same packet simultaneously in wireless sensor networks (WSNs). CI enables fast and reliable network flooding, which reduces the scheduling overhead of MAC protocols, achieves accurate time synchronization, improves the link quality of lossy links, and enables efficient data collection. By achieving microsecond-level time synchronization, Glossy realizes millisecond-level CI-based flooding with 99% reliability. However, Glossy produces substantial unnecessary data forwarding, which significantly reduces the network lifetime. This is a critical problem, especially in energy-limited large-scale wireless sensor networks for agriculture and forestry (WSN-AF). In this paper, we present an energy-adaptive CI-based flooding protocol (EACIF) that exploits CI in WSN-AF. EACIF uses a distributed active nodes selection algorithm (ANSA) to reduce redundant transmissions, thereby significantly reducing energy consumption and flooding latency. We evaluate the performance of EACIF both on real data traces and on uniformly distributed topologies. Simulation results show that EACIF achieves almost the same packet reception ratio (PRR) as Glossy (e.g., 99%) while reducing energy consumption by 63.96%. EACIF also reduces flooding latency by 25%. When the packet interval is 30 seconds, EACIF achieves a 0.11% duty cycle.

Introduction. With the emergence of the Internet of Things (IoT) [1, 2], wireless sensor networks for agriculture and forestry (WSN-AF) are becoming a research hotspot. WSN-AF systems can provide real-time field data, including environmental temperature, relative humidity (RH), O2 concentration, CO2 concentration [3], and ethylene (C2H4) concentration. The implementation of a WSN-AF system requires forwarding data [4], synchronizing sensor nodes [5], creating data collection trees [6], locating monitored objects [7], reprogramming revised code [8], and so forth. All these common WSN applications depend on the service of network flooding, which propagates a packet through the whole network. Power consumption, latency, and reliability are three critical factors of flooding in wireless sensor networks (WSNs). Recently, constructive interference (CI)-based flooding has emerged as a latency-optimal, high-reliability flooding technique. As a representative CI-based flooding protocol, Glossy [9] realizes a 99% packet reception ratio (PRR). Real testbed (MoteLab, Twist, and local) experiments show that Glossy achieves millisecond-level flooding latency, almost 100 times faster than traditional flooding solutions [10, 11]. A question then arises: is Glossy energy-efficient? Unfortunately, Glossy makes all nodes participate in data forwarding, which directly triggers an enormous number of superfluous transmissions, eventually leading to unnecessary energy consumption and reduced network lifetime. This is a critical problem, especially in energy-limited large-scale WSNs. One effective solution is to reduce redundant transmissions while maintaining the advantages of CI-based flooding. A concurrent transmission scenario of CI-based flooding is illustrated in Figure 1. In this example, five candidate senders s1, ..., s5 forward an identical packet to the same receiver simultaneously, which is the most common behavior of CI-based flooding.
To forward the packet as soon as possible, Glossy keeps all five senders in radio-on mode (dashed arrows in Figure 1), which is clearly unnecessary. In fact, an optimized forwarding set can be selected via lightweight communication between neighbors. Suppose the residual energies of the five nodes are Er1, Er2, Er3, Er4, and Er5, and that each candidate sender knows the residual energy of itself and its neighbors: N1 = {Er1, Er5, Er4}, N2 = {Er2, Er3, Er4}, N3 = {Er2, Er3, Er5}, N4 = {Er1, Er2, Er4}, and N5 = {Er1, Er3, Er5}. Assume Er1 > Er2 > Er3 > Er4 > Er5. Each node independently selects, from its set Ni (i = 1, ..., 5), the node with the largest residual energy. For the example in Figure 1, s1 and s2 are then selected as the final senders, while s3, s4, and s5 can keep silent to save energy (a minimal code sketch of this rule is given at the end of this section). As shown by the red arrows in Figure 1, we achieve the same flooding performance while saving 60% of the energy consumed by Glossy. In particular, when all nodes have the same residual energy in the initial state, there are two disjoint forwarding sets {s1, s2} and {s4, s5}, and network lifetime can be greatly extended by energy-adaptively scheduling either {s1, s2} (red arrows in Figure 1) or {s4, s5} (green arrows in Figure 1) as the practical forwarding set. Inspired by this insight, we propose an energy-adaptive CI-based flooding protocol (EACIF) for WSN-AF. EACIF realizes energy-adaptive CI-based flooding through energy scheduling and topology control. Compared with state-of-the-art CI-based flooding protocols, EACIF achieves lower energy consumption, lower latency, and the same coverage. Our contributions are as follows: (i) We are the first to propose a lightweight distributed energy-adaptive CI-based flooding protocol (EACIF) for WSNs. EACIF adapts to changes in residual energy across the network and selects an optimized forwarding set via communication between neighbors. (ii) EACIF constructs a sparse topology for CI-based flooding via an approximate minimum connected dominating set (MCDS) algorithm, which completes in approximately O(n) time. (iii) EACIF proposes a k-covered mode switching algorithm (KMSA) for k-covered unit disk graphs and theoretically derives the energy-saving formula for CI-based flooding. (iv) We simulate EACIF on both uniformly distributed and real topologies. Simulation results show that EACIF achieves the same reliability as Glossy while reducing energy consumption by 63.96% on the real trace. The rest of the paper is organized as follows. We review related work in Section 2. In Section 3, we analyze the process of CI-based flooding and present the active nodes selection algorithm (ANSA) and the k-covered mode switching algorithm (KMSA). We evaluate the performance of EACIF in Section 4 and conclude the paper in Section 5.
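The neighbor-max selection rule sketched above can be made concrete. The following Python fragment is a minimal illustrative sketch, not the authors' implementation: the node identifiers, the neighbor tables, and the residual-energy values reproduce the hypothetical example of Figure 1, and the tie-breaking rule is an assumption.

```python
# Minimal sketch of the neighbor-max forwarding-set selection described above.
# Each candidate sender i knows the residual energy of itself and its neighbors
# (set N_i) and independently votes for the node with the largest residual
# energy; nodes that receive at least one vote stay active, the rest keep silent.

residual = {1: 5.0, 2: 4.0, 3: 3.0, 4: 2.0, 5: 1.0}   # Er1 > Er2 > ... > Er5
neighbors = {1: [1, 5, 4], 2: [2, 3, 4], 3: [2, 3, 5],
             4: [1, 2, 4], 5: [1, 3, 5]}              # sets N_i from the example

def vote(i):
    # pick the known node with maximum residual energy (ties: smaller node id)
    return max(neighbors[i], key=lambda j: (residual[j], -j))

active = {vote(i) for i in neighbors}                  # forwarding set
silent = set(neighbors) - active
print(active, silent)                                  # {1, 2} {3, 4, 5}
```

Running the sketch reproduces the outcome described in the text: s1 and s2 form the forwarding set, while s3, s4, and s5 stay silent.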
Related Work. Constructive interference (CI) is a physical-layer phenomenon first experimentally observed by Dutta et al. [12]. Subsequently, CI was used in backcast [13] to avoid the broadcast storm problem. CI can be achieved by sub-microsecond time synchronization of all coordinated senders. A typical scenario of CI is shown in Figure 2: two senders and one receiver, all employing IEEE 802.15.4 radios. Two packets carrying the same data are transmitted from the two senders to the receiver, which can decode them as one packet if their maximum temporal displacement Δ is no more than one chip period Tc = 0.5 µs. By implementing chip-level time synchronization on IEEE 802.15.4 radios, CI can greatly improve the efficiency of concurrent transmission in WSNs. Compared with typical concurrent transmission technologies such as the capture effect (CE) [14] and message-in-message (MIM) [15], CI neither requires the packet with the strongest signal to arrive first nor requires additional hardware for detecting signal strength. Recently, TriggerCast [16] improved CI technology by compensating synchronization errors caused by propagation delay and realizing link selection under lossy-link conditions. CI requires that multiple senders simultaneously transmit the same packet; this behavior is consistent with the characteristics of network flooding. Besides CI-based flooding, opportunistic flooding [17] also achieves fast network flooding by constructing a tree-based topology. Earlier techniques such as CF [11] and RBP [10] schedule data forwarding through link-quality assessment. Moreover, when multiple sensor nodes simultaneously send an identical packet to the same destination, existing flooding solutions (RBP [10], FLASH [18], and CF [11]) need to dispatch the transmission order; otherwise, mutual interference results. This dispatch overhead cannot be neglected in a large WSN-AF system. Fortunately, CI-based flooding eliminates the redundant dispatch overhead while increasing flooding speed. Glossy [9] achieves the synchronization condition of CI (Δ ≤ 0.5 µs) by capturing interrupts of the IEEE 802.15.4 radio. Accurate synchronization guarantees the high reliability of CI-based flooding. However, Glossy produces considerable redundant packets during the flooding process, which leads to massive unnecessary energy consumption. LWB [19], Chaos [20], and Choco [21] build level-scheduling mechanisms for data collection or dissemination on top of Glossy. These works [19-21] achieve low duty cycles and efficient network flooding; however, they do not fundamentally change the transmission mechanism of Glossy, which is what causes the unnecessary energy consumption. Splash [22] forms a parallel pipeline by scheduling channel switches between nodes on adjacent layers and can achieve higher throughput than Glossy. However, frequent channel switching increases the cumulative synchronization error, which decreases synchronization accuracy and reliability; meanwhile, channel scheduling consumes more energy. Recently, Wang et al. [23] proved that Glossy has a scalability problem: the PRR of Glossy is inversely related to the number of hops along independent paths, because independent paths increase cumulative synchronization errors. For example, the GreenOrbs project [3] is designed to deploy 1000 nodes in Tianmu Mountain, Jiaxing, China, for forest carbon sink monitoring. To achieve accurate data acquisition, GreenOrbs deploys 8-20 neighbors (depending on the diversity of each node's communication range) for every sensor node. The scalability problem is even more serious in such a dense WSN-AF system. Wang et al. also proposed the SCIF protocol, which leverages a grid topology to reduce the number of independent paths; however, the grid topology increases the path length of CI-based flooding and therefore increases the network delay. CX [24] uses changes in the relay count to conduct link selection, reducing to some extent the number of nodes involved in transmission. Due to the randomness of the relay count, CX incurs considerable computational overhead and energy consumption. Compared with the above CI-based flooding protocols, EACIF is an energy-adaptive CI-based flooding mechanism.
EACIF achieves high reliability together with low latency and low energy consumption.

EACIF Implementation. Enlightened by SCIF [23], we find that a sparse topology can help achieve energy-adaptive CI-based flooding. However, some challenges remain. The first is to guarantee full network coverage while achieving fast network flooding and a high packet reception ratio (PRR). The second is to design a lightweight distributed energy-scheduling algorithm for redundant cover sets. In this section, we introduce the implementation of EACIF. We analyze the state migration of CI-based flooding in Section 3.1. Then we introduce the topology control algorithm in Section 3.2, followed by the design of the k-covered mode switching algorithm in Section 3.3. Moreover, we theoretically analyze the energy consumption of EACIF in Section 3.4.

The Analysis of CI-Based Flooding. CI-based flooding is a network flooding technique based on a hierarchical (layered) model. In the first round, the initiator broadcasts a packet to its one-hop neighbors. The one-hop neighbors then forward the received packet to the initiator and to the two-hop neighbors in the second round. Every other round, the nodes on each layer forward the flooding packet once, and each node stops forwarding once its transmission count reaches the transmission threshold. The key factors of CI-based flooding are listed in Table 1: a send event marks a node starting to transmit a packet, and a receive event marks the reception of a packet. When flooding begins, only the initiator may send. SF denotes the event of a node becoming the initiator and SR the event of a node becoming a receiver; these events are triggered only at the beginning of the network flooding process. Stop represents the external event that terminates network flooding. N is the transmission counter, which records the number of transmissions of the current node, and N_max is the transmission threshold. A failure event indicates that a node fails to receive the packet, and the relay count is incremented by one after each successful reception. As Figure 3 shows, CI-based flooding has four correlated states: Radio-off, Standby, Send, and Receive. When the SF event happens, the initiator moves from the Radio-off state to the Send state and transmits the flooding packet to its one-hop neighbors. When the SR event happens, all nodes except the initiator enter the Standby state to wait for incoming packets. When a receive event happens, a node switches to the Receive state and starts decoding the received packet. If reception succeeds, N is incremented and the node shifts to the Send state for a new transmission round; if reception fails (Δ ≥ 0.5 µs), the node returns to the Standby state for the next receiving round. After a send event completes, the node has two candidate states: Standby and Radio-off. If N is less than N_max, the node returns to Standby to wait for the next receive event; once N reaches N_max, the node turns to Radio-off. Nodes thus loop through Standby → Receive → Send until the flooding ends; each cycle contains two "milliampere-level" states and one "microampere-level" state. As Figure 3 shows, each node in EACIF has two interconvertible modes: active and passive. In active mode, a node follows the same state-conversion schedule as in Glossy, while a node in passive mode need not forward the received packet: once it has received the packet successfully, it turns to the Radio-off state to save energy; if reception fails, it turns to the Standby state to await the next receive event. A sketch of this per-node behavior follows.
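The following Python sketch models one node's mode and state transitions as just described. It is an illustrative reconstruction, not the authors' code: the event names follow the Glossy-style notation used in the text, the transient Receive state is folded into the receive handler, and N_max is an arbitrary value.

```python
# Illustrative sketch of the per-node state machine for CI-based flooding.
# States: STANDBY, SEND, RADIO_OFF (the transient RECEIVE state is folded
# into on_receive); modes: active / passive.

N_MAX = 3  # transmission threshold (Glossy-style parameter, illustrative)

class Node:
    def __init__(self, initiator=False, active=True):
        self.active = active          # passive nodes do not forward
        self.n = 0                    # transmission counter N
        self.state = "SEND" if initiator else "STANDBY"

    def on_receive(self, success):
        assert self.state == "STANDBY"
        if not success:               # reception failed: wait for next round
            return
        if not self.active:           # passive mode: radio off after one reception
            self.state = "RADIO_OFF"
            return
        self.n += 1                   # active mode: forward in the next round
        self.state = "SEND"

    def on_sent(self):
        assert self.state == "SEND"
        # keep relaying until the transmission threshold is reached
        self.state = "STANDBY" if self.n < N_MAX else "RADIO_OFF"
```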
Obviously, the more nodes work in passive mode, the more energy can be saved. The scenario in Figure 4 illustrates the flooding process of EACIF when k = 1. Consider an evenly distributed WSN with 25 nodes, in which one node is the initiator, four nodes work in active mode, and the remaining twenty nodes work in passive mode. When the first transmission round begins, the initiator broadcasts a packet to its eight one-hop neighbors (the four active nodes and four passive ones). Upon receiving the packet, the four passive one-hop neighbors enter the Radio-off state to save energy, and only the four active nodes participate in the second transmission round; all sixteen peripheral nodes work in passive mode. It is easy to see that 80% of the nodes can work in passive mode to save energy, whereas Glossy keeps all 25 nodes in active mode.

Topology Control Algorithm. To save energy, one of the main challenges is to find the fewest nodes that, working in active mode, cover the whole WSN; we then need an energy-driven policy for mode scheduling. To address these challenges, EACIF uses a distributed algorithm to select the active nodes in a given topology and develops a mode-scheduling policy based on residual energy.

Geometric Definitions. We assume a WSN consisting of sensor nodes deployed in a two-dimensional plane, with node set V. A set of edges E is defined by the Euclidean distance between pairs of nodes, giving a simple undirected graph G(V, E). Supposing that the communication radius of all nodes equals one unit, we have the following definitions. Definition 1. In a simple undirected graph G = (V, E), if for all u, v ∈ V, (u, v) ∈ E exactly when |uv| ≤ 1, then G is a unit disk graph UDG(V, E). Definition 2. In UDG(V, E), a subset D of V is a dominating set (DS) if every node v ∈ V either is in D or is a neighbor of some node in D; the nodes in D are called "active dominators" (AD). Definition 3. The gateway nodes that convert a DS into a connected dominating set (CDS) are called "active gateways" (AG), and the nodes outside the CDS are called "passive dominatees" (PD).

Active Nodes Selection Algorithm. Motivated by Section 3.1, we construct an approximate minimum connected dominating set (MCDS) as a sparse backbone. In previous work, Wan et al. [26] proposed an effective MCDS generation algorithm for wireless ad hoc networks; however, Wan's approach does not take residual energy into account. In this paper, we propose a distributed active nodes selection algorithm (ANSA) that generates an approximate MCDS according to the nodes' residual energy. Definitions 2 and 3 indicate that the "active nodes" comprise the AD nodes and the AG nodes; therefore, ANSA can be divided into two subalgorithms, the AD election subalgorithm and the AG election subalgorithm.

AD Election Subalgorithm (Algorithm 1). The AD election subalgorithm constructs a dominating set of the given topology. Let the type of node v be 1 if v is an AD, 0 if v is a PD, and 2 if v is an AG; in the initial state, the type values of all nodes are zero. Any node v that wants to become an AD must broadcast a request to its one-hop neighbors, with the request packet containing the residual energy of node v. Its neighbor nodes then send response packets back to v.
Upon receiving the responses, node v compares its residual energy RE_v with the residual-energy set NE_v collected from its neighbors. Node v is elected as an AD if and only if it has the maximum residual energy; it then sets its node type and broadcasts a declaration packet. After the AD election subalgorithm, each node in G(V, E) is either an AD or a PD.

AG Election Subalgorithm (Algorithm 2). All PD nodes first broadcast their (AD, PD) pairs. Alzoubi et al. [27] have proved that a PD of a UDG(V, E) has at most five AD neighbors. After gathering two-hop AD information, a PD node v can broadcast an AG request applying to be the gateway between two AD nodes u and w. Any two AD nodes are separated by either one hop or two hops. If the interval is one hop (u → v → w) and v has the largest residual energy among the candidates, either of the two AD nodes u or w can confirm v as an AG. If the interval is two hops (u → v → w → z, with PD nodes v and w between AD nodes u and z), the AG requests are sent to u and z separately; if v and w have the maximum residual energy, both are elected as AG nodes. Otherwise, u and z cooperatively select the AG nodes for the three-hop path by Algorithm 2.

Time Complexity. As a distributed algorithm, the AD election runs in O(n) time; for a PD node the AG election runs in O(n) time, and for an AD node it runs in O(n) + O(n). Therefore, the time complexity of ANSA can be approximated as O(n).

k-Covered Mode Switching Algorithm. According to Definition 4, a k-covered UDG can generate k groups of MCDS, which motivates the design of an energy-adaptation strategy driven by the nodes' residual energy. As Algorithm 3 shows, every node v of a k-covered UDG(V, E) holds a predefined residual-energy threshold. If the residual energy of any node (generally an AD or AG node) falls below this threshold, KMSA is triggered for a new MCDS building round: KMSA first empties the AD and AG sets and resets all node types to PD, then triggers Algorithms 1 and 2 to find new AD and AG sets. If the new sets are not empty, KMSA generates a new MCDS, which prolongs the lifetime of the WSN; if KMSA cannot find a new AD or AG set, it switches the type of all nodes to AD.
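The AD election rule can be condensed into a small sketch. The following Python fragment is an illustrative reconstruction of the neighborhood comparison described above, not the paper's Algorithm 1: the message exchange is abstracted into a shared dictionary, and the energy values and tie handling are assumptions.

```python
# Sketch of the AD election rule: a node becomes an active dominator (AD)
# when its residual energy is the maximum within its one-hop neighborhood.

PD, AD = 0, 1  # node types used in the text (AG = 2 is assigned later)

def elect_ad(residual, neighbors):
    """residual: {node: energy}; neighbors: {node: iterable of one-hop ids}."""
    node_type = {v: PD for v in residual}   # all nodes start as PD
    for v in residual:
        # NE_v: residual energies reported by v's neighbors
        ne_v = [residual[u] for u in neighbors[v]]
        if all(residual[v] >= e for e in ne_v):  # v holds the neighborhood maximum
            node_type[v] = AD
        # a real implementation would break exact ties, e.g. by node id
    return node_type
```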
Power Consumption Model. In this section, we first establish the power consumption model for CI-based flooding and then analyze the energy saved by EACIF in a k-covered UDG. Referring to Section 3.1, the per-unit-time power consumption of the Standby, Send, and Receive states is denoted P_sb, P_tx, and P_rx, with corresponding durations T_sb, T_tx, and T_rx. Other parameters include the packet length, the data transmission rate, and the node number n. Glossy completes flooding after all nodes run N_max cycles of Standby → Receive → Send, so the power consumption of Glossy can be written as E_glossy = n N_max (P_sb T_sb + P_rx T_rx + P_tx T_tx) (1). In EACIF, the energy consumption of active nodes is the same as in Glossy, while each passive node only needs to run Standby → Receive once, so the power consumption of EACIF is E_eacif = m N_max (P_sb T_sb + P_rx T_rx + P_tx T_tx) + (n − m)(P_sb T_sb + P_rx T_rx) (2), where m is the number of active nodes and the two terms are the energy consumption of active and passive nodes, respectively. The energy saving of EACIF is E_saving = E_glossy − E_eacif = (n − m)[(N_max − 1)(P_sb T_sb + P_rx T_rx) + N_max P_tx T_tx] (3). Equation (3) shows that every passive node saves (N_max − 1) Standby states, (N_max − 1) Receive states, and N_max Send states, whereas active nodes save nothing; EACIF nevertheless reduces energy consumption because it reduces the number of active nodes via the ANSA algorithm. Moreover, suppose Ω = {Ω_1, Ω_2, ..., Ω_K} is the family of active-node sets of a k-covered UDG, with each cover set containing m active nodes. EACIF's energy saving for the k-covered UDG, KE_saving, follows by applying (3) across the redundant cover sets (4). Equation (4) indicates that the energy saving is proportional to the number of redundant MCDSs; therefore, EACIF enables low-power network flooding in WSNs.

Performance. In this section, we evaluate EACIF on both a real trace and a uniformly distributed topology.

Topology Control Evaluation. We first evaluate the topology control algorithm of Section 3.2 on the MATLAB 7.11 platform. We use a real trace from previous work [28] with 1052 nodes deployed over a ten-kilometer area. As Figure 5(a) shows, all nodes initially labeled PD are shown in yellow. Figure 5(b) illustrates the node types after ANSA: AD nodes are shown in red, and green denotes the AG nodes. The evaluation shows that ANSA selects 83 AD nodes, 275 AG nodes, and 694 PD nodes, so only 34.03% of the nodes are in active mode; the remaining 65.97% work in passive mode to save energy. Figure 6 shows the evaluation of PRR. We set the CI-based flooding parameters as follows: packet payload length 32 bytes, clock frequency drift variance 5 ppm, time-displacement threshold Δ = 0.5 µs (for the IEEE 802.15.4 radio), and one-hop packet reception and retransmission slot 1.124 ms. In the real trace of Figure 5, we vary the transmission distance from 40 m to 100 m. In the uniformly distributed topology, we increase the node number from 500 to 2000 (matching the scale of existing WSN-AF systems). All simulation results are averaged over 100 runs.

PRR Evaluation. As Figure 6(a) shows, the PRR of both EACIF and Glossy exceeds 97.1% for N_max = 1; when N_max = 3, both EACIF and Glossy achieve a stable 99.9%. Simulation results on the uniformly distributed topology also confirm that EACIF achieves high PRR (see Figure 6(b)). However, we also note lower PRR values in individual samples: in particular, at a distance of 65 m (N_max = 1), one sample out of 100 has a PRR of 76.4% (see Figure 6(a)). The reason is that EACIF reduces the amount of retransmission and thereby also reduces the opportunities to receive the packet.
The nodes in passive mode can only receive packets from their parent nodes, while Glossy gives each node at least two chances to receive a packet whenever the network degree is greater than 2. Fortunately, the average PRR of EACIF remains close to that of Glossy.

Flooding Latency Evaluation. For each node, the flooding latency is defined as the duration from the SF event to the first successful reception (see Table 1). Figure 7(a) shows the flooding latency of the 1052 nodes in the real trace (N_max = 3). In EACIF, the latency of 263 nodes is less than 1 ms, and 11.2 ms is needed for all nodes to receive the packet, while Glossy spends 25% more time flooding the whole network. Figure 7(b) plots the flooding latency of a uniformly distributed network with 2000 nodes: using EACIF, 125 nodes have latency below 1 ms, and Glossy spends 27% more time than EACIF. Although Glossy is a fast CI-based flooding protocol, our evaluation shows that EACIF floods even faster, because EACIF reduces retransmissions by constructing a sparse network topology. The simulation results indicate that EACIF can effectively mitigate the scalability problem in large-scale WSNs.

Energy Consumption Evaluation. We use the average radio-on time to evaluate the energy consumption of EACIF. Figure 8(a) shows the average radio-on time for the real trace. When the transmission distance increases from 40 m to 100 m, the average radio-on time of Glossy changes from 22.4 ms to 19.7 ms, while that of EACIF changes from 15.1 ms to 7.1 ms. At a transmission distance of 100 m, EACIF saves 63.96% of the energy consumed by Glossy. Figure 8(a) indicates that EACIF's energy saving grows with network density, which verifies the indication of equation (4). In Figure 8(b), the node number increases from 500 to 2000. The average radio-on time of Glossy increases by 31.2%, which means Glossy suffers from the scalability problem; in contrast, the average radio-on time of EACIF decreases slightly. When the node number is 2000, the average radio-on time of EACIF is 11.2 ms. Assuming a WSN-AF system sends a packet every 30 seconds, the duty cycle of EACIF is 0.11%.

Conclusions and Future Work. In this paper, we introduced EACIF, the first work to improve the energy adaptivity of CI-based flooding. We first implement ANSA to construct a sparse backbone on a given topology, and then propose the KMSA algorithm to control the flooding process. We validate the performance of EACIF on a real trace and on uniformly distributed topologies. Simulation results indicate that EACIF efficiently reduces redundant retransmissions and significantly lowers energy consumption. Our future work includes performance measurements of EACIF in real-world large-scale WSN-AF deployments and the exploitation of EACIF in wireless time synchronization and remote reprogramming.
Modelling of Dispersive PT-Symmetric Bragg Grating. This paper reports on a time-domain numerical model of a parity-time (PT) Bragg grating with saturated and dispersive gain. The model is compared against the ideal PT scenario in which the gain is constant and unsaturated at all frequencies.

Introduction. Recently, a new class of optical metamaterials that utilise balanced loss and gain, also known as parity-time (PT) structures, has opened a new avenue in realising optical functionalities, for example switching [1], [2], lasing, and isolation. An interesting potential application of PT devices is in cloaking, as they offer unidirectional invisibility over a wide frequency range [3]. To date, all modelling of PT devices has been done using frequency-domain methods such as coupled mode theory, assuming materials with constant, frequency-independent gain and loss. In practice, however, the gain of a material saturates at high input intensities, and any change in the real or imaginary part of the material refractive index is constrained by the Kramers-Kronig relations. This paper outlines the implementation of a dispersive gain material model that satisfies the Kramers-Kronig relation and includes gain saturation. This is done in the numerical Transmission Line Modelling (TLM) method. The model is then used to analyse the response of a PT Bragg grating and compare it with the idealised case of constant gain/loss.

PT-Symmetric Bragg grating. A schematic presentation of a PT Bragg grating is given in Fig. 1. The length of one period is set by the Bragg condition through the Bragg wavelength and the average refractive index of the grating. The PT structure requires that loss and gain in the structure be balanced, a condition that can be expressed via the complex refractive index as n̂(z) = n̂*(−z). This condition implies that one period of the Bragg grating has four layers of equal length but of different material parameters, so that the refractive indices of the layers satisfy the corresponding PT-symmetric distribution.

Modelling. In this paper, a model of macroscopic physical gain and loss, homogeneously spectrally broadened and satisfying the Kramers-Kronig condition [4], is implemented through the electric conductivity σ_e(ω) in eq. (2), where ω_0 is the atomic transition angular frequency, σ_0 is the peak value of the conductivity at ω_0, and τ is the dipole relaxation time parameter. Material gain and loss are implemented through eq. (3), where c and ε_0 are the free-space light velocity and permittivity, respectively, and n is the real part of the refractive index. Loss is implemented with a positive sign, whilst gain is implemented with a negative sign in eq. (3).
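For concreteness, a commonly used Kramers-Kronig-consistent form for such a homogeneously broadened gain conductivity is the Lorentzian sketched below, together with a standard mapping from conductivity to a gain/loss coefficient. This is a sketch under stated assumptions, not a quotation of the paper's eqs. (2)-(3): the exact lineshape and the factor of 2 in the denominator are assumptions.

```latex
% Assumed Lorentzian form of the gain conductivity (cf. eq. (2)):
\[
  \sigma_e(\omega) \;=\; \frac{\sigma_0/2}{1+\mathrm{i}(\omega-\omega_0)\tau}
                   \;+\; \frac{\sigma_0/2}{1+\mathrm{i}(\omega+\omega_0)\tau}
\]
% Assumed mapping to a field gain/loss coefficient (cf. eq. (3)), with the
% sign convention stated in the text (+ for loss, - for gain):
\[
  \alpha(\omega) \;=\; \pm\,\frac{\operatorname{Re}\sigma_e(\omega)}{2\,c\,\varepsilon_0\,n}
\]
```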
The frequency-dependent dielectric properties are modelled using the Duffing model for electric polarization outlined in [5] (eq. (4)), in which P and E denote the electric polarization and field in the chosen direction, γ and ω_r denote the damping and dielectric resonant angular-frequency parameters of the medium, and χ_dc denotes the dielectric susceptibility at DC. The implementation of the gain model (2) and the Duffing model (4) is done in the Transmission Line Modelling (TLM) method [6]. The TLM method is a time-stepping numerical technique based upon the analogy between propagating electromagnetic fields and voltage impulses travelling on an interconnected mesh of transmission lines. As a time-domain numerical model, it offers flexibility in modelling frequency-dependent and nonlinear materials [6]; in the TLM method, this is done using the z-transform approach. The overall outline of the TLM approach to modelling dispersive properties is to first calculate the electric field at each node from the voltages incident from the left and right neighbouring nodes; for the 1D case this is given by eq. (5) [6], which involves the normalized electric field, the dielectric susceptibility at infinite frequency, and the normalized conductivity (2) and electric polarization (4) expressed in the z-domain. In the next step, the scattered voltages from each node are obtained; these become the incident voltages on the neighbouring nodes at the following time step. Successive repetitions of this scatter-propagate procedure provide an explicit and stable time-stepping algorithm that mimics electromagnetic field behaviour to second-order accuracy in both time and space.
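For reference, the linearised Duffing (Lorentz-oscillator) polarization dynamics referred to as eq. (4) typically take the following form; this is a sketch of the standard model, and the susceptibility scaling on the right-hand side is an assumption rather than a quotation from [5].

```latex
% Standard linear Duffing/Lorentz polarization dynamics (assumed form of eq. (4)):
\[
  \frac{\partial^2 P}{\partial t^2}
  + \gamma\,\frac{\partial P}{\partial t}
  + \omega_r^2\,P
  \;=\; \varepsilon_0\,\chi_{\mathrm{dc}}\,\omega_r^2\,E
\]
% In the low-frequency limit this reduces to P = eps0 * chi_dc * E, consistent
% with chi_dc being the DC dielectric susceptibility named in the text.
```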
Results and discussions. In this section, a PT Bragg grating based on GaAs material is analysed. The resonant and damping parameters of the dielectric model are taken from [7], with high and low dielectric susceptibilities defining the grating modulation; the gain parameters follow [4]. The period of the Bragg grating is designed so that the band gap is centred at the atomic-transition angular frequency ω_0. The background material susceptibility corresponds to the refractive index of GaAs near the operating wavelength. Figure 2 compares the transmittance of a 200-period GaAs PT Bragg grating for the ideal case, the physical model, and a linear grating that corresponds to a conventional Bragg grating with no gain/loss. The ideal PT model has a constant, frequency-independent gain and a fixed modulation index. The physical gain model was run at low and high input intensity, and the linear Bragg grating is included for reference. Figures 2(a, b) show the transmittance and the reflectance for a wave incident from the right, respectively, and demonstrate that the ideal PT Bragg grating exhibits broadband unidirectional invisibility. In the case of the physical model, however, unidirectional invisibility occurs only around the band-gap frequency and depends on the input signal intensity, reducing the unidirectional invisibility to a narrow band around the Bragg frequency. The effect is more pronounced at high input intensities, which can be explained by the fact that gain saturation breaks the balance of gain and loss, thereby destroying the PT symmetry.

Conclusion. This paper reports on the implementation of a dispersive and saturated gain model in the TLM method for the time-domain modelling of the unidirectional invisibility of PT-symmetric Bragg gratings. The saturated gain modifies the unidirectional invisibility of the grating, making it dependent on the intensity of the input signal. The model shows that a realistic description of gain reduces the unidirectional invisibility of the grating to a narrowband phenomenon.

Fig. 1. Schematic illustration of a single period of the PT Bragg grating.
Fig. 2. Dispersion of the PT Bragg grating for the passive Bragg filter, the ideal PT model, and the physical model (low and high intensity) for incidence from the right.
On the Convergence of a Regularized Jacobi Algorithm for Convex Optimization. In this paper, we consider the regularized version of the Jacobi algorithm, a block coordinate descent method for convex optimization with an objective function consisting of the sum of a differentiable function and a block-separable function. Under certain regularity assumptions on the objective function, this algorithm has been shown to satisfy the so-called sufficient decrease condition and, consequently, to converge in objective function value. In this paper, we revisit the convergence analysis of the regularized Jacobi algorithm and show that it also converges in iterates under very mild conditions on the objective function. Moreover, we establish conditions under which the algorithm achieves a linear convergence rate.

I. INTRODUCTION. In this paper we consider large-scale optimization problems in which a collection of individual actors (or agents) cooperate to minimize a common objective function while incorporating local constraints or additional local utility functions. We consider a decentralized optimization method based on block coordinate descent, an iterative coordination procedure that has attracted significant attention for solving large-scale optimization problems [1]-[3]. Solving large-scale optimization problems via an iterative procedure that coordinates among blocks of variables enables the solution of very large problem instances by parallelizing computation across agents. This makes it possible to overcome computational challenges that would otherwise be prohibitive, without requiring agents to reveal their local utility functions and constraints to other agents. Due to its pricing-mechanism implications, decentralized optimization is also a natural choice for many applications, including demand-side management in smart grids, charging coordination for plug-in electric vehicles, and coordination of multiple agents in robotic systems [4]-[6]. Based on the algorithms outlined in [2], two classes of iterative methods have recently been employed for solving such optimization problems in a decentralized way. The first covers block coordinate gradient descent (BCGD) methods and requires each agent to perform, at every iteration, a local (proximal) gradient descent step [1], [6]. Under certain regularity assumptions (differentiability of the objective function and Lipschitz continuity of its gradient), and for an appropriately chosen gradient step size, this method converges to a minimizer of the centralized problem. This class of algorithms includes both sequential [7] and parallel [8], [9] implementations. The second covers block coordinate minimization (BCM) methods; it does not assume differentiability of the objective and is based on minimizing the common objective function over each block while fixing the variables associated with the other agents at their previously computed values. Although BCM methods have a larger per-iteration cost than BCGD methods when there are no local utility functions (constraints) in the problem, or when their proximal operators (projections) have closed-form solutions, in the general case both approaches require the solution of ancillary optimization problems. On the other hand, iterations of BCM methods are numerically more stable than gradient iterations, as observed in [10]. If the block-wise minimizations are carried out in a cyclic fashion across agents, the algorithm is known as the Gauss-Seidel algorithm [3], [7], [11].
An alternative implementation, known as the Jacobi algorithm, performs the block-wise minimizations in parallel. However, convergence of the Jacobi algorithm is not guaranteed in general, even when the objective function is smooth and convex, unless certain contractiveness properties are satisfied [2]. The authors in [12] have proposed a regularized Jacobi algorithm wherein, at each iteration, each agent minimizes the weighted sum of the common objective function and a quadratic regularization term penalizing the distance to the previous iterate of the algorithm. A similar regularization has been used in Gauss-Seidel methods [7], [11], which are, however, not parallelizable. Under certain regularity assumptions, and for an appropriately selected regularization weight, the algorithm converges in objective value to the optimal value of the centralized problem [12]. Recently, the authors in [13] quantified the regularization weight required to ensure convergence in objective value as a function of the number of agents and other problem parameters. However, convergence of the algorithm in its iterates to an optimizer of the centralized problem counterpart was not established, apart from the particular case where the objective function is quadratic. In this paper we revisit the algorithm proposed in [12] and enhance its convergence properties under milder conditions. By adopting an analysis based on a power growth property, which is in turn sufficient for the satisfaction of the so-called Kurdyka-Łojasiewicz condition [11], [14], we show that the algorithm's iterates converge under much milder assumptions on the objective function than those used in [2] and [13]. A similar approach was used in [3], [11] to establish convergence of iterates generated by Gauss-Seidel-type methods. We also show that the algorithm achieves a linear convergence rate without imposing restrictive strong convexity assumptions on the objective function, in contrast to typical methods in the literature. Our analysis is based on the quadratic growth condition, which is closely related to the so-called error bound property [15], [16] used in [8] to establish linear convergence of parallel BCGD methods in objective value. The remainder of the paper is organized as follows. In Section II we introduce the class of problems under study, outline the regularized Jacobi algorithm for solving such problems in a decentralized fashion, and state the main convergence result of the paper. Section III provides the proof of the main result, Section IV provides a convergence rate analysis, and Section V concludes the paper.

Notation. Let N denote the set of nonnegative integers, R the set of real numbers, R_+ the set of nonnegative real numbers, R̄ := R ∪ {∞} the extended real line, and R^n the n-dimensional real space equipped with inner product ⟨x, y⟩ and induced norm ‖x‖. Consider a vector x = (x^1, ..., x^m) partitioned into m blocks. The subdifferential of f at x is denoted by ∂f(x). If f is continuously differentiable, then ∇f(x) denotes the gradient of f evaluated at x. We denote by [a ≤ f ≤ b] := {x ∈ R^n | a ≤ f(x) ≤ b} the set of points whose value under f lies between a and b; similar notation will be used for strict inequalities and for one-sided bounds. The set of minimizers of f is denoted by argmin f := {x ∈ dom f | f(x) = min f}, where min f is the minimum value of f. We say that a differentiable function f is strongly convex with convexity parameter σ > 0 if f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (σ/2)‖y − x‖² holds for all x and y.
The distance of a point x to a closed convex set C is denoted by dist(x, C) := inf_{c∈C} ‖x − c‖, and the projection of x onto C is denoted by proj_C(x).

II. PROBLEM DESCRIPTION AND MAIN RESULT. A. Regularized Jacobi algorithm. We consider the following optimization problem: P: minimize over x = (x^1, ..., x^m) the function f(x) + Σ_{i=1}^m g_i(x^i), with dom g := dom g_1 × ... × dom g_m, and we denote the combined objective function in P as h(x) := f(x) + Σ_{i=1}^m g_i(x^i). (1) Problems of the form P can be viewed as multi-agent optimization programs wherein each agent has its own local decision vector x^i and agents cooperate to determine a minimizer of h, which couples the local decision vectors of all agents through the common objective function f. Since the number of agents can be large, solving the problem in a centralized fashion may be computationally intensive. Moreover, even if this were possible from a computational point of view, agents may not be willing to share their local objectives g_i, i = 1, ..., m, with other agents, since these may encode information about their local utility functions or constraint sets. For each i = 1, ..., m, we let f_i(·; x^{-i}) : R^{n_i} → R be the function of the decision vector of the i-th block of variables obtained from f by treating the remaining variables x^{-i} ∈ R^{n−n_i} as a fixed set of parameters, i.e., f_i(x^i; x^{-i}) := f(x^i, x^{-i}). We wish to solve P in a decentralized fashion using Algorithm 1. At the (k+1)-th iteration of Algorithm 1, agent i solves a local optimization problem accounting for its local function g_i and the function f_i with the parameter vector set to the decisions x^{-i}_k of the other agents from the previous iteration; an additional term in the local cost function penalizes the squared distance between the optimization variables and their values x^i_k at the previous iteration. The relative importance of the original cost function and the penalty term is regulated by the weight c > 0, which should be selected large enough to guarantee convergence [12], [13]. We show in the Appendix that the fixed points of Algorithm 1 coincide with the optimal solutions of problem P. A problem structure equivalent to P was considered in [13], with the difference that a collection of convex constraints x^i ∈ X_i for each i = 1, ..., m was introduced instead of the functions g_i. We can rewrite that problem in the form of P by selecting each g_i to be the indicator function of the given convex set. On the other hand, problem P can be written in epigraph form and thus reformulated in the framework of [13]. The reason we use the problem structure of P is twofold. First, some widely used problems, such as ℓ1-regularized least squares, are typically posed in the form P. Second, the absence of constraints eases the convergence analysis of Section III, since many results in the relevant literature use the same problem structure.
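Written out explicitly, the per-agent update of Algorithm 1 as described above is the following; the placement of the iteration subscripts is the only notational assumption.

```latex
% Regularized Jacobi update: all agents i = 1, ..., m update in parallel.
\[
  x^{i}_{k+1} \;=\; \operatorname*{argmin}_{x^i \in \mathbb{R}^{n_i}}
  \; f_i\!\left(x^i;\, x^{-i}_k\right) \;+\; g_i\!\left(x^i\right)
  \;+\; c\,\bigl\lVert x^i - x^i_k \bigr\rVert^2
\]
```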
B. Statement of the main result. Before stating the main result we provide some necessary definitions and assumptions. Let h* denote the minimum value of P. We then have the following definition. Definition 1 (power-type growth): The function h exhibits power-type growth if there exist γ > 0, p ≥ 1 and r > 0 such that h(x) ≥ h* + γ dist(x, argmin h)^p for all x ∈ [h < h* + r]. (2) It should be noted that (2) is a very mild condition, since it requires only that the function h is not excessively 'flat' in a neighborhood of the set argmin h. For instance, all polynomial, real-analytic and semi-algebraic functions satisfy this condition [14], [17]. We impose the following standing assumptions on problem P. Assumption 1: a) The function f is convex and differentiable. b) The gradient ∇f is Lipschitz continuous on dom g with Lipschitz constant L, i.e., ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y ∈ dom g. c) The functions g_i are all convex and lower semicontinuous. d) The function h is coercive. e) The function h exhibits the power-type growth condition of Definition 1. Notice that we do not require differentiability of the functions g_i. Coerciveness of h implies the existence of some ζ ∈ R for which the sublevel set [h ≤ ζ] is nonempty and bounded, which is sufficient to prove the existence of a minimizer of h [18, Prop. 11.12 & Thm. 11.9]. We are now in a position to state the main result of the paper. Theorem 1: Under Assumption 1, if c is chosen according to (3), then the iterates {x_k}_{k∈N} generated by Algorithm 1 converge to a minimizer of problem P, i.e., lim_{k→∞} x_k = x*, where x* is a minimizer of P. The proof of Theorem 1 involves several intermediate statements and is provided in the next section.

III. PROOF OF THE MAIN RESULT. Many results on the convergence of optimization algorithms establish only convergence in function value [2], [13], [19], without guaranteeing convergence of the iterates {x_k}_{k∈N} as well. Convergence of iterates is straightforward to show when h is strongly convex, or when {x_k}_{k∈N} is Fejér monotone with respect to argmin h, which is true whenever the operator underlying the iteration update is nonexpansive [18]. The latter condition was used in [13] to establish convergence of the sequence {x_k}_{k∈N} in the special case where f is a convex quadratic function. In the single-agent case, i.e., when m = 1, Algorithm 1 reduces to the proximal minimization algorithm, whose associated fixed-point operator is nonexpansive for any convex, proper and closed function h. However, in the multi-agent setting the resulting fixed-point operator is not necessarily nonexpansive, which implies that the Fejér-monotonicity-based analysis cannot be employed to establish convergence of the sequence {x_k}_{k∈N}. To achieve this and prove Theorem 1, we exploit the following result, which follows directly from Theorem 14 in [14]. Theorem 2 ([14, Thm. 14]): Consider Assumption 1, with argmin h ≠ ∅ and h* := min h. Assume that the initial iterate x_0 of Algorithm 1 satisfies h(x_0) < h* + r, where r is as in Definition 1. Finally, assume that the iterates {x_k}_{k∈N} generated by Algorithm 1 possess the following properties: 1) Sufficient decrease condition: h(x_{k+1}) + a‖x_{k+1} − x_k‖² ≤ h(x_k), (4) where a > 0. 2) Relative error condition: there exists w_{k+1} ∈ ∂h(x_{k+1}) such that ‖w_{k+1}‖ ≤ b‖x_{k+1} − x_k‖, (5) where b > 0. Then the sequence {x_k}_{k∈N} converges to some x* ∈ argmin h, i.e., lim_{k→∞} x_k = x*, and an explicit bound (6) on ‖x_k − x*‖ holds for all k ≥ 1. It should be noted that Theorem 2 constitutes a relaxed version of Theorem 14 in [14]. This is because we impose the power-type growth property as an assumption, which is in turn a sufficient condition for the satisfaction of the so-called Kurdyka-Łojasiewicz (KL) property [11], [17]; this can be seen by choosing the so-called desingularizing function ϕ that appears in the definition of the KL property [11], [17] as ϕ(s) = p(s/γ)^{1/p}. Specifically, we could replace the last part of Assumption 1 with the KL property and the conclusion of Theorem 2 would remain valid. Notice that, under the assumptions of Theorem 2, {x_k}_{k∈N} converges to some x* ∈ argmin h even if h(x_0) ≥ h* + r. Since {h(x_k)}_{k∈N} converges to h* (as a consequence of the sufficient decrease condition (4)), there exists some k_0 ∈ N such that h(x_{k_0}) < h* + r, and hence Theorem 2 remains valid if x_k is replaced by x_{k+k_0}.
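As a concrete illustration of the iteration these conditions govern, the following Python sketch applies the regularized Jacobi update to an ℓ1-regularized least-squares instance of P with scalar blocks, where each block update has a closed-form soft-thresholding solution. This is a toy instantiation for intuition, not the paper's implementation: the problem data, the choice c = L·m, and the stopping rule are all assumptions.

```python
import numpy as np

# Toy instance of the regularized Jacobi algorithm on
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# with scalar blocks x^i; data, c, and stopping rule are illustrative.
rng = np.random.default_rng(0)
m_blocks, n_rows, lam = 20, 50, 0.1
A = rng.standard_normal((n_rows, m_blocks))
b = rng.standard_normal(n_rows)

L = np.linalg.norm(A.T @ A, 2)     # Lipschitz constant of grad f
c = L * m_blocks                   # "large enough" regularization weight (assumed)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x = np.zeros(m_blocks)
for k in range(500):
    x_new = np.empty_like(x)
    for i in range(m_blocks):      # all blocks update in parallel from x_k
        a_i = A[:, i]
        r_i = A @ x - a_i * x[i] - b         # residual with block i removed
        # argmin_z 0.5*||a_i z + r_i||^2 + lam*|z| + c*(z - x_i)^2
        x_new[i] = soft(2*c*x[i] - a_i @ r_i, lam) / (a_i @ a_i + 2*c)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
```

The closed form for each block follows by setting the subgradient of the scalar objective to zero, which yields exactly the soft-thresholding step used in the loop.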
To prove Theorem 1 it therefore suffices to show that, given Assumption 1, the iterates generated by Algorithm 1 satisfy the sufficient decrease condition and the relative error condition. To show this we first state an auxiliary lemma. Lemma 1: Under Assumption 1, a Lipschitz-type inequality relating f and its block-wise components holds for all x, y, z ∈ dom g; this inequality supplies the constants used in Propositions 1 and 2 below. We can then show that the sufficient decrease condition is satisfied. Proposition 1 (sufficient decrease condition): Under Assumption 1, if c is chosen according to (3), then Algorithm 1 converges to the minimum of problem P in value, i.e., h(x_k) → min h, and for all k the sufficient decrease condition (4) is satisfied with an explicit constant a. Proof: The result follows from [13, Theorem 2], with the Lipschitz constant established in Lemma 1. Note that the proofs of Lemma 1 and Proposition 1 do not require the last part of Assumption 1 related to the power-type growth condition of h. If c is chosen according to Theorem 1, then (4) implies that x_{k+1} − x_k → 0. To see this, suppose that x_0 ∈ dom h, so that h(x_0) is finite. Iterating inequality (4) gives a Σ_{k=0}^{∞} ‖x_{k+1} − x_k‖² ≤ h(x_0) − min h < ∞, which means that ‖x_{k+1} − x_k‖ converges to zero. Note, however, that this does not necessarily imply convergence of the sequence {x_k}_{k∈N}. Proposition 2 (relative error condition): Consider Algorithm 1. Under Assumption 1, there exists w_{k+1} ∈ ∂h(x_{k+1}) such that the relative error condition (5) is satisfied with an explicit constant b. Proof: The iterate x_{k+1} in Algorithm 1 can be characterized via the subdifferential of the associated objective function, which ensures the existence of some v_{k+1} ∈ ∂g(x_{k+1}) satisfying the corresponding optimality condition. Defining w_{k+1} := ∇f(x_{k+1}) + v_{k+1} ∈ ∂h(x_{k+1}), the norm of w_{k+1} can be bounded via the triangle inequality, and by Lemma 1 the bound (5) follows. Propositions 1 and 2 show that the conditions of Theorem 2 are satisfied. As a direct consequence, the iterates generated by Algorithm 1 converge to a minimizer of P, which concludes the proof of Theorem 1.

IV. CONVERGENCE RATE ANALYSIS. It is shown in [13] that if f is a strongly convex quadratic function and the g_i are indicator functions of convex compact sets, then Algorithm 1 converges linearly. We show in this section that Algorithm 1 converges linearly under much milder assumptions. In particular, if h has the quadratic growth property, i.e., if p in (2) equals 2, then Algorithm 1 admits a linear convergence rate. This property is employed in [20] to establish linear convergence of some first-order methods in a single-agent setting and is, according to [15], [16], closely related to the error bound property, which was used in [21], [22] to establish linear convergence of feasible descent methods. Note that feasible descent methods are not applicable to problem P, since we allow for nondifferentiable objective functions. Theorem 3: Consider Assumption 1, and further assume that the power-type growth property is satisfied with p = 2. Let the initial iterate of Algorithm 1 be selected such that h(x_0) < h* + r, where r appears in Definition 1. Then the iterates {x_k}_{k∈N} converge to some x* ∈ argmin h at a linear rate, i.e., ‖x_k − x*‖ ≤ M_1 ρ^k for all k ≥ 1, for some constants M_1 > 0 and ρ ∈ (0, 1). Proof: The quadratic growth property and convexity of h, together with the relative error condition (5), imply a chain of inequalities (11) relating h(x_{k+1}) − h*, dist(x_{k+1}, argmin h) and ‖w_{k+1}‖, where w_{k+1} ∈ ∂h(x_{k+1}). Note that since h is lower semicontinuous, the set argmin h is closed, and thus the projection onto argmin h is well defined. From the right-hand sides of the first and last inequalities in (11), we obtain a bound on dist(x_{k+1}, argmin h). Dividing the left-hand side of the first inequality and the right-hand side of the last inequality in (11) by γ dist(x_{k+1}, argmin h) > 0 and substituting the resulting inequality into the preceding one, we obtain a bound which, combined with the sufficient decrease condition (4) and a rearrangement of terms, proves the contraction (9).
Substituting the above inequality into (6) we obtain (10), which concludes the proof. A direct consequence of Theorem 3 is that Algorithm 1, with c selected as in Theorem 1, converges linearly when h satisfies the quadratic growth condition h(x) ≥ min h + γ dist(x, argmin h)². This is the case when f is strongly convex with convexity parameter σ_f, implying that argmin h is a singleton and h has the quadratic growth property with γ = σ_f/2 for any x ∈ dom h. It is shown in [22], [23] that if f(x) = v(Ex) + ⟨b, x⟩ has a Lipschitz continuous gradient, with v strongly convex and g an indicator function of a convex polyhedral set, then the problem exhibits the quadratic growth property. Note that if E does not have full column rank, then f is not strongly convex. In [14], [23] it is shown that a similar bound can be established for the ℓ1-regularized least-squares problem. Here, we adopt an approach from [14] and show that a similar result holds for more general problems in which g can be any polyhedral function. The core idea is to rewrite the problem in epigraph form, for which such a property is shown to hold. We impose the following assumption. Assumption 2: a) The function f is defined as f(x) = v(Ex) + ⟨b, x⟩, with v(·) a strongly convex function with convexity parameter σ_v. b) The component functions g_i are all globally nonnegative convex polyhedral functions whose composite epigraph can be represented as {(x, t) | Cx + ct ≤ d}, where C ∈ R^{p×n}, c ∈ R^p and d ∈ R^p; the inequality Cx + ct ≤ d is to be understood component-wise. The conditions of Assumption 2 are satisfied when f is quadratic and each g_i, i = 1, ..., m, is an indicator function of a convex polyhedral set or any polyhedral norm. Note that the dual of a quadratic program satisfies this assumption. The Lipschitz constant of ∇f, which is required for computing the appropriate parameter c for Algorithm 1, can be upper bounded by ‖E‖² L_v, where ‖E‖ is the spectral norm of E and L_v is the Lipschitz constant of ∇v. We now recall the Hoffman constant, which will be used in the subsequent analysis. Lemma 2 (Hoffman constant, see e.g., [23]): Let X and Y be two polyhedra defined as X := {x ∈ R^n | Ax ≤ a} and Y := {x ∈ R^n | Ex = e}, where A ∈ R^{m×n}, a ∈ R^m, E ∈ R^{p×n}, e ∈ R^p, and assume that X ∩ Y ≠ ∅. Then there exists a constant θ = θ(A, E) such that any x ∈ X satisfies dist(x, X ∩ Y) ≤ θ ‖Ex − e‖. We refer to θ as the Hoffman constant associated with the pair of matrices (A, E). Let x_0 be an initial iterate of the algorithm and let r = h(x_0). Since h is coercive, [h ≤ r] is a compact set, and we can thus define the quantities D_r := max ‖x − y‖, D_E^r := max ‖Ex − Ey‖ and V_r := max ‖∇v(Ex)‖, with the maxima taken over [h ≤ r]. Since Algorithm 1 generates a nonincreasing sequence {h(x_k)}_{k∈N}, for all k we have x_k ∈ [h ≤ r]. We conclude that argmin h ⊆ [h ≤ r] ⊂ [g ≤ R] for any fixed R > g(x_0) + V_r D_E^r + ‖b‖ D_r. For such a bound R, we work with the epigraph variable x̃ = (x, t); it can easily be seen that x̃* = (x*, t*) minimizes the epigraph reformulation (13) if and only if x* ∈ argmin h and t* = g(x*). Using [23, Lemma 2.5], we obtain a Hoffman-type bound (14) over this compact set, whose constant involves D_E^R. Inequality (14) implies a growth inequality valid for all x ∈ [g ≤ R] and all t ∈ [0, R]; setting t = g(x), we arrive at the following. Lemma 3: Let r = h(x_0) and fix any R > g(x_0) + V_r D_E^r + ‖b‖ D_r. Under Assumptions 1 and 2, for all x ∈ [h ≤ r] the function h satisfies the quadratic growth condition (2) with p = 2, with a constant determined by σ_v, the Hoffman constant, and the quantities defined above.

V. CONCLUSION. In this paper we revisited the regularized Jacobi algorithm proposed in [12] and enhanced its convergence properties. It was shown that the iterates generated by the algorithm converge to a minimizer of the centralized problem counterpart, provided that the objective function satisfies a power growth property.
We also established linear convergence of the algorithm when the growth condition satisfied by the objective function is quadratic.

APPENDIX. In this section we show that the set of fixed points of Algorithm 1 coincides with the set of minimizers of problem P. The result follows from [13, §3]; however, the proof is modified to account for the presence of the nondifferentiable terms g_i, i = 1, ..., m. We first recall the standard optimality condition for a nondifferentiable convex function (Proposition 3). Similarly to [13], we define an operator T via its block components, T(x) := (T_1(x^1; x^{-1}), ..., T_m(x^m; x^{-m})), where T_i(y^i; y^{-i}) := argmin_{z^i} { f_i(z^i; y^{-i}) + g_i(z^i) + c‖z^i − y^i‖² } (15) and y^{-i} ∈ R^{n−n_i} is treated as a fixed parameter. We define the corresponding sets of fixed points Fix T and Fix T_i(y^{-i}) accordingly. Note that, in the spirit of [24, §5], we treat T as a single-valued function T : R^n → R^n, since the quadratic term on the right-hand side of (15) ensures that the minimizer is always unique, with an identical comment applying to the operators T_i(·; y^{-i}). We now show that the sets argmin h and Fix T coincide. Proof: The proof is based on the proofs of Propositions 1-3 in [13]. We first show that argmin h ⊆ Fix T. Fix any x ∈ argmin h. If x minimizes h, then it is also a block-wise minimizer of h at x, i.e., for all i = 1, ..., m, x^i minimizes f_i(·; x^{-i}) + g_i. Since x^i minimizes both f_i(·; x^{-i}) + g_i and c‖(·) − x^i‖², it is also the unique minimizer of their sum, i.e., x^i = argmin_{z^i} { f_i(z^i; x^{-i}) + g_i(z^i) + c‖z^i − x^i‖² }, implying that x^i ∈ Fix T_i(x^{-i}), and thus x = (x^1, ..., x^m) is a fixed point of T(x) = (T_1(x^1; x^{-1}), ..., T_m(x^m; x^{-m})). We now show that Fix T ⊆ argmin h. Let x ∈ Fix T, so that for all i = 1, ..., m, x^i ∈ Fix T_i(x^{-i}). By Proposition 3, this means that the optimality condition (16) holds for all z^i ∈ R^{n_i}, which again by Proposition 3 implies that x^i is a minimizer of f_i(·; x^{-i}) + g_i. According to [25, Lemma 3.1], differentiability of f and component-wise separability of g imply that any x = (x^1, ..., x^m) for which (16) holds for all i = 1, ..., m is also a minimizer of f + g, i.e., x ∈ argmin h, thus concluding the proof.
6,020
2018-04-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
Novel Hybrid-Size Digit-Serial Systolic Multiplier over GF(2^m): Because of the efficient tradeoff in area-time complexities, the digit-serial systolic multiplier over GF(2^m) has gained substantial attention in the research community for possible application in current/emerging cryptosystems. In general, this type of multiplier is designed to be applicable to one certain field-size, which in fact determines the actual security level of the cryptosystem and thus limits the flexibility of the operation of cryptographic applications. Based on this consideration, in this paper, we propose a novel hybrid-size digit-serial systolic multiplier which not only offers the flexibility to operate in either pentanomial- or trinomial-based multiplications, but also has low-complexity implementation performance. Overall, we have made two interdependent efforts to carry out the proposed work. First, a novel algorithm is derived to formulate the mathematical idea of the hybrid-size realization. Then, a novel digit-serial structure is obtained after efficient mapping from the proposed algorithm. Finally, the complexity analysis and comparison are given to demonstrate the efficiency of the proposed multiplier, e.g., the proposed one has less area-delay product (ADP) than the best existing trinomial-based design. The proposed multiplier can be used as a standard intellectual property (IP) core in many cryptographic applications for flexible operation. Introduction Finite field multipliers have gained substantial attention recently due to their critical roles in many cryptosystems such as elliptic curve cryptography (ECC), especially on hardware platforms [1]. Typically, there are three types of structures for finite field multipliers, namely bit-serial, bit-parallel, and digit-serial. Because of their efficient tradeoff in area-time complexities, digit-serial structures are usually more widely preferred than the other two in many applications [2]. Along with the recent advances in artificial intelligence technology, systolic structures have become more and more attractive for high-performance hardware platforms [3]. Accordingly, digit-serial systolization of finite field multipliers has the potential to be applied in high-performance cryptosystems due to superior features such as a high throughput rate, regularity, and modularity. Thus far, several efforts have been made on efficient implementation of digit-serial systolic finite field multipliers: (i) an efficient systolic finite field multiplier is presented in [3], where its complexity is significantly reduced compared with the previously reported one; (ii) a systolic-like digit-serial multiplier is reported in [4], and the proposed systolic structure is found to be specifically suitable for a Reed-Solomon codec; (iii) an efficient digit-serial systolic multiplier is presented in [5]; (iv) the same authors reported a unified digit-serial systolic multiplier based on trinomials and all-one-polynomials [6]; (v) a low-complexity systolic multiplier is given in [7], where its complexity is optimized to be minimal; (vi) an efficient resource-sharing technique is employed in another digit-serial systolic multiplier to achieve a low critical-path and high-performance operation [8]; and (vii) an efficient systolic digit-serial multiplier is reported in [9], where the complexity is so far the least in the literature. These designs, undoubtedly, represent the major advances in the field of systolic digit-serial multipliers.
On the other side, however, the existing digit-serial systolic finite field multipliers, more or less, still have some drawbacks to be overcome: (i) although the digit-serial systolic multipliers have relatively few processing elements (PEs), the register-complexity of the multipliers is still large; and (ii) the current digit-serial multipliers are designed for a fixed field-size and thus cannot provide enough flexibility to meet the current technology trend, i.e., one cryptosystem may need to meet different security-level (field-size) requirements, and designers have to finalize a different multiplier for each field-size and application requirement, which is inefficient in integrated chip (IC) design. Facing these two challenges, in this paper, we propose a novel hybrid-size digit-serial systolic multiplier with a low-complexity implementation. The proposed work is carried out through two coherent, interdependent stages: (i) a novel hybrid-size digit-serial systolic multiplication algorithm is proposed which provides enough flexibility for both pentanomial- and trinomial-based multiplications; and (ii) the proposed algorithm is then mapped into a novel systolic structure through a series of optimization techniques. Thorough complexity analysis and a detailed comparison have also been made to confirm the efficiency of the proposed design, i.e., it not only offers the flexibility to be switched from one field-size to another, but also has smaller area-time complexities than the existing single field-size digit-serial systolic multipliers. The proposed design can be used as a standard intellectual property (IP) core for various field-size cryptosystems, and can also be employed as a core computation unit in a reconfigurable cryptographic processor (where a flexible field-size choice is demanded). The rest of the paper is organized as follows: Section 2 presents the mathematical formulation of the proposed digit-serial multiplication algorithm. Section 3 shows the detailed steps of the proposed systolic structure mapped from the algorithm. The analysis and comparison are provided in Section 4. The conclusion is given in Section 5. Mathematical Formulation of the Proposed Multiplication Algorithm Let A, B, and C be three elements of GF(2^m) and let the polynomial basis be {1, x, x², . . . , x^{m−1}}, where x is a root of f(x) (f(x) determines the field) [1]. Suppose we have two field-sizes m_1 and m_2, with m_1 < m_2; we can first define A_j = Σ_{i=0}^{m_j−1} a_i x^i, B_j = Σ_{i=0}^{m_j−1} b_i x^i, and C_j = Σ_{i=0}^{m_j−1} c_i x^i for j = 1, 2, where a_i, b_i, and c_i ∈ GF(2), and it is clear that the coefficients a_i shared by A_1 and A_2 are the same (and the same applies to b_i and c_i). Suppose that in the field-size m_1, C_1 is the product of A_1 and B_1 (the corresponding field polynomial is f_1(x)); we can then have C_1 = A_1 B_1 mod f_1(x) = Σ_{i=0}^{m_1−1} b_i A_1^{(i)}, where A_1^{(i)} = A_1 x^i mod f_1(x) and A_1^{(0)} = A_1. Similarly, for the field-size m_2, we can have C_2 as the product of A_2 and B_2 (the field polynomial is f_2(x)): C_2 = A_2 B_2 mod f_2(x) = Σ_{i=0}^{m_2−1} b_i A_2^{(i)}, where, similarly, A_2^{(i)} = A_2 x^i mod f_2(x). Then, after comparing Equation (3) with Equation (5), we can obtain a unified expression in which j = 1 or 2 according to Equations (3) and (5), respectively. Then, we can have the following definitions: for any integer m_2, we have m_2 = w · d for a digit-size w and digit-number d (meanwhile, one can have m_1 = w · d_1); the operand bits b_i can then be grouped into w-bit digits so that the summation above is processed digit by digit. Similarly, the same grouping can be defined for the field-size m_1, where we assume m_2 − m_1 > w.
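Before the control-signal rewriting of Equation (10) below, the decomposition just described can be summarized in software form. The following is a hedged Python sketch of LSB-first digit-serial multiplication in GF(2^m), with operands held as integers whose bit i stores the coefficient of x^i; the digit-size w = 8, the function names, and the mirroring of the paper's MUX-based field-size selection are illustrative assumptions, while the two reduction polynomials are the NIST B-163 pentanomial and B-233 trinomial referred to later in the text.

# NIST reduction polynomials, written without the leading x^m term:
F163 = (1 << 7) | (1 << 6) | (1 << 3) | 1    # x^163 + x^7 + x^6 + x^3 + 1 (pentanomial)
F233 = (1 << 74) | 1                         # x^233 + x^74 + 1 (trinomial)

def gf2m_mul(a, b, m, f_low, w=8):
    # C = A * B mod f(x): B is consumed in ceil(m/w) digits of w bits,
    # while a is stepped through A^(i) = A * x^i mod f(x).
    mask = (1 << m) - 1
    acc = 0
    for i in range(0, m, w):                 # one w-bit digit of B per iteration
        digit = (b >> i) & ((1 << w) - 1)
        for j in range(min(w, m - i)):
            if (digit >> j) & 1:
                acc ^= a                     # accumulate b_{i+j} * A^(i+j)
            carry = (a >> (m - 1)) & 1       # multiply a by x, reduce mod f(x)
            a = ((a << 1) & mask) ^ (f_low if carry else 0)
    return acc

def hybrid_mul(a, b, use_m2):
    # Field-size selection, analogous to the control signal in the paper
    return gf2m_mul(a, b, 233, F233) if use_m2 else gf2m_mul(a, b, 163, F163)

In hardware, the inner accumulation corresponds to the work of the systolic PEs and the per-step update of a to the modular cell; the sketch fixes only the functional behavior, not the paper's structure.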
It is clear that we can now transfer Equation (6) into Equation (10), where ξ(k_j) takes effect only for the terms with m_1 ≤ i ≤ m_2 − 1, and where we can see that Equation (10) can be used to perform finite field multiplications of either field-size if we select the control signal properly. The above equations can thus be summarized as Algorithm 1. Algorithm 1. Proposed multiplication algorithm for hybrid field-size-based implementation. Inputs: A_1 and B_1 (also A_2 and B_2), the pairs of elements (in polynomial basis representation) in GF(2^m) for the field-sizes m_1 and m_2, respectively. The detailed processes of Steps 2.2 and 2.4 are the key multiplication processes. Note that, due to the difference in the field polynomials f_j(x), the processes of deriving the A_j terms differ slightly from each other. For instance, for a concrete choice of f_j(x), each A_j^{(i+1)} is obtained from A_j^{(i)} by one shift followed by a small number of XORs determined by the nonzero terms of f_j(x). Besides that, one has to note that the National Institute of Standards and Technology (NIST) has recommended five irreducible polynomials for ECC implementation [10,11] (three pentanomials and two trinomials). Without loss of generality, we can assume that f_1(x) (field-size m_1) is a pentanomial and f_2(x) (field-size m_2) is a trinomial. The structure presented below is also based on this assumption. Proposed Hybrid-Size Digit-Serial Systolic Multiplier In this section, we propose several optimization techniques to map the algorithm into the desired systolic structure. Specifically: Novel Input Data Broadcasting Scheme One major component of the register-complexity of a systolic finite field multiplier comes from input data broadcasting. In this subsection, we propose a novel input data broadcasting scheme in which the main inputs to each PE are fed independently of each other, so that the data dependence between the PEs is reduced to a minimum; this can significantly reduce the related register-complexity across the systolic array. Figure 1 shows how the proposed input data broadcasting technique is employed. As shown in Figure 1, according to Step 2.3 of Algorithm 1, each PE in the systolic array is fed with two inputs, namely A_j^{(i)} and the corresponding b_i. The output of each PE is then transferred to the next PE on its right. The complete output can be delivered after (d + w) cycles, with the help of an extra accumulation cell. Since differences exist among the A_j^{(i)}, we have used selective connections to connect each PE correctly according to Algorithm 1. Because only one signal is pipelined to the next PE, the register-complexity of the systolic array is significantly reduced. The details of the internal structures of these PEs are shown below. Note that, due to the simple internal structure of the PEs, i.e., the critical-path of each PE is quite small, the proposed broadcasting technique has very limited influence on the overall time complexity. Proper Arrangement of the Input Data Delivery The two inputs, i.e., A_j and B_j, must be properly arranged to meet the data dependence requirements for hybrid-size operation. For B_j, according to Algorithm 1, all bits are delivered in a grouped-sequential way, which can be realized by the structure shown in Figure 2. One can see that the shift-register produces the required output bits for each PE of Figure 1 based on Algorithm 1, while the hybrid-size selection is done by inserting an extra MUX (MUX is short for multiplexer) in the shifting path, such that the shift-register can work under the field-size of either m_1 or m_2 through the proper control of the MUX (control signal).
Figure 2. The proposed shift-register to deliver input data B_j. The operand A_j, through the help of PE-0, delivers the correct output bits to each PE according to Algorithm 1, which requires a more sophisticated structure, as shown in Figure 3a. From Equations (13) and (14), one can observe that one XOR gate is involved in obtaining each updated term, and that the identical bits, e.g., a_0, a_1, . . ., can be shared among the A_j^{(i)} (0 ≤ i ≤ 12), as shown by the example in Figure 3b (where we have shown how the MUXes are located to obtain the hybrid-size implementation). Since the other bits cannot be shared, we simply use a MUX to connect the two bits at the same position (according to Equations (8) and (9)) such that, through the proper working of these MUXes, the correct signals can be produced for the corresponding PE. One can also notice that, according to Equations (9), (13), and (14), with the help of a modular operation (done by the modular cell in Figure 3), PE-0 delivers the corresponding output to each PE, i.e., obtaining A_u from A_{u−1} (for 1 ≤ u ≤ d/d_1), which needs a delay time of 2T_X (T_X is the delay time of an XOR gate; the operation takes T_X for a trinomial-based multiplier and 2T_X for a pentanomial-based one [9]). Besides that, one has to note that all the A_j^{(i)} in one specific A_u can be obtained through the sharing of identical bits, as represented by the selective connections in Figures 1 and 3. Following this arrangement, the proposed hybrid-size structure operates in an ordered form according to Algorithm 1. Hybrid Accumulation The accumulation of the digit-serial operation also needs adjustment compared with the conventional ones. As shown in Figure 4, we have used an m_1-bit MUX cell to obtain the hybrid-size accumulation (the accumulation cell is realized through an XOR cell connected with a register cell in a feedback-loop style). Note that these m_1 bit-level MUXes connect with the m_1-bit output of PE-d_1, while the remaining (m_2 − m_1) bits of PE-d are directly connected with the accumulation cell. According to Equations (8) and (9) and Algorithm 1, we can let the MUX determine whether the multiplier is working under a field-size of m_1 bits or m_2 bits. Besides that, the number of output bits is also selected according to the specific chosen field-size, as shown in Figure 4, i.e., after the designated number of cycle periods, the output is produced based on the value of the control signal. Final Structure The internal structure of each PE is shown in Figure 5b, where it mainly consists of an AND cell, an XOR cell, and a register cell. With the combination of all the optimization techniques introduced above, we present the finalized hybrid-size digit-serial structure, as shown in Figure 5a. All the control signals connected with the inserted MUXes collaborate to switch the finite field multiplier from operating in one field-size to the other. After the designated cycle periods of accumulation, the multiplier delivers the desired output. Complexity and Comparison For simplicity of discussion, we follow the assumption in Section 3 that m_1 comes from a pentanomial while m_2 is the field-size of a trinomial. The detailed complexity of the proposed multiplier is as follows: (i) Systolic array: the systolic array has d PEs, where each PE has m_2 AND gates, m_2 XOR gates, and m_2 registers. (ii) Shift-register: the shift-register for B_j requires m_2 registers and one MUX.
(iii) Accumulation cell: the accumulation cell requires m_1 MUXes, m_2 XORs, and m_2 registers. (iv) PE-0: there are in total (3d_1 + d − 4) XOR gates, (4d_1 − 4) MUXes, and (m_2 + 3d_1 + d − 4) registers involved. Moreover, the proposed structure has a critical-path of (2T_X + T_M) (T_M is the delay time of a MUX), and it takes (d + w) cycles to produce the desired output for hybrid-size operation. Overall, the complexity of the proposed design is listed along with the existing digit-serial multipliers (trinomial- or pentanomial-based designs) in Table 1 in terms of logic gate number, register number, latency (number of cycle periods), and critical-path. Note that the designs of [5,6] are based on all-one-polynomials (or use all-one-polynomials as a computation core); we thus do not list them in Table 1, for a fair comparison. As shown in Table 1, one can see that the proposed hybrid-size digit-serial multiplier has relatively better area-time complexities than the existing ones, especially when considering that the proposed one can offer hybrid field-size operation (the existing ones are all single field-size based). To have a detailed comparison, we have also used NanGate's Library Creator and the 45-nm FreePDK Base Kit from North Carolina State University (NCSU) [12] to estimate the area and time complexities of all the designs for m_2 = 233, m_1 = 163, d = 16, and d_1 = 13. The obtained area, delay (latency time), power, area-delay product (ADP), and power-delay product (PDP) are listed in Table 2 for comparison. Again, we can observe that the proposed one has better performance than the existing ones, e.g., it has at least 7.3% less ADP than the best trinomial-based one of [8], while it offers the flexibility to also execute pentanomial-based multiplication. Compared with the existing pentanomial-based ones, the proposed one still has a better ADP when considering the scaling of the field-size. The proposed one also has 41.5% less ADP and 34.6% less PDP than the conventional hybrid field-size implementation (we have combined the best existing designs of [8,9] to realize it). [Table 2 groups the compared designs into digit-serial systolic structures (pentanomial of size m_1) and hybrid-size digit-serial systolic structures (pentanomial of size m_1 and trinomial of size m_2); its footnotes read: (1) delay = latency cycle number × critical-path; (2) the conventional implementation of two field-size finite field multipliers combines the best existing designs of [8,9].] The proposed hybrid-size digit-serial systolic multiplier, undoubtedly, can be extended as a standard IP core in various cryptosystems that demand different security levels. On the other hand, due to the low complexity of the proposed design, it can also be used in a cryptosystem for flexible operation, in case the user of that cryptosystem needs to change/upgrade the system. Moreover, it is worth mentioning that the proposed hybrid field-size strategy can also be extended to a multiple field-size implementation. Conclusions This paper presents a novel implementation of a hybrid field-size digit-serial systolic multiplier over GF(2^m). A novel digit-serial multiplication algorithm suitable for hybrid field-size realization is proposed first. Then, through a series of optimization techniques, the proposed algorithm is successfully mapped into a high-performance digit-serial systolic multiplier. The complexity analysis and detailed comparison have been given to confirm the efficiency of the proposed design.
Future work may focus on the application of the proposed design in various cryptosystems. Abbreviations The following abbreviations are used in this manuscript: IP, intellectual property; ECC, elliptic curve cryptography; PE, processing element; IC, integrated chip; NCSU, North Carolina State University.
4,075.4
2018-10-24T00:00:00.000
[ "Computer Science", "Engineering" ]
Local Scheduling in KubeEdge-Based Edge Computing Environment KubeEdge is an open-source platform that orchestrates containerized Internet of Things (IoT) application services in IoT edge computing environments. Based on Kubernetes, it supports heterogeneous IoT device protocols on edge nodes and provides various functions necessary to build edge computing infrastructure, such as network management between cloud and edge nodes. However, the resulting cloud-based systems are subject to several limitations. In this study, we evaluated the performance of KubeEdge in terms of the computational resource distribution and delay between edge nodes. We found that forwarding traffic between edge nodes degrades the throughput of clusters and causes service delay in edge computing environments. Based on these results, we proposed a local scheduling scheme that handles user traffic locally at each edge node. The performance evaluation results revealed that local scheduling outperforms the existing load-balancing algorithm in the edge computing environment. Introduction With the development of Internet of Things (IoT) technology, various IoT sensors and devices are being deployed daily, and there is an increase in the number of artificial intelligence services that can recognize IoT user behavior patterns and situations based on the data collected from IoT devices [1,2]. In such a scenario, the user data are typically transferred to the cloud located at the center of the network and are analyzed and processed using cloud computing resources. However, cloud-based systems have a centralized structural limitation in meeting the requirements of IoT application services [3,4], which require a low response latency that is within tens of milliseconds. Edge computing was proposed to solve this problem. Edge computing reduces the response time by placing computational resources on locally distributed edge nodes instead of transmitting the data to the central cloud, thereby meeting the requirements of time-critical IoT application services [5]. A container is a unit of software that packages the files, such as libraries, binaries, and other configuration files, required to run an application on an operating system (OS). Therefore, it provides the advantage of preventing program execution errors due to differing environments, such as network, security, and build environments, thus enabling applications to operate stably. Containers simplify the distribution, installation, update, and deletion of IoT application services on edge nodes due to their lightness and portability [6]. Moreover, various types of IoT application services can be provided simultaneously on each edge node. As such, containers are the most suitable technology for providing IoT application services in an edge computing environment. However, container orchestration is required to monitor and manage resource states across multiple edge nodes in an edge computing environment, because containers alone can be applied only to the deployment and management of application services on a single node [7,8]. KubeEdge enables container-based IoT application services to run on edge nodes. It incorporates the Kubernetes service used in cloud computing into edge computing. KubeEdge can control edge nodes in the same manner used to operate the Kubernetes cluster in the existing cloud environment and can readily distribute various IoT application services, such as machine learning, image recognition, and event processing, to edge nodes.
EdgeMesh [13] essentially supports service discovery and proxy functions for each pod in the application. It also provides load balancing by distributing user traffic to each pod in the cluster. However, this load-balancing function has a fundamental drawback in edge computing environments. In an edge computing environment, edge nodes are geographically dispersed, and the pods of the applications are also distributed throughout the edge nodes. In other words, the load-balancing function in EdgeMesh distributes the user traffic to the application pods in the cluster; however, it incurs latency when forwarding requests between edge nodes, thereby degrading the application throughput in the cluster [14]. To solve this limitation in a KubeEdge-based edge computing environment, we propose a local scheduling scheme that processes user traffic at the local node without forwarding the traffic to the remote nodes. Experimental evaluation results show that the local scheduling scheme can provide low latency as well as improve the throughput of the cluster by suppressing traffic forwarding in edge computing environments. The contributions of this study can be summarized as follows: • To the best of our knowledge, this study is the first to evaluate the performance of KubeEdge. We conducted diverse performance evaluations regarding the amount of computational resources, in other words, the pod distribution throughout edge nodes, and the delay between edge nodes. • It was observed that the throughput of the cluster can be degraded due to traffic forwarding between edge nodes. We address the delay caused by the load balancing of EdgeMesh, which negatively impacts the performance of edge computing environments. • To overcome the performance degradation in a KubeEdge-based edge computing environment, we propose a local scheduling scheme and compare its performance in terms of throughput and latency, which provides important lessons for operating the KubeEdge platform in an edge computing environment. The remainder of this paper is organized as follows. Section 2 introduces related research, and Section 3 describes the basic background of the KubeEdge architecture, its components, and EdgeMesh. Section 4 describes the system model and the problem definition as well as the proposed local scheduling scheme. Section 5 evaluates the diverse performance aspects of KubeEdge, such as the effect of the number of pods and the effect of node-to-node delay between edge nodes, and compares the proposed local scheduling scheme with EdgeMesh's round-robin scheme in terms of cluster performance. Finally, Section 6 concludes this paper. Related Work This section presents an analysis of studies related to KubeEdge and throughput improvement techniques in edge computing environments. KubeEdge was announced by Huawei [15] in 2018 as an open-source system that extends the functions of applications requiring service distribution, expansion, and management to edge hosts. Yang et al. [16] investigated artificial intelligence (AI) for networks (NET4AI) and EdgeMesh computing for networks. They extended the role of the cloud to communication networks and suggested a development direction for integrated communication systems. They fused KubeEdge technology with edge computing and mesh networking [17] and proposed the KubeEdge wireless platform for dynamic application services. The platform handles various objects, such as vehicles, people, and homes, connected to mesh networks, and shares computational resources.
In particular, subscribers are considered mobile routers that build dynamic mesh networks while supporting computational resource sharing and mesh network subscription. Zheng et al. [18] trained a lifelong learning model [19] to develop a lifetime thermal comfort prediction framework. It was developed based on KubeEdge-Sedna [20], an edge-cloud synergy AI project within KubeEdge, and was designed to automatically learn the passive functions of the existing model. Knowledge of the model, that is, meta-knowledge, can be used to predict the thermal comfort of people living indoors, and the approach can be extended to numerous building interiors and software contexts to estimate long-term thermal comfort. Han et al. [21] proposed EdgeGossip on the KubeEdge platform, aiming to quickly obtain model accuracy and avoid low-performance deviations during iterative deep-learning training. EdgeGossip balances training time by estimating the performance of multiple edge computing platforms during iterative training. It also provides the ability to use aggregated data points to identify areas related to the accuracy of the input data, improving best-effort model accuracy. EdgeGossip is implemented on the Gossip algorithm [22], and its effectiveness was demonstrated using real-time deep-learning workloads. Mutichiro et al. [23] proposed STaSA, an edge application scheduler that can satisfy the quality of service (QoS) requirements of users. The STaSA scheduler improves cluster resource utilization and QoS in edge-cloud clusters in terms of service time by automatically assigning requests to different processing nodes and scheduling execution according to real-time constraints. The performance of the proposed scheduling model was demonstrated on a KubeEdge-based implementation. Tran et al. [24] presented an NDN network over edge computing infrastructure to provide a disaster response support system. The authors defined emergency group communication and disaster information exchange through NDN. The feasibility of the proposed system was demonstrated by implementing the KubeEdge-based infrastructure with NDN IoT devices. With the development of container technology, studies on improving the production environment of container-based applications have been conducted. Abouaomar et al. [25] investigated resource provisioning at the network edge under latency and resource consumption constraints. By studying the frequency of resource allocation by the head of the edge node, they proposed a Lyapunov optimization framework on each edge device to reduce the number of resource allocation operations. Consequently, they validated that the proposed approach outperforms other benchmark approaches and provides low latency and optimal resource consumption. Taherizadeh et al. [26] proposed a dynamic multi-level auto-scaling technique for container-based application services, and [27-29] proposed Kubernetes-based resource provisioning and service quality improvement measures. Le et al. [27] addressed the limitation of the Kubernetes horizontal pod autoscaler, namely that it is not suitable for the varying traffic distributions with real-time service demand found in edge computing environments. They proposed a traffic-aware horizontal pod autoscaler to improve service quality by dynamically adjusting cluster resources according to the network traffic distribution. Nguyen et al.
[28] proposed an improved proxy for Kubernetes, referred to as RAP, which redirects requests that would otherwise suffer load-induced latency to other, more suitable nodes during load balancing. Gupta et al. [29] proposed a method to containerize and deploy deep-learning models to learn from edges and improve service quality by reducing data latency and traffic. In addition, the article EdgeX over Kubernetes [30] proposed a method to improve service quality by distributing the computational resources that IoT gateways handle, given the combination of cloud computing and edge computing platforms. Choi et al. [31] proposed an intelligent service management technique that can handle large amounts of data generated by a large number of devices in real time while solving various problems, such as connectivity and security, in an industrial IoT environment. Consequently, KubeEdge has been considered a key platform for building edge computing infrastructure and providing application services. Nevertheless, a comprehensive performance evaluation and analysis of KubeEdge has not been performed. In this study, we conducted an experimental performance analysis of KubeEdge in an edge computing environment. We observed that although the load-balancing feature of KubeEdge generally provides high availability and scalability of the cluster, it can degrade performance due to delays between edge nodes. Therefore, we propose a local scheduling scheme to overcome this problem and maximize the performance of KubeEdge-based edge computing environments. Preliminaries of KubeEdge This section introduces the KubeEdge architecture and main components, and how it works. We also discuss EdgeMesh, one of the important components, which provides load balancing in KubeEdge. KubeEdge Architecture KubeEdge [12] is a lightweight open-source edge computing platform developed under the Huawei initiative. It provides network management between edge nodes and the cloud, in addition to the maintenance of sessions when edge nodes are offline, as it was designed for edge computing environments from the start. It supports the MQTT protocol to enable resource-limited IoT edge devices to communicate efficiently. Figure 1 presents the architecture of KubeEdge, which consists of Cloud Core and Edge Core structures, unlike the Kubernetes master node and worker node structures [12]. Internet of Things (IoT) application services operate on Edge Core, which is geographically distributed in the edge layer, and Cloud Core manages the application services. Edge Core consists of EdgeD, EdgeHub, EventBus, DeviceTwin, and MetaManager. EdgeD runs and manages container-based applications. It helps the administrator deploy containerized workloads or applications at Edge Core. EdgeD provides diverse functionalities, such as pod management, a pod lifecycle event generator, secret management, and a container runtime, as well as deployment of workloads. EdgeHub supports functions such as updating resource synchronization in the cloud and changing the state of edge devices via socket connectivity between Cloud Core and Edge Core in edge computing environments. EdgeHub acts as the communication link between the edge and the cloud. EdgeHub forwards messages received from the cloud to the corresponding module at the edge and vice versa. EventBus provides MQTT clients with functions to interact with IoT edge devices and supports Publish/Subscribe functions, such as sending MQTT topics to CloudCore. DeviceTwin stores the state of IoT edge devices and synchronizes them to the cloud.
It also provides query interfaces for applications. MetaManager is a message processor between EdgeD and EdgeHub. It is also responsible for storing and retrieving metadata from a database. Cloud Core consists of controllers and CloudHub, and the controllers are composed of edge controller and device controller. Edge controller connects the Kubernetes application programming interface server (K8s API Server) and Edge Core. Edge controller adds, updates, deletes, monitors, and synchronizes events between the K8s API Server and Edge Core. Device controller is responsible for IoT device management. It synchronizes the IoT device updates from Cloud Core and Edge Core. CloudHub is a component of Cloud Core and is the mediator between controllers and the edge side. CloudHub monitors changes on Cloud Core, caches messages, and allows for communication between Edge Core and the controllers via socket communication with EdgeHub. EdgeMesh This subsection describes EdgeMesh, which is a data plane component of a KubeEdge cluster. EdgeMesh [13] provides service discovery and traffic proxy functionality within the KubeEdge cluster, in addition to the high availability of KubeEdge by connecting edge nodes using LibP2P [32]. In the case of Intra-LAN, communication between edge nodes is provided through direct access. For Cross-LAN, communication between edge nodes is supported via a tunneling technique using hole punching [33] or a traffic transfer technique via relay. Metadata is distributed via the EdgeHub-CloudHub tunnel. Thus, direct access to the cloud is not required, and by integrating the DNS server at the node layer, reliability can be maintained without access to the cloud CoreDNS when searching for services. EdgeMesh provides a load-balancing function using an Istio DestinationRule in the service. Typically, round-robin and random schemes are used. While the round-robin scheme distributes data equally, the random scheme randomly selects an endpoint and distributes data.
Local Scheduling Scheme in KubeEdge This section discusses how load-balancing algorithms such as the round-robin and random schemes operate in KubeEdge. By defining the problem of KubeEdge's load-balancing algorithms in an edge computing environment, we propose a local scheduling scheme to overcome the aforementioned problem and improve the throughput and latency in a KubeEdge-based edge computing environment. KubeEdge's Load-Balancing System This subsection describes KubeEdge's load-balancing system and its limitation. Generally, load balancing allows the distribution of the workload in an even manner among the available resources. Specifically, it aims to provide a continuous service in the event of a component failure by effectively provisioning application instances and resources. Furthermore, load balancing can reduce the task response time and optimize resource usage, thereby improving system performance at a reduced cost. Load balancing also offers scalability and flexibility for applications that may grow and require additional resources in the future. KubeEdge provides load balancing via EdgeMesh by distributing user requests equally across available pods. When an edge node receives user requests, it transmits them to the EdgeMesh-Agent, which then distributes the traffic to the remote edge nodes according to the load-balancing policies. Round-robin in Figure 2a and Random in Figure 2b are the representative load-balancing algorithms used in EdgeMesh, and their functions are discussed as follows. (a) Round-robin scheme: The round-robin scheme distributes user requests evenly among the pod resources. For example, in Figure 2a, four application pods are deployed across Edge nodes 1, 2, and 3. Assuming that four user requests are received at Edge node 1, Edge node 1 will distribute the incoming requests evenly to each pod. Thus, the first and second requests are handled by the pods in Edge node 1, while the third and fourth requests are transmitted to the pods of Edge nodes 2 and 3, respectively. (b) Random scheme: The random scheme distributes user requests randomly to any pod in the edge nodes. As shown in Figure 2b, the user requests received at Edge node 1 are distributed to individual pods throughout the cluster. For example, the first request is passed to a pod in Edge node 1, and the second request is passed to a pod in Edge node 3. Similarly, the third and fourth requests are passed to pods in Edge nodes 1 and 2, respectively.
It is interesting to note that the random scheme stochastically distributes traffic evenly to individual pods as the user traffic increases, which is similar to the round-robin scheme. Problem Definition and Local Scheduling Scheme In the load-balancing schemes in KubeEdge, the user traffic is evenly distributed regardless of the location of the edge node where each pod is placed. In other words, EdgeMesh in KubeEdge distributes the user traffic to the remote edge nodes without considering the delay in forwarding the requests. However, in an edge computing environment, the edge nodes are located far away from each other to cover a large-scale area, and the forwarding delay between the edge nodes is significant enough to degrade the throughput of the cluster. Therefore, we point out that load-balancing traffic to remote edge nodes degrades the performance of the KubeEdge cluster in an edge computing environment. To solve the aforementioned problem, we propose a local scheduling scheme that processes user requests via pods located at the local node that receives them. In the local scheduling scheme, rather than transmitting the user requests to remote pods, they are distributed equally to the pods in the edge node that receives the user requests. For example, in Figure 3, four user requests are handled by two pods located at Edge node 1 without forwarding them to the pods in the remote edge nodes. In this way, the proposed scheme reduces the latency by preventing traffic forwarding between edge nodes in an edge computing environment and improves the throughput of the overall system by handling the user traffic immediately at the local edge nodes.
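The effect of these three policies can be illustrated with a toy dispatch model. This is a hedged sketch rather than KubeEdge/EdgeMesh code: the per-request service time, the inter-node forwarding delay, and the assumption that requests arrive only at Edge node 1 (index 0) are all hypothetical, and queueing at the pods is ignored.

import itertools, random

def mean_response_ms(policy, pods, arrivals=12000, service_ms=7.0, hop_ms=15.0):
    # pods[i] = number of application pods on edge node i; requests arrive at node 0.
    endpoints = [n for n, k in enumerate(pods) for _ in range(k)]  # EdgeMesh-style endpoint list
    rr = itertools.cycle(endpoints)
    total = 0.0
    for _ in range(arrivals):
        if policy == "round_robin":
            node = next(rr)
        elif policy == "random":
            node = random.choice(endpoints)
        else:                                # "local": never leave the receiving node
            node = 0
        forwarding = 0.0 if node == 0 else 2 * hop_ms   # forward hop plus return hop
        total += service_ms + forwarding
    return total / arrivals

for policy in ("round_robin", "random", "local"):
    print(policy, mean_response_ms(policy, pods=[4, 4, 4]))

With a 4-4-4 pod distribution, round-robin and random send 8 of every 12 requests to a remote node and thus pay the 30 ms round trip on two thirds of the traffic, while the local policy pays none of it; the caveat is that the local policy only wins while the local pods are not saturated, which is the situation the evaluations below are designed around.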
Performance Evaluations In this section, we first describe the experimental setup of a KubeEdge-based edge computing environment. Then, we evaluate the performance of KubeEdge in terms of the number of pods on individual edge nodes, the pod distribution on edge nodes, and the delay between edge nodes, by measuring the throughput and delay of individual edge nodes under increasing concurrent requests. We also compare the cumulative throughput and response time of the round-robin and local scheduling schemes to validate the feasibility of the local scheduling scheme in an edge environment. Experimental Setups The KubeEdge cluster used for the performance evaluation consisted of one cloud node and three edge nodes, as shown in Figure 4. The cloud node runs with 4 central processing unit (CPU) cores and 8 GB of RAM, whereas the edge nodes run with 4 CPU cores and 4 GB of RAM. All nodes were installed with Docker version 20.10.14, KubeEdge version 1.9.1, and Ubuntu 18.04.5, with Kubernetes API version 1.21.0 installed at the cloud node. The controllers provided a scheduler function by distributing the pods to the edge nodes; they were set to distribute the pods manually during the evaluation. The throughput was measured as the number of requests handled per second, and the response time was measured as the average time that individual requests are processed by the edge node, including the forwarding latency. The measurements were repeated 10 times to ensure that the results obtained were accurate, and the HTTP load-generator HEY tool [34] was used to generate the traffic.
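As a rough stand-in for the HEY load generator, the sketch below issues a fixed number of concurrent request streams against one endpoint and reports throughput and mean response time. It is illustrative only: the URL is a placeholder, errors are not handled, and the measurements in the paper were taken with HEY itself.

import time, urllib.request
from concurrent.futures import ThreadPoolExecutor

def measure(url, concurrency=16, requests_per_worker=100):
    # Crude analogue of `hey -c <concurrency>`: each worker issues sequential requests.
    latencies = []
    def worker():
        for _ in range(requests_per_worker):
            t0 = time.perf_counter()
            urllib.request.urlopen(url).read()
            latencies.append(time.perf_counter() - t0)
    t_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as ex:
        for _ in range(concurrency):
            ex.submit(worker)                 # the with-block waits for all workers
    elapsed = time.perf_counter() - t_start
    n = len(latencies)                        # completed requests only
    return n / elapsed, 1000.0 * sum(latencies) / n   # req/s, mean latency in ms

# throughput, mean_ms = measure("http://edge-node-1:8080/")   # placeholder endpoint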
Effect of Number of Pods This subsection evaluates the effect of the number of pods with increasing concurrent requests. Notably, in this evaluation we focused on a single location (Edge node 1). While increasing the concurrent requests at Edge node 1 from 1 to 16, we measured the throughput and response time when the number of pods was 1, 2, and 4, respectively. As shown in Figure 5a, the throughput of Edge node 1 tends to increase as the incoming concurrent requests increase. However, it is noticeable that the throughput is bounded at a certain level with respect to the number of pods. For example, when the number of concurrent requests was 1, a throughput of approximately 139 req/s was noted regardless of the number of pods; this observation indicates that a single pod is able to handle the incoming user requests at that load. When the number of concurrent requests was increased to 16, the maximum throughput of one pod was 308 req/s, whereas four pods could handle 779 req/s. This indicates that an individual pod has its own capacity in terms of handling requests, and the throughput can be increased via the cooperation of multiple pods. In addition, Figure 5b indicates that the average response time can be decreased by exploiting multiple pods in the edge node. For instance, the average response time decreased from 113 ms for one pod to 42 ms for four pods when the number of concurrent requests was 16. Effect of Pod Distribution and Delay between Edge Nodes We evaluated the effect of pod distribution on edge nodes as well as the delay between edge nodes while increasing the number of concurrent requests. To analyze the effect of pod distribution, we allocated different numbers of pods to the three edge nodes.
For example, 4-4-4 indicates that the three edge nodes have the same number of pods, that is, 4 pods each, while 8-3-1 indicates that Edge nodes 1, 2, and 3 are allocated 8 pods, 3 pods, and 1 pod, respectively. For the evaluation, we increased the number of concurrent requests accessing Edge node 1 from 1 to 16. Notably, the incoming traffic at Edge node 1 is load-balanced to Edge nodes 2 and 3 by the EdgeMesh module of KubeEdge, where we used the round-robin scheme for load balancing. It is noticeable that the random scheme has a tendency of traffic distribution similar to the round-robin scheme for high amounts of traffic, from the stochastic point of view. Thus, both the round-robin and random schemes distribute the incoming traffic to Edge nodes 1, 2, and 3 in a ratio of 4:4:4 when 4-4-4 pods are distributed on the three nodes. Similarly, when pods are distributed in a proportion of 8-3-1, the incoming traffic is distributed to Edge nodes 1, 2, and 3 in a proportion of 8:3:1, because the round-robin scheme follows the policy of distributing the traffic evenly to each pod. To measure the effect of delay between edge nodes in an edge computing environment, we repeated the same evaluations while varying the delay between the edge nodes as 0, 15, and 30 ms. Since the traffic forwarded to a remote edge node is returned to Edge node 1 as a response, we measured the throughput and the average response time handled at Edge node 1 in a manner similar to that in the previous subsection. Figure 6a-c present the throughput when the pod distribution to the edge nodes is 4-4-4, 8-3-1, and 10-1-1, respectively, while Figure 6d-f show the corresponding average response times. In Figure 6a-c, there is no difference in throughput according to the pod distributions when the delay between edge nodes is 0 ms. For example, when the number of concurrent requests is 1, the throughputs are approximately 132 req/s, 127 req/s, and 145 req/s for the pod distributions 4-4-4, 8-3-1, and 10-1-1, respectively. When the number of concurrent requests increases to 16, the throughputs increase to approximately 1796 req/s, 1677 req/s, and 1726 req/s, respectively. In addition, the response times in Figure 6d-f remain steady at 6~9.5 ms across the number of concurrent requests, irrespective of the pod distribution. Thus, we can conclude that the KubeEdge cluster provides the same performance regardless of the pod distribution in the case that there is no delay between edge nodes, because there is then no difference between handling traffic locally and remotely. When a delay between edge nodes is introduced, however, the throughput depends on how much of the traffic is forwarded to remote nodes. In other words, 2 out of 12 requests in the 10-1-1 pod distribution are forwarded to the remote edge nodes, whereas 8 out of 12 requests in the 4-4-4 pod distribution are handled by remote edge nodes. Therefore, more than 50% of the throughput was degraded in the 4-4-4 pod distribution, in contrast to that of the 10-1-1 pod distribution. Interestingly, this throughput degradation becomes more severe for a higher delay between edge nodes, and the response times in Figure 6d-f show a similar tendency. The average response time in the 4-4-4 pod distribution is approximately 10~14.5 ms for a 15 ms delay, and it increases to 15~20 ms for a 30 ms delay, while that of the 10-1-1 pod distribution does not show any significant dependence on the delay between edge nodes.
In summary, the important lesson is that although the load balancing of EdgeMesh is designed to efficiently utilize the pod resources deployed on the edge nodes, the throughput and the average response time can be degraded by the delay between edge nodes when they are geographically distributed in an edge computing environment. Effect of Load-Balancing Schemes We evaluated the effect of the load-balancing schemes by comparing the round-robin scheme in EdgeMesh and the proposed local scheduling scheme. To analyze the performance in an edge computing environment, we used a different traffic distribution for each pod distribution. In detail, we used 4:4:4, 8:3:1, and 10:1:1 traffic distributions for the 6-6-6, 12-5-1, and 16-1-1 pod distributions, where x-y-z represents the pod distribution across the edge nodes in the KubeEdge cluster and x:y:z denotes the distribution of the traffic accessing each edge node. We used 18 pods in the cluster and differentiated only the pod distribution. In the same way, 12 concurrent requests were generated, and the traffic distribution was designed to follow the pod distribution ratio to ensure that each edge node utilizes its pod resources fully. In addition, we set the delay between edge nodes to 15 ms. Figure 7 presents the throughput and the response time of the round-robin and local scheduling schemes as the number of concurrent requests increases. As shown in Figure 7a, the throughput of the round-robin scheme is 871 req/s for the 4:4:4 traffic distribution, while it achieves 1050 req/s for the 10:1:1 traffic distribution. This indicates that the throughput decreases as the amount of traffic delivered to the remote edge nodes increases, as already discussed in the previous subsection. In the 4:4:4 traffic distribution, Edge node 1 forwards 12/18 of the incoming traffic to the remote edge nodes, since the pods are distributed in the ratio of 6-6-6 to Edge nodes 1, 2, and 3. Similarly, Edge nodes 2 and 3 forward 12/18 of their incoming traffic to the remote edge nodes while handling the remainder of the traffic. In summary, Edge nodes 1, 2, and 3 forward 4×12/18, 4×12/18, and 4×12/18 of the incoming traffic, respectively, to remote edge nodes in the 4:4:4 traffic distribution. However, the 10:1:1 traffic distribution is evaluated using a 16-1-1 pod distribution over the edge nodes, and Edge nodes 1, 2, and 3 transmit 10×2/18, 1×17/18, and 1×17/18 of the incoming traffic, respectively, to the remote edge nodes. Therefore, we can conclude that the 10:1:1 traffic distribution sends less traffic to the remote edge nodes compared with the 4:4:4 traffic distribution. This leads to less degradation of the throughput compared with the case of the 4:4:4 traffic. The average response times in Figure 7c confirm this traffic analysis. While all three edge nodes in the 4:4:4 traffic distribution show a response time of approximately 17 ms, Edge node 1 in the 10:1:1 case shows the lowest average response time of 6 ms, with 16/18 of its incoming traffic handled locally. On the other hand, Edge nodes 2 and 3 in the 10:1:1 traffic distribution show a response time of approximately 50 ms, because 17/18 of their incoming traffic is handled by the remote edge nodes. In the local scheduling scheme, all the incoming traffic is processed at the edge nodes that receive the traffic; thus, performance degradation due to traffic forwarding to remote edge nodes does not occur. As a result, the local scheduling scheme achieves high throughput regardless of the traffic distribution.
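The forwarding fractions quoted above follow directly from the pod proportions, since round-robin spreads each node's incoming traffic over all pods in the cluster. A small sketch repeating the paper's 18-pod arithmetic:

def remote_fraction(pods, node):
    # Fraction of the traffic arriving at `node` that round-robin forwards
    # to remote nodes when requests are spread pod-proportionally.
    return (sum(pods) - pods[node]) / sum(pods)

for name, pods in {"6-6-6 pods (4:4:4 traffic)": [6, 6, 6],
                   "16-1-1 pods (10:1:1 traffic)": [16, 1, 1]}.items():
    print(name, [round(remote_fraction(pods, i), 2) for i in range(3)])
# 6-6-6  -> every node forwards 12/18 ~ 0.67 of its traffic
# 16-1-1 -> node 1 forwards 2/18 ~ 0.11; nodes 2 and 3 forward 17/18 ~ 0.94

These fractions, weighted by the per-node traffic shares, reproduce the 4×12/18 and 10×2/18 terms in the analysis above.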
It can be observed from Figure 7b that the 4:4:4, 8:3:1, and 10:1:1 traffic distributions achieve throughputs of approximately 1493, 1646, and 1644 req/s, respectively. It is also observed that local scheduling eliminates the request forwarding latency between the edge nodes, which results in a low response time of approximately 8 ms regardless of the traffic pattern. Therefore, the local scheduling scheme achieves the maximum throughput by handling all the incoming traffic using the local edge nodes in an edge computing environment where the edge nodes are geographically distributed. Conclusions KubeEdge is a representative open source-based edge computing platform that extends the core functionalities of Kubernetes to the edge. We conducted diverse performance evaluations of KubeEdge in an edge computing environment in terms of the throughput and response time according to the pod distribution and the delay between edge nodes. On the basis of an experimental analysis, we found that traffic forwarding from load balancing can degrade the throughput of the cluster in an edge computing environment due to the geographical distribution between the edge nodes. To overcome this problem, we propose a local scheduling scheme that handles traffic using local edge nodes. The evaluation results show that the local scheduling scheme outperforms the round-robin scheme in terms of the cumulative throughput and response time, regardless of the traffic patterns. We expect that the local scheduling scheme will be used to optimize the performance of edge computing environments. In the future, we will study the dynamic resource orchestration to adjust the containerized resources according to traffic demand.
8,862
2023-01-30T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Green Synthesis and Characterization of Silver Nanoparticles Using a Lythrum salicaria Extract and In Vitro Exploration of Their Biological Activities

This research describes an eco-friendly green route for the synthesis of AgNPs using an aqueous extract of Lythrum salicaria. A Taguchi design was used to optimize the synthesis method, taking into account various working conditions. The optimum parameters were established as a 3 mM AgNO3 concentration, a 1:9 extract:AgNO3 volume ratio, a pH value of 8, a 60°C temperature, and a 180 min reaction time. The synthesized AgNPs were characterized using UV-Vis and FTIR spectroscopy, and TEM and EDX analysis. The SPR band at 410 nm, as well as the functional groups of biomolecules identified by FTIR and the EDX signal at ~3 keV, confirmed the synthesis of spherical AgNPs. The average AgNPs size was determined to be 40 nm through TEM, and the zeta potential was −19.62 mV. The antimicrobial assay showed inhibition against S. aureus and C. albicans. Moreover, the results regarding the inhibition of lipoxygenase and of peroxyl radical-mediated hemolysis were promising and justify further antioxidant studies.

Introduction
Noble metal (e.g., Pt, Ag, Au) nanoparticles have gained researchers' attention due to their multiple benefits in various fields, such as medicine, the food industry, materials science, physics, and chemistry. In the biomedical field, noble metal nanoparticles are versatile agents used in drug delivery, diagnosis, radiotherapy enhancement, photo-ablation procedures, and hyperthermia studies [1,2]. The photophysical properties of these metals, such as facile synthesis in different shapes and sizes, easy derivatization with chemical and biomolecular ligands, biocompatibility, and high stability, are the attributes that most recommend nanoparticles to industry [3]. Among such nanoparticles, silver nanoparticles (AgNPs) have gained special attention due to their superior electrical conductivity, chemical stability, controlled geometry, and catalytic and antibacterial properties [4]. AgNPs are used in many medical applications, including sensing devices, coating materials, catheters, bone cement and wound dressings [5]. Several methods can be applied for AgNPs synthesis, each presenting advantages and disadvantages. A general classification divides them into bottom-up and top-down techniques. In the top-down category, the principle relies upon reducing the size of a bulk material by mechanical milling, laser ablation or sputtering. Although the final product presents uniform physico-chemical properties, these methods also involve high energy consumption. The bottom-up techniques use molecules or atoms to prepare nanoparticles through various processes.

Obtaining of Extracts
For the preparation of the L. salicaria extract, 10 g of plant material, as ground aerial parts (purchased from a local natural and biological products market), were added to 100 mL of distilled water at an adjusted temperature of 40°C, while continuously stirring at 500 rpm for 30 min. After the mixture had cooled to room temperature, filtration was carried out using Whatman filter paper no. 1, and the filtrate was stored at 4°C for further experiments.

Preparation and Optimization of Nanoparticles
The Taguchi design was used to study the AgNPs synthesis conditions in an experiment that uses minimal resources; in our case, five parameters were tested.
AgNPs were synthesized by adding different concentrations of extract to silver nitrate (AgNO3) solution, in various volume ratios. These mixtures were subjected to magnetic stirring at 500 rpm, at different pH values adjusted using 1 M HCl and 1 M NaOH, and at different temperatures and time intervals. The change in color was monitored and the maximum absorbance was recorded in the 300-600 nm range by UV-Vis spectroscopy. The analysis was performed in triplicate. The formed AgNPs were separated by centrifugation at 8000 rpm for approximately 30 min. After removing the supernatant, the AgNPs were purified by washing three times with distilled water and eventually dried at 40°C until constant mass.

The theoretical concentration of AgNPs in the colloidal synthesis mixture was calculated using a mathematical method [18,19]. Firstly, the average number of atoms per nanoparticle (N) was determined through Equation (1):

N = (π · ρ · D³ · N_A) / (6 · M)    (1)

where π = 3.14, ρ = 10.5 g/cm³ (density of face-centered cubic silver), D = 133.5 × 10⁻⁷ cm (average diameter of nanoparticles), M = 107.868 g/mol (atomic mass of silver) and N_A = 6.023 × 10²³ mol⁻¹ (Avogadro's number). Secondly, the molar concentration of nanoparticles in the resulting colloidal mixture (C) could be calculated only by assuming that all silver ions (Ag⁺) were entirely converted to AgNPs in the biosynthesis process. Equation (2) was applied for the determination of C:

C = N_T / (N · V · N_A)    (2)

where N_T = total number of Ag atoms added as AgNO3 (3.0 mM = 0.003 M), N = average number of atoms per nanoparticle, V = reaction volume (0.22 L) and N_A = Avogadro's number [18,19].

AgNPs Characterization
Regarding the physico-chemical characterization of the AgNPs, the change in the color of the mixture was initially monitored visually, and then the UV-Vis spectra were recorded, observing the presence of the peak due to surface plasmon resonance (SPR). The absorbances were recorded with a Jasco V-530 UV-Vis double-beam spectrophotometer (Tokyo, Japan) in the 300-600 nm range. The functional groups involved in the green synthesis of AgNPs were analyzed by FTIR spectroscopy (Bruker Vertex 70 spectrophotometer, Bruker, Billerica, MA, USA) in the 4000-310 cm⁻¹ range, comparing the spectra of the extract and of the AgNPs. The other characteristics of the AgNPs were examined by DLS (hydrodynamic diameter and zeta potential), using a Delsa Nano Submicron Particle Size Analyzer (Beckman Coulter Inc., Fullerton, CA, USA), and by TEM (dimensions and morphology), using a Hitachi High-Tech HT 7700 Transmission Electron Microscope (Hitachi High-Technologies Corporation). Furthermore, the total phenolic content of the L. salicaria extract (diluted correspondingly) and of the supernatant (after the first separation of AgNPs) was determined using a previously described UV-Vis spectrophotometric method, with gallic acid as standard [20]. The analysis was performed in triplicate and the results were expressed as mg gallic acid equivalents (GAE) per mL of sample.

Antimicrobial Testing
The synthesized AgNPs and the corresponding extract were investigated for antimicrobial activity, using disk diffusion methods [21,22], against bacterial (Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa ATCC 27853) and fungal (Candida albicans ATCC 90028) pathogens, obtained from the Culture Collection of the Department of Microbiology, "Grigore T. Popa" University of Medicine and Pharmacy, Iasi, Romania. Standard culture media, such as Mueller-Hinton agar (Biolab) for fungi and Mueller-Hinton agar (Oxoid) for bacteria, were used.
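Returning briefly to Equations (1) and (2), the short Python sketch below evaluates them with the constants quoted above. It is illustrative only, not code from the study; note that C scales with 1/D³, so substituting the 40 nm TEM core diameter for the 133.5 nm hydrodynamic diameter shifts the result by more than an order of magnitude.

```python
# Minimal sketch of Equations (1) and (2); constants as given in the text.
from math import pi

rho = 10.5      # g/cm^3, density of face-centered cubic silver
D   = 133.5e-7  # cm, average nanoparticle diameter (DLS hydrodynamic value)
M   = 107.868   # g/mol, atomic mass of silver
N_A = 6.023e23  # 1/mol, Avogadro's number

# Equation (1): average number of Ag atoms per nanoparticle.
N = pi * rho * D**3 * N_A / (6 * M)

# Equation (2): molar concentration of nanoparticles, assuming complete
# reduction of the added Ag+ (0.003 mol/L in a 0.22 L reaction volume).
V   = 0.22                 # L, reaction volume
n_T = 0.003 * V * N_A      # total number of Ag atoms added as AgNO3
C = n_T / (N * V * N_A)

print(f"N = {N:.3g} atoms/particle, C = {C:.3g} mol/L")
```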
After the samples and positive controls (discs containing 5 µg ciprofloxacin and 25 µg fluconazole, respectively) were applied, the plates were incubated at 35°C for 24 h. All experiments were conducted in triplicate and, after the inhibition zones were measured (mm), the mean ± standard deviation was calculated. The Minimal Inhibitory Concentration (MIC) and the Minimal Bactericidal Concentration (MBC) or Minimal Fungicidal Concentration (MFC) were determined by the broth microdilution method. Successive two-fold dilutions of the tested samples in Mueller-Hinton broth were inoculated with the test microorganism suspension.

Antioxidant Activity
The antioxidant activities of the samples (AgNPs and extract) were determined using 15-lipoxygenase (LOX) inhibition and peroxyl radical-mediated hemolysis inhibition assays. Regarding the first method, after the mixture of lipoxygenase (in pH 9 borate buffer solution) and samples was left to stand in the dark at room temperature for 10 min, linoleic acid (in pH 9 borate buffer solution) was added, and the absorbances were recorded at 234 nm [23]. The second method focused on determining the capacity to inhibit erythrocyte hemolysis mediated by peroxyl free radicals [24]. Samples treated with a solution of 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH) (in phosphate buffer, pH 7.4) and an erythrocyte suspension (10% in 0.9% saline) were maintained for 3 h at 37°C, cooled to room temperature, diluted with phosphate buffer (pH 7.4) and, eventually, centrifuged for 10 min. Simultaneously, a control solution containing only AAPH and erythrocyte suspension was prepared. The absorbances were recorded at 540 nm. All experiments were performed in triplicate and gallic acid was used as standard. For samples that showed an activity of more than 50%, the EC50 value was also calculated and expressed as µg extract/mL final solution.

Taguchi Design Experiment for Optimization of Reaction Parameters
The optimization of parameters for a more efficient AgNPs synthesis was based on an L9 orthogonal array design, a method that combines mathematical and statistical principles in order to obtain predictive knowledge of a complex process with several variables, but with a reduced number of trials [25,26]. This design presents several advantages over traditional, more laborious and time-consuming optimization methods. In the traditional methods, each variable is evaluated one by one while the others remain constant, and, therefore, no data are obtained on what happens when variables change simultaneously [27]. The model was obtained using the variables presented in Table 1 at levels 1, 2 and 3. In order to process the data, the S/N ratio was calculated using Equation (3). The S/N ratio measures the deviation of the quality characteristic from the desired value and represents the ratio of the target value (signal) to the standard deviation of the response variable (noise); it was calculated by selecting the formula corresponding to the "larger is better" quality characteristic:

S/N = −10 · log₁₀[(1/n) · Σ (1/yᵢ²)]    (3)

where n = number of experiments and yᵢ = response variable for experiment i (absorbance) [26,27]. The obtained results for the S/N ratio are presented in Table 2.
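A minimal Python sketch of Equation (3), together with the per-level averaging used later for the main effects (Table 3), is given below; the absorbance replicates are placeholders, not the study's Table 2 data.

```python
# Minimal sketch of the "larger is better" S/N ratio (Equation (3)) and the
# per-level main-effect averages used to pick optimal factor levels.
from math import log10

def sn_larger_is_better(ys):
    """S/N = -10*log10((1/n) * sum(1/y_i^2)) over replicate responses y_i."""
    return -10 * log10(sum(1 / y**2 for y in ys) / len(ys))

# Example: three replicate absorbance readings for one L9 run (placeholders).
print(round(sn_larger_is_better([0.81, 0.84, 0.79]), 2))

def level_means(levels, sn):
    """Average S/N per level of one factor; the level with the highest mean
    is taken as optimal. levels[i] is the factor's level in run i."""
    out = {}
    for lv, s in zip(levels, sn):
        out.setdefault(lv, []).append(s)
    return {lv: sum(v) / len(v) for lv, v in out.items()}
```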
Each line in the matrix contains the numbers corresponding to the levels at which the factors A, B, C, D and E are set in each experiment. Taking into account the average value of absorbance and the S/N ratio calculated for each level, the optimal synthesis conditions were A3B1C3D3E3 (run L7: A at level 3, B at level 1, and C, D and E at level 3), followed by A2B1C2D2E3 (run L4); therefore, we could say that, in both cases, a 1:9 extract:AgNO3 volume ratio and a 180 min reaction time were optimal. The average S/N ratio for each level is presented in Table 3 and the main effect of each variable is graphically represented in Figure 1. In order to establish the optimal synthesis conditions, the maximum S/N value was investigated. Considering the S/N ratio, the optimum conditions for AgNPs synthesis were A2B1C3D3E3, with a 3 mM AgNO3 concentration, a 1:9 extract:AgNO3 ratio, a pH value of 8, a 60°C temperature and a 180 min reaction time. To the best of our knowledge, the present study is the only one that has used the Taguchi model to establish the reaction conditions for AgNPs synthesized using an L. salicaria extract. Moreover, traditional optimization of the synthesis can be found in a single study, carried out by Srećković et al., which led to the following conditions: 20 mM AgNO3 concentration, 25°C, pH 12, and 30 min reaction time [17]. In both cases, the synthesis was optimal at alkaline pH, which can be explained by the change in the dissociation constants of the functional groups involved in the reduction process, leading to an increased availability of compounds for the synthesis process [28].

Physico-Chemical Characterization of AgNPs
AgNPs synthesis by means of green methods represents a focus for scientists, given the lower costs and the ease of the monitoring and sampling processes. Moreover, plants are easy to grow and safe to handle, and, implicitly, this type of eco-friendly synthesis could replace chemical techniques for obtaining AgNPs [29].
The steps implied by AgNPs synthesis are the reduction of Ag⁺ and the agglomeration of colloidal nanoparticles with the formation of oligomeric clusters [30]. The first indicator of AgNPs synthesis is noticed through visual observation, but the formation of nanoparticles must be confirmed by other methods as well. Therefore, we continued monitoring the color change by means of UV-Vis spectroscopy (Figure 2). As seen in the inset of Figure 2, the initial color of the extract was yellow, changing to dark brown after adding AgNO3 under the specific conditions. When analyzing the UV-Vis spectra, no absorption peak was observed in the 300-600 nm range for either the extract or the AgNO3 solution, but a distinct peak at 410 nm was revealed for the obtained colloidal solution. This peak could be due to excitation of the SPR [31], which could determine the optical, physical and chemical properties of AgNPs [29]. The presence of a single SPR peak is consistent with the spherical shape of the AgNPs [32]. The obtained results were in accordance with other studies focusing on AgNPs synthesized using L. salicaria extracts, in which the SPR band was observed at 415 nm [16] or in the 396-415 nm range [17]. The calculated concentration of AgNPs in the colloidal solution was found to be 2 × 10⁻¹⁰ mol/L.

Differences in the position and appearance of the peak, even for the same plant species, can be explained by different harvesting areas and different conditions for extract preparation and AgNPs synthesis. Nonetheless, these conditions can also lead to modifications in the shape and size of the AgNPs. A wide peak generally indicates larger particles [29]. Moreover, their shape can be predicted from the position of the SPR band; for example, in the 400-490 nm range, the particles are spherical [33]. The mechanism of AgNPs synthesis is explained through biomolecules present in the extract (polyphenols, phenolic acids, phytosterols, alkaloids, proteins, enzymes, sugars, etc.), which are compounds capable of donating electrons, so that the reduction from Ag⁺ to Ag⁰ can take place. This process is generally followed by agglomeration of free silver atoms and, eventually, by the formation of the AgNPs colloidal solution. Such biomolecules also participate in the functionalization and stabilization of AgNPs [34,35].
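As an aside, the SPR-based readout described above is easy to automate. The following sketch uses a synthetic spectrum (not the Figure 2 data) to locate the absorbance maximum in the 300-600 nm scan range and applies the cited 400-490 nm spherical-shape heuristic [33]:

```python
# Illustrative sketch: locating the SPR band in a recorded UV-Vis spectrum
# (wavelengths in nm with matching absorbances); the data here are synthetic.
import numpy as np

wl = np.arange(300, 601, 1)                     # 300-600 nm scan range
absorbance = np.exp(-((wl - 410) / 45.0) ** 2)  # stand-in for measured data

peak = wl[np.argmax(absorbance)]
print(f"SPR maximum at {peak} nm")
# A single band in the 400-490 nm range suggests roughly spherical particles.
print("spherical-range band:", 400 <= peak <= 490)
```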
For example, many studies demonstrate that phenolic acids are among the most important bioactive substances participating in AgNPs synthesis, and even more so in the stabilization of the nanoparticles, therefore having a synergistic effect. Yee-Shing Liu et al. [36] combined their research with that of other studies and proposed a mechanism for AgNPs formation using caffeic acid, one of the phenolic acids present in rice husk extract, along with gallic, protocatechuic, ferulic, vanillic and syringic acids. Caffeic acid in alkaline medium releases electrons that are transferred to Ag⁺, which is reduced to Ag⁰. In turn, caffeic acid is transformed into a free radical which reduces another Ag⁺ and is converted to ortho-quinone. By the coupling of two caffeic acid free radicals, an oxidative dimer is formed, which releases four electrons that participate in the reduction of Ag⁺ while being transformed into a quinone. The formed quinones can bind to AgNPs and cause steric hindrance, thus stabilizing the particles [36]. Flavonoids can also donate electrons or hydrogen, and the keto form of the nucleus reduces Ag⁺ to Ag⁰ [33]. Thus, the total phenolic content of the extract used for synthesis (in suitable dilution) and of the first collected supernatant was measured, so as to estimate the amount of such compounds involved in the synthesis process.
If, initially, the phenolic content of the extract was 1.2404 mg GAE/mL, after the first separation the supernatant had a remaining content of only 0.1122 mg GAE/mL, which could confirm the participation of the bioactive compounds in the synthesis process. Other examples of biosubstances involved in AgNPs synthesis are triterpenes. Aazam et al. [37] proposed a mechanism for the synthesis of AgNPs using ursolic acid, the main constituent of an Ocimum sanctum extract. Following the redox reaction, the -OH group of the ursolic acid structure deprotonates and oxidizes with the formation of a radical, promoting the reduction of silver from Ag⁺ to Ag⁰. Further agglomeration and formation of oligomeric clusters occur. During these steps, several species of AgNPs can be formed, given that Ag⁺ reacts with Ag⁰, forming Ag₂⁺, which dimerizes to Ag₄²⁺. In order to highlight the functional groups and, implicitly, the compounds that participate in AgNPs synthesis, FTIR analysis was used. Figure 3 presents the comparative FTIR spectra of the extract and of the corresponding AgNPs. The FTIR spectrum of the extract shows a broad absorption peak at 3452 cm⁻¹, related to the stretching vibration of -OH groups found in alcohols and phenols and to N-H stretching from amides.
The peaks at 2924 cm⁻¹ and 2854 cm⁻¹ corresponded to C-H stretching and bending vibrations of CH3 and CH2 (alkanes) [19,28]. The bands detected at 1736 cm⁻¹ and 1624 cm⁻¹ were connected to the vibration of C=O from amides and carboxylic groups, and to -C=C- aromatic rings, respectively [38]. The bands at 1452 cm⁻¹ and 1364 cm⁻¹ could be attributed to stretching vibrations of C-O of carboxylic acids or esters, to bending vibrations of N-H in amides, to stretching vibrations of N-H in secondary amines, or to the C-N stretching vibration of aromatic amines. Furthermore, the 1184 cm⁻¹ and 1038 cm⁻¹ bands could be attributed to the C-O stretching vibrations of alcohols, esters, ethers and carboxylic acids from terpenoids and flavonoids, and to the C-O-C stretching of aromatic ethers and polysaccharides [39,40]. The peaks found in the 600-900 cm⁻¹ range corresponded to C-H out-of-plane bending vibrations, and the peak at 524 cm⁻¹ could correspond to C=C torsion and ring torsion of phenyl, or to C-N stretching from secondary amines and amides [40,41]. Generally, similar peaks, but also shifts, reduction or disappearance of peaks from the extract spectrum, could be observed in the FTIR spectra of the AgNPs. The disappearance of peaks could be explained by the participation of these groups only in the reduction process, while modifications in peak position and intensity could indicate the involvement of the corresponding groups not only in the reduction, but also in the stabilization processes [39]. Therefore, biomolecules such as phenolic acids, flavonoids, terpenoids and proteins could participate in the reduction and stabilization of AgNPs. The obtained results were comparable to those obtained by Samira Mohammadalinejhad et al. and by Srećković et al. [16,17]. The first of the aforementioned groups of researchers suggested that polyphenols are involved in the reduction process, while flavonoids, polyphenols, tannins and gallic acid are responsible both for the reduction process and for the stabilization of the nanoparticles. The morphological analysis of AgNPs was performed by TEM (Figure 4) and the corresponding histogram can be found in Figure 5a.
The negative zeta potential (the charge around a moving particle in the colloidal solution in an electric field) of −19.62 mV could have been due to biomolecules found on the AgNPs surfaces, which might determine a repulsion between particles and implicitly prevent agglomeration [33]. This value was in accordance with that obtained by Samira Mohammadalinejhad et al. [16], namely −20 mV. The DLS analysis revealed an average AgNPs hydrodynamic dimension of 133.5 nm (Figure 5b). Other authors obtained values in the 20-138 nm range [17] or an average of 50 nm [16]. The TEM analysis measured only the average diameter of the metallic silver core, so it did not include any coating/stabilizing agent. On the other hand, the DLS analysis measured the dynamic fluctuation and velocity of particles in suspended clusters, that is, the metallic silver core together with the molecules attached to the nanoparticle surface, which could, depending on their structure, undergo a process of solvation and expansion in solution. Therefore, the hydrodynamic diameter estimated by DLS was larger than the size estimated by TEM [33,42,43]. The quantitative elemental composition of the AgNPs was investigated by EDX analysis (Figure 6). Data analysis revealed that the EDX spectra of AgNPs mainly contained a specific and intense peak at 3 keV for Ag (37.52 wt%), but also C (18.83 wt%), O (24.49 wt%) and small quantities of N, Br, Si, Cl and K. Consequently, the results confirmed the synthesis of AgNPs. The presence of other elements might be related to the breakdown of capping agents from the surface of the nanoparticles.
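As a crude illustration of the TEM-DLS gap discussed above, assuming a concentric spherical corona around the metallic core, the apparent shell thickness follows directly from the two reported diameters; this arithmetic is ours, not the paper's:

```python
# Rough, illustrative arithmetic: if the DLS hydrodynamic diameter includes a
# concentric organic corona around the TEM-visible core, the apparent corona
# thickness per side is half the diameter difference.
d_tem = 40.0   # nm, metallic core diameter from TEM
d_dls = 133.5  # nm, hydrodynamic diameter from DLS

print(f"apparent corona thickness: {(d_dls - d_tem) / 2:.1f} nm per side")
```

Since DLS is intensity-weighted and sensitive to solvation and clustering, this ~47 nm figure is only indicative, but it makes the qualitative point that the hydrodynamic size must exceed the core size.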
Figure 6. Zeta potential (a) and EDX spectra (b) of AgNPs.

Antimicrobial Activity
Firstly, the disk diffusion method was used, which is a routine, simple and low-cost antimicrobial susceptibility test. The antimicrobial activity was assessed using two bacterial strains, Staphylococcus aureus (S. aureus) and Pseudomonas aeruginosa (P. aeruginosa), and a fungal strain, Candida albicans (C. albicans). The results are presented in Table 4 (Antimicrobial activity of the extract and of the corresponding AgNPs). The AgNPs presented better activity than the extract against the Gram-positive bacterium and the pathogenic yeast. However, no activity was detected for either sample against the Gram-negative bacterium. Further, the broth microdilution method was applied in order to determine the MIC and MBC values, and, since it is a standardized method, the results could be of clinical relevance [44]. Implicitly, the MIC and MBC/MFC values of the samples against S. aureus ATCC 25923 and C. albicans ATCC 90028 were determined and are presented in Table 5. The MIC was the lowest concentration of the sample at which bacterial growth was completely inhibited after 24 h of incubation at 37°C. In the study carried out by Srećković et al. [17], the MIC value obtained for AgNPs was 0.31 mg/mL for S. aureus and 0.62 mg/mL for C. albicans. In our study, we obtained the same value for S. aureus, but for C. albicans the value was smaller (0.03 mg/mL). To the best of our knowledge, the MBC value for AgNPs synthesized from L. salicaria has not been reported until now. The highest dilution showing 100% inhibition was 0.62 mg/mL for S. aureus and 0.31 mg/mL for C. albicans. For both tested microorganisms, the MIC and MBC/MFC results obtained for the synthesized AgNPs were better than those obtained for the extract. Indeed, most literature studies show better antibacterial activity of AgNPs against Gram-negative bacteria than against Gram-positive bacteria [45]. However, several research works prove the contrary, as in our case. For example, Yage Xing et al. obtained better antibacterial activity against S. aureus, compared to E. coli, for AgNPs formed using mango peel [46]. A similar example is that of AgNPs synthesized using an olive leaf extract [47]. An explanation for the higher susceptibility in the case of Gram-positive bacteria, compared to Gram-negative bacteria, could be related to the differences between the wall structures of the two types of bacteria.
The cell wall of Gram-positive bacteria consists of a thick peptidoglycan layer with teichoic acid and lipoteichoic acid, while the Gram-negative bacterial cell wall is more complex, containing an extra outer lipid membrane with lipopolysaccharides, which could make the entry of hydrophobic substances more difficult [48]. Therefore, a reason for the better antimicrobial action in the case of Gram-positive bacteria could be the more facile interaction between AgNPs and the bacteria [49]. Moreover, the antimicrobial activity depends on the size and shape of the AgNPs. Nanoparticles of smaller sizes, with a diameter of approximately 1-10 nm, have a higher surface-to-volume ratio and more efficient antimicrobial activity, interacting preferentially with bacteria [50]. Truncated triangular AgNPs have the strongest biocidal action, compared to spherical or rod-shaped ones [51]. The mechanism underlying the antibacterial activity has not been fully elucidated, but several explanations are possible, taking into account that AgNPs can adhere to, or pass through, the cell walls/membranes of microorganisms, induce cellular toxicity and ROS generation, or modulate cell signaling. The first proposed mechanism is that, following the electrostatic attraction between AgNPs and the cell membrane, nanoparticles tend to adhere to the membrane, which leads to alteration of the structure and rupture of the cell wall. Moreover, the interaction between AgNPs and sulfur-containing proteins found in the cell wall can lead to a chain reaction, starting with structural modifications, affecting the transport process and increasing permeability, which leads to leakage of cell contents (ions, proteins, reducing sugars) and, sometimes, inhibition of ATP synthesis. The second mechanism consists of the penetration of AgNPs inside the cell and nucleus, which can modify cellular functioning through interaction with cell structures or biomolecules. Thus, destabilization and denaturation of proteins containing thiol groups can occur via interaction with silver ions or AgNPs, and silver ions can interact with disulfide bonds, blocking the active binding sites and leading to functional deficiencies in the microorganisms. Moreover, AgNPs can interact with DNA and, through the reaction of Ag⁺ ions with nucleic acids, can destroy the double helix structure. In the third mechanism, a high concentration of Ag⁺ ions produces cellular oxidative stress through the generation of radicals and ROS. Free radicals can destroy the mitochondrial membrane and can interact with lipids, enhancing lipid peroxidation. ROS generation can lead to hyperoxidation of lipids, proteins and DNA. The last possible mechanism focuses on the modulation of the cellular signaling system, which can lead to modifications in bacterial growth and affect processes at the molecular and cellular levels [52,53].

Antioxidant Activity
Given that Srećković et al. [17] determined the antioxidant potential of an L. salicaria extract and of the corresponding AgNPs by DPPH and ABTS scavenging assays, with the results showing slightly better activity for the extract from the aerial parts (86.38 ± 0.13 µg/mL by the DPPH method and 65.33 ± 2.08 µg/mL by the ABTS test) than for the AgNPs (>100 µg/mL by DPPH and 141.66 ± 17.05 µg/mL by ABTS) [17], we proposed the use of other methods, with different principles. Therefore, our study focused on the inhibition of lipoxygenase and of peroxyl radical-mediated hemolysis, with the results presented in Tables 6 and 7.
The inhibition of lipoxygenase, determined using the modified Malterud method [23], can be explained by the polyphenolic compounds present in the extract, which have the ability to block the activity of lipoxygenase, the enzyme catalyzing the oxidation of linoleic acid, thus reducing the absorbance measured at 234 nm. The inhibition activity was calculated using the following formula:

Inhibition (%) = [(A_E − A_EI) / A_E] × 100

where A_E is the difference between the absorbances of the enzyme solution without inhibitor at 90 and 30 s, and A_EI is the difference between the absorbances of the enzyme-inhibitor solution at 90 and 30 s, respectively. Lipoxygenases (5-, 12-, 15-lipoxygenase) are metalloenzymes that contain ferrous or ferric ions in their catalytic center, depending on the stage of the redox reaction (oxidation or reduction) [54]. Lipoxygenases catalyze the oxidation of unsaturated fatty acids with the formation of lipid peroxides, which can propagate oxidation reactions or cause the oxidation of lipids, proteins and nucleic acids, thereby affecting their biological properties [55]. Uncontrolled activation of these enzymes causes oxidative stress and inflammation and can lead to neurodegenerative diseases, atherosclerosis, diabetes or cancer [56]. The AgNPs show a more intense antioxidant activity than the extract, their EC50 value being also slightly better than that of gallic acid. Their influence on lipoxygenase is most probably achieved through changes in the spatial structure of the enzyme or of its active center [56]. Antioxidant compounds can also block the peroxyl radical synthesis induced by AAPH, with consequent protection of the erythrocyte membrane. The reduction in the concentration of peroxyl radicals determines the decrease of the absorbance measured at 540 nm [24]. The erythrocyte hemolysis inhibition was calculated using the formula:

Inhibition (%) = [(A_AAPH − A_S) / A_AAPH] × 100

where A_S is the absorbance of the sample and A_AAPH is the absorbance of the positive control. AAPH is a prooxidant compound that causes the oxidation of hemoglobin to methemoglobin, consequently inducing hemolysis and possibly affecting the lipophilic structure of the erythrocyte membrane [57]. The synthesized AgNPs were more active than the extract. However, both the extract and the AgNPs can block the prooxidant and hemolytic action of AAPH, without causing hemolysis in its absence. Blocking of AAPH activity is achieved mainly by compounds that possess both hydrophilic groups capable of interacting with AAPH and lipophilic structures that allow passage through the erythrocyte membrane and block the intracellular action of AAPH [57]. Such compounds also have a protective effect on subcellular structures, proteins and DNA and can thus block pathological phenomena caused by oxidative stress [58]. Generally, the prooxidant action of the peroxyl radicals generated by AAPH is blocked by compounds with hydroxyl groups capable of neutralizing and stabilizing the formed radicals [59].

Conclusions
In the present study, AgNPs were obtained via a simple and eco-friendly method, using a robust design, namely the Taguchi method, applied for the first time to identify the optimal synthesis parameters for such nanoparticles. AgNPs were successfully synthesized using AgNO3 as a precursor and an L. salicaria extract as a reducing and capping agent, as demonstrated by FTIR analysis, which also revealed the functional groups found in the extract that are responsible for the obtaining of AgNPs.
Moreover, the synthesis was confirmed by the presence of the SPR band, through both visual observation of the color change and UV-Vis spectroscopy. The presence of silver was highlighted by EDX analysis, and the negative zeta potential indicated a stable AgNPs colloidal solution. The formed nanoparticles showed antimicrobial activity against S. aureus and C. albicans. Besides establishing the optimal reaction conditions using the Taguchi design, the novelty of the research consisted of testing the antioxidant activity through the inhibition of lipoxygenase and of peroxyl radical-mediated hemolysis, which showed promising results for the formed nanoparticles, as well as of the MBC testing. Therefore, further studies are justified, the synthesized AgNPs being potential resources for nanotechnological applications.
9,205
2022-10-01T00:00:00.000
[ "Environmental Science", "Chemistry", "Materials Science", "Biology" ]
RCzechia: Spatial Objects of the Czech Republic

State of the field
The history of spatial data analysis in R is long and respectable (Bivand, 2021). The first packages focusing specifically on providing spatial data originate from the S days (Becker & Wilks, 1993), with maps (Deckmyn, 2022) being one of the oldest packages in continuous use on CRAN (since 2003). The early packages used a pattern of storing spatial data internally, which created a hard limit on the volume and level of detail stored. With the advent of the sp (Pebesma & Bivand, 2005) and later sf (Pebesma, 2018) platforms for handling spatial data, the universe of packages focused on providing spatial data blossomed. There are packages with a global focus, such as rnaturalearth (South, 2017), and with a regional focus, like giscoR (Hernangómez, 2022), oriented at the EU. A number of packages are country-specific, such as tigris (Walker & Rudis, 2022) for the US, or rgugik (Dyba & Nowosad, 2021) for Poland. With currently near-universal and reliable internet access, a new pattern has emerged, with spatial data packages accessing cloud-stored data files as required (caching them within the limits set by the CRAN repository policy) and distributing only lightweight code. In the context of the Czech Republic, there exists the CzechData package (Caha, 2021), with somewhat overlapping functionality but available only on GitHub. The CRAN package czso (Bouchal, 2022) interfaces the API of the Czech Statistical Office (ČSÚ), providing access to statistical data about Czech administrative areas (without the spatial information itself). The package pragr (Bouchal, 2020), available on GitHub, provides geodata about the city of Prague.

Statement of need
No country-specific spatial data package has been published on CRAN for the Czech Republic to date, creating a need that could be filled by global or regional packages only to a limited extent. While there are open data resources available to researchers, mostly in the format of ESRI Shapefiles, these have a number of practical disadvantages. They have to be located and downloaded individually, and their users in an R context face additional hurdles, such as conflicting Coordinate Reference Systems and character encodings. In addition, some publicly available datasets are topologically invalid, and many are too detailed for use by a non-GIS-specialized audience.

Features
The package provides two distinct sets of spatial objects: administrative areas and natural objects. In addition, API interface wrappers are provided for geocoding and reverse geocoding functions.
Administrative area polygons:
• republika: borders of the Czech Republic as a polygon
• kraje: 14 regions (NUTS3 areas) of the Czech Republic, with Prague as a special case
• okresy: 76 districts (LAU1 areas) of the Czech Republic, with Prague as a special case
• orp_polygony: 205 municipalities with extended powers, with Prague as a special case
• obce_polygony: 6,258 municipalities of the Czech Republic
• obce_body: the same as obce_polygony, but centroids instead of polygons

The country (NUTS1), region (NUTS3) and district (LAU1) administrative level objects from RCzechia are functionally equivalent to those provided by the giscoR package (Hernangómez, 2022) for the Czech Republic. This is expected, as GISCO objects are standardized at the EU level, and the Czech Republic is an EU member state. For some of the most commonly used objects (republika, kraje, okresy, reky and volebni_okrsky) an optional low-resolution version is also included. To access it, specify the value of the resolution parameter as "low" (the default is "high").

Utility functions:
• geocode: geocodes an address to coordinates
• revgeo: reverse geocodes coordinates to an address

The utility functions interface the API of the Czech State Administration of Land Surveying and Cadastre (ČÚZK) and are therefore limited in scope to the area of the Czech Republic. The package code is thoroughly tested, with 100% test coverage. In addition to testing code, the package implements unit tests on the integrity of the datasets provided, such as topological validity and internal consistency between different levels of administrative units.
1,001
2023-03-02T00:00:00.000
[ "Mathematics" ]
Functional Proteomics of the Active Cysteine Protease Content in Drosophila S2 Cells* The fruit fly genome is characterized by an evolutionary expansion of proteases and immunity-related genes. In order to characterize the proteases that are active in a phagocytic Drosophila model cell line (S2 cells), we have applied a functional proteomics approach that allows simultaneous detection and identification of multiple protease species. DCG-04, a biotinylated, mechanism-based probe that covalently targets mammalian cysteine proteases of the papain family was found to detect Drosophila polypeptides in an activity-dependent manner. Chemical tagging combined with tandem mass spectrometry permitted retrieval and identification of these polypeptides. Among them was thiol-ester motif-containing protein (TEP) 4 which is involved in insect innate immunity and shares structural and functional similarities with the mammalian complement system factor C3 and the pan-protease inhibitor alpha2-macroglobulin. We also found four cysteine proteases with homologies to lysosomal cathepsin (CTS) L, K, B, and F, which have been implicated in mammalian adaptive immunity. The Drosophila CTS equivalents were most active at a pH of 4.5. This suggests that Drosophila CTS are, similar to their mammalian counterparts, predominantly active in lysosomal compartments. In support of this concept, we found CTS activity in phagosomes of Drosophila S2 cells. These results underscore the utility of activity profiling to address the functional role of insect proteases in immunity. Recognition, uptake, and destruction of pathogens by phagocytic cells are fundamental principles of innate immunity that are conserved in organisms from insects to humans (1). An important step after the uptake of material into the endosomal/phagosomal pathway is proteolysis to destroy and clear pathogens (2). Proteolysis along the endocytic pathway has been extensively studied in the mammalian system (3)(4)(5). Endocytic proteases, in particular cysteine proteases of the cathepsin (CTS) 1 family, are actively involved in MHC class II-restricted antigen presentation and, therefore, play a key role in adaptive immunity (6). Members of the CTS family, such as CTS B (7), CTS S (8,9), CTS F (10), CTS L (11), and, more recently, CTS Z (12), have been implicated in this process (13,14). Little is known about the function of CTS proteases in other biological processes that might explain both their evolutionary conservation and expansion in multicellular organisms. Several CTS-deficient strains of mice show dramatic phenotypes that indicate involvement in bone remodeling, development, and apoptosis (15,16). In addition, dysregulation of CTS activity is observed in various cancers and correlates with tumor progression (17,18). Cysteine proteases have been mainly studied in mice and humans (13,14); however, functional redundancy and significantly overlapping specificities of CTS make it difficult to explore individual CTS function in mice and humans. Therefore, less complex organisms such as the fruit fly, Drosophila melanogaster, may be helpful in gaining further insight into the functional roles of members of this protease family. In the fruit fly, and relative to the genome as a whole, proteases and immunity-related genes are characterized by specific gene expansions (19 -21). Drosophila melanogaster possesses specialized blood cells, or hemocytes, that phagocytose microbes in a manner similar to their mammalian counterparts (1,22,23). 
In particular, the hemocyte-like Schneider's Drosophila line 2 (S2) has emerged as a model system to study phagocytosis and for the identification of phagocytic receptors that recognize and mediate the engulfment of microbes (24,25). S2 cells, therefore, are a relevant model for the application of a functional proteomic approach that permits simultaneous detection and identification of multiple protease activities. Several approaches are now available to assess protease activity on a broad scale. Chemistry-based functional proteomics is a promising method used to assign putative gene products to an enzyme family (26). The rationale behind the design of such a functional proteomics approach is to assess the activity profiles of protease species in complex biological samples rather than merely their presence or absence. Identification and activity measurement can be performed without the need for purification of the individual enzymes under study. In addition, this approach allows activity measurement of several enzyme species at the same time, a task that would be more cumbersome using classical biochemical approaches. The success of this approach has been demonstrated for serine proteases (27), deubiquitinating enzymes (28), and cysteine proteases (29). In this study, we have used DCG-04, a chemical probe that specifically and covalently targets mammalian cysteine proteases. Our objective was to identify DCG-04-reactive polypeptides in Drosophila phagocytes and to characterize and monitor their activities.

EXPERIMENTAL PROCEDURES
Cells, Culture Conditions, and Reagents-A particularly phagocytic sub-line of S2 cells was obtained from A. Pearson (Department of Pediatrics, Massachusetts General Hospital, Boston, MA) (23,30). S2 cells were grown at 26-27°C in Schneider's Insect Medium (Sigma-Aldrich, St. Louis, MO) or Schneider's Drosophila Medium (Invitrogen Life Technologies, Carlsbad, CA) supplemented with 10% heat-inactivated fetal bovine serum (tested to support insect cell growth; Invitrogen Life Technologies) and maintained at a density of 2 × 10⁵ to 8 × 10⁶ cells/ml in 12.5 ml per T75 plastic tissue culture flask to ensure exponential growth. Under these conditions, S2 cells divided every 16-18 h. Chemicals were obtained from Sigma-Aldrich, unless indicated otherwise. Active Site Labeling of Cysteine Proteases in Cell Lysates and Detection by Streptavidin Blotting-JPM-565 and DCG-04 were synthesized and purified as previously described (31,32). Cells were harvested by centrifugation at 4°C, washed in Robb's Drosophila PBS, pH 6.8 (52), and cell pellets corresponding to 5 × 10⁷ cells were frozen at −80°C. Cell pellets were thawed on ice and lysed in 100 µl lysis buffer, pH 5.0 (50 mM sodium acetate, pH 5.0, 5 mM magnesium chloride, 0.5% Nonidet P-40), incubated for 30 min and centrifuged for 15 min at 13,000 × g to remove nuclei. The protein concentration of the cell extract was measured using the Bio-Rad Protein Assay (Hercules, CA) with BSA as standard (average yield, 1-2 mg protein per 5 × 10⁷ cells). Cell lysate (25 µg protein) was incubated with DCG-04 for 60 min at 37°C. When JPM-565 was used as competitor, cell lysates were pre-incubated with JPM-565 for 15 min before the addition of DCG-04. The reaction was stopped by the addition of double-concentrated reducing Laemmli sample buffer and boiling for 10 min. Samples were analyzed by SDS-PAGE and transferred to polyvinylidene difluoride membrane (Immobilon P; Millipore, Billerica, MA).
After blocking overnight in PBS, pH 7.2, containing 10% non-fat dry milk, the membrane was incubated with streptavidin-horseradish peroxidase (1:2,500; Amersham Pharmacia Biotech, Uppsala, Sweden) in PBS containing 0.2% Tween 20 for 60 min at room temperature, followed by extensive washing in PBS-Tween. DCG-04-reactive polypeptides were detected by enhanced chemiluminescence (Western Lightning; PerkinElmer, Wellesley, MA). Isolation of DCG-04-Reactive Enzymes for MS-Based Identification-Cell lysate from 5.4 × 10⁹ cells was prepared at pH 5.0, as described above. The lysate was divided into five samples, each corresponding to approximately 35 mg of protein. The samples were pre-cleared with 150 µl bead volume streptavidin-agarose (Pierce, Rockford, IL) to decrease nonspecific background activities. Three samples were incubated with 5 µM DCG-04 for 60 min at 37°C, one sample was pre-treated with 25 µM JPM-565 before the addition of DCG-04, and one sample received neither DCG-04 nor JPM-565. To stop the reaction, SDS was added to a final concentration of 0.5%, and the samples were incubated for 5 min at 95°C (in order to denature the DCG-04-tagged proteins and make the biotin moiety accessible). Affinity enrichment of DCG-04-reactive polypeptides was performed using streptavidin-agarose, as previously described (12,32). Briefly, the buffer was exchanged to pull-down buffer (50 mM Tris, pH 7.4, 150 mM NaCl) using a PD-10 column (Amersham Pharmacia Biotech), and each sample was incubated with 150 µl bed volume streptavidin-agarose for 60 min at room temperature. After washing with excess pull-down buffer, bound polypeptides were eluted from the beads by the addition of 100 µl reducing Laemmli sample buffer and boiling for 10 min. DCG-04-reactive proteins were separated by SDS-PAGE (12.5%). One DCG-04-reacted sample and the control samples were stained with silver (see Fig. 3), 5% of each of these samples was used for streptavidin blotting (not shown), and the two remaining DCG-04-reacted samples were pooled and stained with Coomassie Brilliant Blue G as follows: the gel was fixed in H2O/25% isopropanol/10% acetic acid for 45 min, stained with 10% acetic acid/0.006% Coomassie Brilliant Blue G overnight, and destained with 10% acetic acid for 2 h. Visible bands from the Coomassie- and silver-stained gels were excised and processed for tandem mass spectrometry analysis. Tryptic Digestion and Analysis by Electrospray Tandem Mass Spectrometry-In-gel tryptic digestion was performed essentially as described (33). The samples were subjected to a nanoflow liquid chromatography system (CapLC; Waters, Medford, MA) equipped with a PicoFrit column (75-µm ID × 9.8 cm; New Objective, Woburn, MA), at a flow rate of ~150 nl/min using a nanotee (Waters), 16:1 split (initial flow rate 5.5 µl/min). The liquid chromatography system was directly coupled to a quadrupole time-of-flight micro-tandem mass spectrometer (Micromass, Manchester, UK). Analysis was performed in survey scan mode, and parent ions with intensities greater than 7 were sequenced in MS/MS mode using MassLynx 4.0 software (Micromass). MS/MS data were processed and subjected to database searches using ProteinLynx Global Server 1.1 software (Micromass) against Swiss-Prot TrEMBL/New (www.expasy.ch), or Mascot (Matrix Science, www.matrixscience.com/ (34)) against the National Center for Biotechnology Information (NCBI) (www.ncbi.nlm.nih.gov/) non-redundant database (NCBInr).
Alternatively, the Drosophila protein database from NCBI was also used for Drosophila-specific protein identification. In all searches, oxidation of methionine and carbamidomethylation of cysteine residues were considered as modifications. Matches for proteins were accepted as significant if scores were more than 75 using ProteinLynx Global Server 1.1, or based on the Mascot Probability Mowse Score. At least two peptides were found for each identified protein species. Information about the Drosophila gene products was obtained from the FlyBase database (FlyBase.bio.indiana.edu) and NCBI (www.ncbi.nlm.nih.gov/entrez/query.fcgi).

Active Site Labeling of Cysteine Proteases in Phagosomes of Live Cells Using Latex Beads-Phagosomal cysteine protease labeling was adapted from Lennon-Dumenil et al. (12). S2 cells were plated on 12-well plates (10⁶ cells/well) 1 day before the experiment. Streptavidin-coated latex beads (2-μm diameter; Polysciences, Warrington, PA) were incubated with DCG-04 for 60 min at room temperature. Beads were washed three times with PBS and resuspended in complete culture medium. DCG-04-loaded beads were added to the cells and incubated for 30 min at 27°C (pulse). Phagocytosis was halted at 4°C; cells were collected, pelleted at 2,000 rpm (325 × g) for 3 min, and resuspended in complete medium, and non-internalized beads were removed by repeated centrifugation after layering on a cushion of heat-inactivated fetal bovine serum (latex beads remain at the interphase). Cells were resuspended in complete medium and incubated in 12-well plates at 27°C for various periods of time (chase). At the desired time points, cells were harvested on ice, pelleted, and lysed in hot SDS sample buffer containing 100 μM JPM-565, boiled for 10 min, cooled to room temperature, and passed through a 22.5-gauge hypodermic needle to shear DNA before pelleting the released (phagocytosed) latex beads at 13,000 rpm (13,000 × g) for 5 min. Samples were analyzed by SDS-PAGE and streptavidin blotting. Phagocytic uptake of streptavidin beads was controlled by light microscopy, and uptake of deep-blue dyed latex beads (0.8-μm diameter; Sigma-Aldrich) by light and electron microscopy (data not shown).

RESULTS

Protease Activity Profiling in Drosophila S2 Cells Using the Mechanism-Based, Epoxide Inhibitor-Derived Chemical Probe DCG-04-The epoxide inhibitor JPM-565 has been used as an irreversible inhibitor with broad reactivity toward cysteine proteases (31,35). Based on the structure of this compound, a derivative was developed to include a biotin affinity tag (DCG-04; Fig. 1A) (32). This compound permits targeting of active cysteine proteases present in crude extracts via covalent attachment to the active site cysteine residue (Fig. 1B, (29)). In order to test whether DCG-04 reacts with Drosophila proteins, we incubated this probe with crude cell extracts prepared from S2 cells at pH 5.0. DCG-04-reactive polypeptides were separated by SDS-PAGE and visualized by streptavidin blotting. As shown in Fig. 2, we observed labeling in the molecular mass ranges of 26-29 and 33-37 kDa. Preheating of the extract before addition of DCG-04 abolished labeling (Fig. 2). Competition for labeling with increasing amounts of non-biotinylated probe (JPM-565) resulted in decreased recovery of these proteins (Fig. 2). Thus, DCG-04 appeared to react specifically, in a dose- and conformation-dependent manner, with several Drosophila polypeptides.
Identification of DCG-04-Reactive Polypeptides in Drosophila S2 Cell Extracts-In order to identify the polypeptides labeled by DCG-04, we used a strategy based on streptavidin-agarose affinity purification and tandem mass spectrometry, as outlined in Fig. 1B. Drosophila S2 cell extracts were either left untreated or incubated with DCG-04 in the presence or absence of non-biotinylated competitor, and DCG-04-reactive polypeptides were purified on streptavidin-agarose. SDS-PAGE of the eluted material, followed by silver staining, revealed several DCG-04-reactive polypeptides whose abundance was reduced when a five-fold excess of non-biotinylated competitor was included (Fig. 3). Subsequent analysis using tandem mass spectrometry (liquid chromatography-MS/MS) identified a total of 20 protein species for which at least two peptide matches were found. We grouped these polypeptides into two categories, based on the absence (Table I) or presence (Table II) of a catalytically active thiol group. Due to the absence of such a thiol group, the polypeptides listed in Table I are likely to be nonspecific contaminants co-purified in our isolation procedure. They appear to be mostly metabolic enzymes but also include cytoskeletal proteins and streptavidin. Less frequently observed background proteins included catalase, which is involved in oxidative stress and aging. We found six polypeptides that might exert reactivity toward DCG-04. All had an N-terminal signal sequence indicating export into a secretory compartment (Table II). One of these was protein disulfide isomerase (PDI), an enzyme containing two thiols as active site residues. PDI catalyzes disulfide formation in newly synthesized polypeptides in the endoplasmic reticulum in a two-step reaction, which includes two alternative intermediate mixed disulfides between the enzyme and substrate (36). Therefore, epoxysuccinyl compounds such as DCG-04 and JPM-565 may specifically bind to reactive thiol groups of such nature. Two polypeptides at ~120 and 160 kDa were identified as thiol ester-containing protein (TEP) 4 (Fig. 3 and Table II). TEP 4 belongs to a group of five thiol ester-containing Drosophila proteins that show substantial structural and functional similarities, including a highly conserved thiol ester motif, both to a central component of the mammalian complement system, factor C3, and to a pan-protease inhibitor, alpha2-macroglobulin (37). TEP 4 contains an internal beta-cysteinyl-gamma-glutamyl thiol ester (38). It is possible that DCG-04 forms a covalent adduct with the proteolytically activated nascent state of the thiol ester (39), although the details of this reaction mechanism remain unclear. TEP 4 appears to be expressed constitutively in Drosophila larvae but is significantly up-regulated after challenge with pathogens (37). In a mosquito hemocyte-like cell line, a related protein, TEP 1, serves as a complement-like opsonin and promotes phagocytosis of some Gram-negative bacteria. This activity is dependent on its internal thiol ester bond (40).

Active Cysteine Proteases with Homology to Mammalian Cathepsins-We identified multiple polypeptide species that showed significant homologies to mammalian CTS proteases L, B, F, and K, respectively (Table II and Fig. 3). The CTS F-like protease has not been previously described as an active entity but is annotated in the D. melanogaster database FlyBase as a putative protease.
The CTS B- and K-like proteases have never been isolated and characterized as proteolytic enzymes from Drosophila, although expression of the CTS B-like protease has been observed in Drosophila embryos (41) and a K-like enzyme in the flesh fly Sarcophaga peregrina (42). A CTS L-like protease has been reported to be expressed in embryonal and larval midgut (43) and was found in granules in the hemocyte-like Drosophila cell line mbn-2, suggesting lysosomal localization (44). The latter three enzymes have been implicated in general digestive processes, yolk degradation, and immunity, and are strongly conserved in other non-vertebrate species (see FlyBase and Refs. 45 and 46). Sequence alignment and comparison of these CTS-like proteases and their corresponding mouse homologs revealed a high degree of conservation centering on two regions in the mature parts of the molecules, around the active site residues cysteine (C), histidine (H), and asparagine (N) (Fig. 4A). The N-terminal pro-regions were less well conserved but of similar length, except in the case of the Drosophila CTS K equivalent (26-29-kDa protease), which contains an insertion of significant length not found in the pro-region of the mammalian homolog CTS K. A dendrogram constructed on the basis of overall sequence similarities shows that, with the exception of CTS K, evolutionary diversification of the CTS subgroups occurred before the ancestors of mammalian and invertebrate CTS proteases diverged from each other (Fig. 4B). These data suggest an evolutionarily conserved function for these proteases and only later recruitment of these molecules for adaptive immunity in vertebrates.

Drosophila Cathepsins Are Most Active at pH 4.5 and Can Be Detected in Phagosomes of Live Cells-We used activity profiling to determine under which pH conditions these enzymes are active. Extracts from S2 cells were prepared at pH levels ranging from 4.5 to 7.5. DCG-04 labeling resulted in distinct polypeptide profiles in a highly pH-dependent manner (Fig. 5A, third panel). At a pH of 4.5, two major bands of 26-29 and 33-37 kDa were observed, consistent with the experiments described above for pH 5.0. This suggests that these polypeptides correspond to the Drosophila CTS homologs identified by tandem mass spectrometry. Labeling of these polypeptides was specific, because they were not observed in heat-treated samples (Fig. 5A, second panel) and were competed by an excess of non-biotinylated compound (Fig. 5A, fourth panel). The marked pH-dependence of these enzymes indicates a function within the endocytic compartment. In order to address a possible phagosomal localization of these Drosophila CTS proteases in live cells, we used an approach recently pioneered by our laboratory that monitors the proteolytic environment in phagosomes of mammalian macrophages and dendritic cells (12). Streptavidin-latex beads loaded with DCG-04 are efficiently internalized by phagocytic cells and allow sampling of the proteolytic environment during phagosome formation and maturation. Drosophila S2 cells were incubated for 0.5 h with DCG-04-loaded beads. The non-internalized beads were removed, and cells were lysed after 1-6 h of incubation (chase). To prevent post-lysis reactivity of DCG-04, cells were lysed by the addition of boiling SDS sample buffer containing an excess of non-biotinylated competitor. Streptavidin blotting revealed labeling in the 26-29 kDa range, which increased over time (Fig. 5B).
This labeling profile was dependent on DCG-04-loaded beads, because no labeling was observed in the absence of latex beads or in the presence of latex beads that had not been loaded with DCG-04. These results suggest that CTS activity can be detected by activity profiling in phagosomal compartments of living Drosophila S2 cells.

DISCUSSION

Chemical targeting of cysteine proteases by means of a mechanism-based probe combined with tandem mass spectrometry has allowed rapid identification of multiple active protease species in a Drosophila cell line with phagocytic properties. Four of the 12 total CTS-like proteases encoded in the Drosophila genome were identified (Tables II and III). Examination of S2 cell-specific expressed sequence tags available in the FlyBase database revealed that only six of these 12 CTS-like genes appear to be expressed in S2 cells, including the four proteases identified by tandem mass spectrometry (Table III and Fig. 4C). Inspection of the sequence of the other two genes revealed that the active site residue cysteine was not conserved, precluding reaction with DCG-04 (Fig. 4C). The Drosophila genome contains a CTS H-like protease that is not expressed in S2 cells and lacks the catalytic cysteine residue (Table III and Fig. 4C). This is in contrast to the mammalian system, in which CTS H was shown to react with DCG-04 (12).

[FIG. 4. Structure-based amino acid sequence alignment of Drosophila and mouse CTS proteases. A, Sequence alignments were performed using ClustalW (www.ch.embnet.org/software/ClustalW.html) and Vector NTI (Informax, Frederick, MD) software. Conserved residues are indicated under the sequence by: *, conserved; :, strongly homologous residues; ·, homologous residues. The catalytic cysteine and histidine residues are indicated by dots above the sequence, and the conserved region around the catalytic cysteine is boxed. The beginning of the mature enzymes, i.e., the beginning and end of single chains or heavy and light chains (by similarity to human enzymes), is indicated above the sequence by angles.]

We isolated not only cysteine proteases but also other enzymes that catalyze thiol-based reactions, such as PDI and TEP 4 (Table II). Although it is possible that DCG-04 directly reacted with these polypeptides (see "Results"), PDI and TEP 4 may have been isolated as a consequence of their association with cysteine proteases that were targeted by DCG-04. The observation that DCG-04 can be used to isolate insect TEP will be useful in monitoring their activity. Insect TEP have attracted considerable interest lately because they play a role in innate immunity in the mosquito vector Anopheles by limiting the multiplication of malaria parasites inside the vector organism (40).³ We observed two forms of Drosophila CTS L and F with molecular masses of 26-29 and 33-37 kDa that reacted with DCG-04 (Fig. 3 and Table II). Glycosylation has been described for mammalian CTS proteases (47) and could account for this difference in molecular mass. However, it seems more likely that the 33-37-kDa forms correspond to pro-forms of these enzymes that have retained all (CTS L) or part (CTS F) of their pro-peptides, as suggested for human and mouse CTS L (47). In line with this possibility, we found that the 33-37-kDa form of Drosophila CTS L contained the tryptic peptide AADESFKGVTFISPAHVTLPK (residues 106-126), which is located N-terminally of the predicted cleavage site for the mature enzyme (Fig. 4A and see Ref. 13).
Labeling of CTS pro-forms by epoxide-based probes such as DCG-04 has been observed previously in mammalian cells (48). This phenomenon can be attributed to the small molecular size of the chemical probe and to its covalent reaction mechanism. Although the pro-peptide efficiently prevents the access of large polypeptide substrates to the active site, it may not be bound tightly enough to prevent the small-sized DCG-04 molecule from entering the active site and irreversibly attaching to the proenzyme. This interpretation may also explain the results obtained by activity profiling in S2 cell phagosomes (Fig. 5B). In this subcellular compartment, DCG-04-reactive polypeptides were exclusively found in the 26-29 kDa range, suggesting that only fully processed, mature CTS proteases are exposed to phagocytosed material. Whereas the Drosophila CTS L-, B-, and F-like enzyme species shared significant overall sequence identities with their mouse counterparts (Fig. 4, A and B), CTS K (26-29 kDa protease) appears to have unique properties. This enzyme showed only 28% overall sequence identity to mouse CTS K and 27% to mouse CTS L (Fig. 4). From a single precursor, it is processed to two separate 26 and 29 kDa polypeptides, which then dimerize via disulfide bonds, as shown for its close homolog in the flesh fly (42). The 29-kDa subunit corresponds to a single-chain CTS such as mammalian CTS F or K (Swiss-Prot TrEMBL annotation, www.expasy.ch), whereas the 26-kDa subunit has no homolog in mammals, suggesting a specific role of the 26-29-kDa protease in invertebrates. DCG-04 activity profiling at different pH levels was used to show that Drosophila CTS proteases are most active under acidic conditions, suggesting that these enzymes exert their biological function in late endocytic or lysosomal cell compartments (Fig. 5). In contrast, some mammalian CTS proteases, such as CTS L, B, and K, are also active at a neutral pH level (49), which may reflect a role in a wider range of biological functions in higher vertebrates. However, the predominant presence of CTS in phagosomes of both classes of organisms (insects and mammals), as well as the phylogenetically conserved branching into CTS subgroups L, F, and B (Fig. 4B), is consistent with a general role in lysosomal proteolysis, which precedes their function in antigen processing and presentation in mammalian cells.

³ E. A. Levashina and F. C. Kafatos, personal communication.

[FIG. 5. Drosophila CTS proteases show a narrow activity optimum under acidic conditions and can be detected in the phagosome. Proteins were separated by SDS-PAGE on 12.5% gels, and labeled bands were visualized by streptavidin blotting. A, Cytoplasmic extracts from Drosophila S2 cells were prepared at the indicated pH level. Samples were incubated in the presence or absence of DCG-04 and a 10-fold excess of non-biotinylated competitor JPM-565, followed, or not, by preheating for 5 min at 100°C. Labeled polypeptides were separated by SDS-PAGE (12.5%) and visualized by streptavidin blotting. B, Labeling of phagosomal cysteine proteases by DCG-04, immobilized on streptavidin beads, and phagocytosed by live Drosophila S2 cells. Streptavidin latex beads (0.2 μm) were loaded with 10 μM DCG-04. S2 cells were incubated with DCG-04-loaded or unloaded latex beads for the indicated times (pulse). Non-phagocytosed beads were removed, and cells were incubated for the indicated times (chase).]
Drosophila, which lacks an adaptive immune system, is an appropriate organism to investigate whether CTS proteases have a function in innate immunity during a challenge with bacterial or fungal pathogens. However, examination of published genome-wide microarray data indicated no alteration of Drosophila CTS L, K, B, and F mRNA expression levels in response to immune stimuli (50,51). In addition, preliminary experiments with crude extracts from S2 cells exposed to immune stimulators revealed no significant differences in DCG-04-labeling profiles (data not shown). It will therefore be interesting to assess Drosophila CTS activity profiles in a more refined way, for example, by immunoprecipitation of specific DCG-04-reactive CTS proteases from phagosomes of living cells exposed to such stimuli. Taken together, our results demonstrate the usefulness of this functional proteomics approach and provide a basis for the simultaneous monitoring of multiple protease activities in insect models at different developmental stages or during an immune challenge.
Inflammatory Kidney and Liver Tissue Response to Different Hydroxyethylstarch (HES) Preparations in a Rat Model of Early Sepsis

Background

Tissue hypoperfusion and inflammation in sepsis can lead to organ failure, including of the kidney and liver. In sepsis, the mortality of acute kidney injury increases by more than 50%. Which type of volume replacement should be used is still an ongoing debate. We investigated the effect of different volume strategies on inflammatory mediators in kidney and liver in an early sepsis model.

Material and Methods

Adult male Wistar rats were subjected to sepsis by cecal ligation and puncture (CLP) and assigned to three fluid replenishment groups. Animals received 30 mL/kg of Ringer's lactate (RL) for 2 h; thereafter, they received RL (75 mL/kg), hydroxyethyl starch (HES) balanced (25 mL/kg), containing malate and acetate, or HES saline (25 mL/kg) for another 2 h. Kidney and liver tissue was assessed for inflammation. In vitro, rat endothelial cells were exposed to RL, HES balanced, or HES saline for 2 h, followed by stimulation with tumor necrosis factor-α (TNF-α) for another 4 h. Alternatively, cells were exposed to malate, acetate, or a mixture of malate and acetate, reflecting the concentration of these substances in HES balanced. Pro-inflammatory cytokines were determined in cell supernatants.

Results

Cytokine mRNA in kidney and liver was increased in CLP animals treated with HES balanced compared to RL, but not after application of HES saline. MCP-1 was 3.5-fold (95% CI: 1.3, 5.6) (p<0.01) and TNF-α 2.3-fold (95% CI: 1.2, 3.3) (p<0.001) upregulated in the kidney. Corresponding results were seen in liver tissue. TNF-α-stimulated endothelial cells co-exposed to RL expressed 3529±1040 pg/mL MCP-1 and 59±23 pg/mL CINC-1 protein. These cytokines increased by 2358 pg/mL (95% CI: 1511, 3204) (p<0.001) and 29 pg/mL (95% CI: 14, 45) (p<0.01), respectively, when exposed to HES balanced instead. However, no further upregulation was observed with HES saline. PBS supplemented with acetate increased MCP-1 by 1325 pg/mL (95% CI: 741, 1909) (p<0.001) and CINC-1 by 24 pg/mL (95% CI: 9, 38) (p<0.01) compared to RL. Malate as well as HES saline did not affect cytokine expression.

Conclusion

We identified HES balanced, and specifically its component acetate, as a pro-inflammatory factor. How much this additional burden on kidney and liver function contributes to the sepsis-associated inflammatory burden in early sepsis needs further evaluation.

Introduction

Sepsis remains a major worldwide healthcare problem with consistently high mortality. Acute kidney injury (AKI) is a most prominent and severe complication of sepsis, occurring in 23% to 51% of patients depending on the severity of the sepsis [1] and causing more than 50% of the cases of AKI in patients treated in an ICU [2]. The pathophysiological understanding of sepsis-associated AKI has evolved from a simplistic hypovolemia to more complex concepts that better reflect the multifactorial nature of the condition. While the sepsis-induced decrease in global renal blood flow (RBF) plays a major role in the development of AKI, there is now evidence that AKI may also occur under conditions of renal hyperperfusion [3]. However, hemodynamic changes associated with low cardiac output, leading to renal hypoperfusion and ischemia-reperfusion injury, do remain a major pathogenetic factor of sepsis-associated AKI.
Consequently, fluid resuscitation continues to be a mainstay of treatment in septic AKI, and preservation of 'physiological' renal blood flow for the prevention of further injury to the kidney seems imperative even in the absence of ischemia as the initial pathogenic factor. For fluid resuscitation, the choice of fluid has been a widely debated topic, with type, timing, and amount all being relevant factors potentially impacting kidney function [4]. Synthetic and natural colloids as well as crystalloids are the most commonly used types of fluids; however, adverse effects on the kidney attributed to hydroxyethyl starch (HES) preparations have been a concern for a considerable time [5]. This led to the rationale of this work, namely the supposition that HES could have a pro-inflammatory effect. This study specifically investigates potential differences between two third-generation HES preparations (HES 130/0.42/6% saline and HES 130/0.42/6% balanced solution) compared to Ringer's lactate (RL) with respect to their pro-inflammatory potential in an early sepsis model in rats. With the advancing understanding of the pathogenesis of sepsis-induced organ dysfunction, the importance of inflammatory changes has been emphasized [6], and it is consequently of considerable interest to identify a potential introduction of additional inflammatory changes induced by the administration of treatments like HES preparations. The study compares the effects of the two different HES preparations and RL on tissue expression of inflammatory markers in kidney and liver, as well as on the urinary markers creatinine and α-microglobulin, in a cecal ligation and puncture (CLP) sepsis model in the rat. As the endothelium is the first compartment exposed to inflammatory stimuli in sepsis, the inflammatory response of endothelial cells exposed directly to the HES preparations was furthermore assessed in vitro using rat endothelial cells.

Animal experiment

The animal experiments and the methods/procedures applied were approved by and performed in accordance with the local animal care committee (Veterinäramt des Kantons Zürich), approval number 132/2007. Specific pathogen-free male Wistar rats (Charles River Laboratories, Germany) weighing 350 g to 450 g were used for the experiments. Animals were housed in standard cages at 22 ± 1°C under a 12/12 h light/dark scheme. Food and water were available ad libitum. For the induction of sepsis, rats were anesthetized using intraperitoneal ketamine (100 mg/kg body weight) and xylazine (5 mg/kg body weight). After shaving and disinfecting the lower quadrants of the abdomen, a midline incision of approximately 4 cm length was performed. The cecum was identified and the corresponding mesenteric membrane dissected. A cecal ligation was positioned consistently at the distal third, i.e., at 30-40% of the cecum. The perforation of the cecum was performed using an 18G needle, first puncturing the anti-mesenteric side, then continuing the needle penetration through the lumen to a second perforation on the mesenteric side. A small amount of fecal material was then gently extruded from both perforation holes. After repositioning the cecum within the abdominal cavity, the abdomen was closed in two layers. Sham animals were treated in an identical fashion except for the CLP. Access for intravenous fluid resuscitation was obtained via a sterile 22G catheter (BD Insyte, Becton Dickinson SA, Madrid, Spain) inserted into the tail vein of the animals. Animals were monitored continuously and were kept at 37°C.
Anesthesia/sedation was maintained by repeated subcutaneous administration of ketamine and xylazine (25 mg/kg and 1.25 mg/kg body weight) every 45 min. Ketamine is a well-known analgesic, used in order to minimize suffering of the animals. Fluid resuscitation was performed using the following preparations and according to the following regimen: Ringer's lactate (RL-Ringerlactat, B. Braun), HES 130/0.42/6% saline (HES saline-Venofundin 6%, B. Braun), or HES 130/0.42/6% balanced solution (HES balanced-Tetraspan 6%, B. Braun). The carrier solution of the HES products is an electrolyte solution: in the case of HES saline it is normal saline, while in the case of HES balanced it is a balanced electrolyte solution containing, apart from sodium and chloride, also calcium, potassium, magnesium, acetate, and malate. All animals received RL one hour after the procedure at a volume of 30 mL/kg i.v. Two hours after the procedure, animals received either RL at a volume of 75 mL/kg, or HES saline or HES balanced at a volume of 25 mL/kg. The animals were sacrificed after 4 h while under deep anesthesia with ketamine/xylazine. They were exsanguinated (by incising the abdominal aorta and inferior vena cava) and the heart was flushed with ice-cold phosphate-buffered saline (PBS); kidneys and liver were excised, snap-frozen in liquid nitrogen, and stored at -80°C for RNA extraction. At least 4 animals were included in each sham group and 9 in each CLP arm of the study. The detailed experimental setup is displayed in Fig 1A.

In vitro experiment

Rat pulmonary artery endothelial cells, a gift from Dr. Roscoe Warner (Assistant Professor, Department of Pathology, University of Michigan at Ann Arbor, Ann Arbor, MI; Head: Dr. Peter A. Ward) [7,8], were grown at 37°C in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 1% penicillin/streptomycin, and 1% 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES, all from GIBCO, Carlsbad, CA). Confluent cells were incubated overnight in starving medium containing 1% FBS. The next morning, cells were exposed to a 1:1 dilution of medium and either RL, HES saline, or HES balanced, respectively. After 2 h, the incubation solution was replaced by the same solution, supplemented with 0.5 ng/mL tumor necrosis factor-α (TNF-α, BD Pharmingen, San Diego, CA). Control cells were exposed to phosphate-buffered saline only. After 6 h, cell supernatants were harvested and stored at -20°C for further analysis. An analogous experiment was performed to determine the impact of the respective composition of the HES preparations (saline only or addition of malate and/or acetate). Ringer's lactate was the reference volume replacement. Concentrations used were 24 mmol sodium acetate, 5 mmol sodium malate, or a mixture of 24 mmol sodium acetate plus 5 mmol sodium malate. The amounts of acetate and malate were chosen according to the concentrations of acetate and malate in HES balanced. An overview of the experimental setting is given in Fig 1B.

Assessment of inflammation using enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (PCR)

To be able to compare our results with other animal studies focusing on systemic or inflammatory processes, we decided to focus on inflammatory mediators. Furthermore, determination of these markers allowed us to differentiate between systemic effects and inflammatory scenarios in the various tissues, where the cytokines are expressed as well. This aspect of compartmentalization seems to be essential.
The final reaction volume was 15 μl, using a GeneAmp 5700 System (ABI, Life Technologies, CH). The comparative Ct method was used for quantification of gene expression. Ct values of the samples were normalized to 18S.

Creatinine and α-microglobulin determination

Creatinine was assessed using a DriChem 4000i Analyzer (Fuji Film, Tokyo, Japan) and the corresponding DriChem slides (Fuji) according to the manufacturer's instructions. α-microglobulin was determined with an ELISA kit according to the manufacturer's protocol (Hölzel Diagnostika, Cologne, Germany).

Statistical analysis

Statistical analysis was performed using R, Version 3.2.2 (R Development Core Team, 2015), with the packages lme4 [9], lmerTest [10], and ggplot2 [11]. Linear mixed model analyses were used to assess the influence of the different types of treatment on tissue inflammatory mediator expression. Exposure to RL was defined as the reference category. The tabular results show coefficients and corresponding 95% confidence intervals of the mixed models. Figures illustrate boxplots with medians and quartiles. Whiskers represent 5% and 95% confidence intervals. A p-value <0.05 was considered significant.

Results

With the aim to study a potential treatment-associated inflammatory effect in very early stages of AKI, the present short-term model of CLP was chosen. Correspondingly, mild but consistent increases in inflammatory markers were observed in the CLP RL-treated compared to sham-operated RL-treated animals. In the kidney, MCP-1 and CINC-1 expression increased 2.8- and 3.5-fold compared to sham-operated animals (both p<0.01, Table 1). This was less pronounced for TNF-α levels in CLP animals (1.4-fold increase, p<0.01). Intercellular adhesion molecule-1 (ICAM-1) was not significantly more expressed in CLP than in sham animals. The inflammatory response of CLP compared to sham animals was similarly observed in liver tissue: a 51.7-fold increase of MCP-1 was seen in the liver of CLP animals (Table 2). The CLP procedure had a marginal effect on CINC-1 mRNA expression (0.6-fold upregulation, p<0.05) and no quantifiable effect on TNF-α and ICAM-1 levels. This early-phase CLP model resulted in a mild, consistent inflammatory tissue response in kidney and liver, corresponding to early systemic changes as a consequence of the local intra-abdominal inflammatory process. It is noteworthy that, despite this relatively mild inflammation, the sensitivity of the model was sufficiently high to distinguish different degrees of inflammatory reactions to the various volume repletion treatments.

[Fig 2: MCP-1 (A), CINC-1 (B), TNF-α (C), and ICAM-1 (D) expression in kidney tissue under the different fluid resuscitation regimens.]

In order to further evaluate a possible impact of the administration of the different HES preparations on kidney function, creatinine and α-microglobulin levels in the urine were measured (Table 3, Fig 3). In rats undergoing CLP, creatinine increased by 1.23 mg/dL (p<0.01) and α-microglobulin by 11 μg/mL (p<0.05) compared to sham-operated animals. A significant influence of the different fluid resuscitation regimens was not apparent.

The inflammatory response to different fluid resuscitation treatments in the liver

Cecal ligation and puncture animals showed a significant increase in hepatic mRNA expression of MCP-1 by 118.7-fold (p<0.001), of TNF-α by 134.4-fold (p<0.01), and of ICAM-1 by 512.3-fold (p<0.001) when treated with HES balanced compared to CLP animals with application of RL (Table 3, Fig 4). This difference was not seen when animals were given HES saline.
Overall, and similar to the results in kidney tissue, exposure to HES balanced resulted in an accentuation of the inflammatory response in septic animals.

[Fig 3: Urinary creatinine (A) and α-microglobulin (B).]

Determination of the inflammation-triggering component of volume repletion treatment with HES balanced in vitro

The in vivo results, demonstrating a significantly increased inflammatory response in both kidney and liver tissue of animals with HES balanced volume resuscitation in an early septic condition, raised the question about a potentially causative component of the HES balanced volume repletion treatment. For further exploration, the in vivo model was thus translated into an in vitro model of endothelial inflammation for a detailed examination.

[Fig 4: Hepatic MCP-1 (A), CINC-1 (B), TNF-α (C), and ICAM-1 (D) expression.]

In analogy to the animal experiments, an inflammatory response in endothelial cells was induced by stimulation with TNF-α, and expression levels of MCP-1 and CINC-1 were assessed. Cells exposed to TNF-α and RL expressed 3529±1040 pg/mL MCP-1 and 59±23 pg/mL CINC-1 protein. As illustrated in Table 4, exposure to HES balanced increased the secretion of inflammatory mediators in TNF-α-stimulated endothelial cells by an additional 2358 pg/mL for MCP-1 (p<0.001) (Fig 5A) and an additional 29 pg/mL for CINC-1 (p<0.01) (Fig 5B). The levels of MCP-1 and CINC-1 were not significantly affected by HES saline in TNF-α-stimulated endothelial cells (Table 4). In order to further characterize a potential association with the electrolyte composition of the volume resuscitation treatments, the cells were either exposed to RL or to PBS with or without addition of acetate and/or malate. Exposure to HES balanced or acetate resulted in higher MCP-1 concentrations in the supernatants of the endothelial cells. Compared to cells exposed to TNF-α and RL, mean MCP-1 levels were elevated by 842 pg/mL after incubation with HES balanced (p<0.01, Fig 5A) and by 1059 pg/mL after exposure to acetate-containing PBS (p<0.01, Fig 5C). No significant effect on MCP-1 secretion was observed after incubation with HES saline or malate in endothelial cells (Table 5). Consistent with the results for MCP-1, HES balanced further increased CINC-1 expression by 29 pg/mL (p<0.01) and acetate by 21 pg/mL (p<0.01) compared to RL (Fig 5D).

Discussion

In summary, our in vivo results show increased levels of inflammation in the cecal ligation and puncture model. This was uninfluenced by RL and HES saline, but was further accentuated by HES balanced. Similar observations could be made in endothelial cells in vitro: TNF-α induced an inflammatory response in rat endothelial cells, which was uninfluenced by HES saline but was further accentuated in the presence of HES balanced. Testing the solutions' electrolyte composition, acetate at the concentration found in HES balanced led to a more pronounced inflammation, while malate did not. Sepsis remains a major worldwide healthcare problem with consistently high mortality. The condition can be generally described as the occurrence of an infection together with a systemic response to and manifestation of the infection [12]. The body's systemic response to a pathogen may result in conditions of various levels of severity that have been categorized in three stages [12,13]: sepsis, severe sepsis, and septic shock. Sepsis is characterized by a Systemic Inflammatory Response Syndrome (SIRS), the presence of at least 2 out of its 4 defining criteria, and a proven or suspected infection.
Severe sepsis further comprises sepsis-induced organ dysfunction or tissue hypoperfusion characterized by, e.g., elevated lactate, oliguria, or hypotension. Septic shock represents the most severe stage, with persistent hypotension and multiorgan dysfunction and failure. Pathophysiologically, sepsis is understood as a syndrome characterized by a complex series of events with immune system activation resulting in pro- and anti-inflammatory reactions, humoral and cellular responses, and abnormalities in the microcirculation predisposing to impaired general oxygen delivery, tissue hypoxia, multiorgan dysfunction, and ultimately death [14,15].

[Fig 5: MCP-1 (A and B) and CINC-1 (C and D) levels after exposure to fluid formulations (RL, HES balanced, and HES saline; A and C) and to anions present in balanced solutions (B and D). Results are presented as boxplots with medians and quartiles; whiskers represent 5% and 95% confidence intervals.]

Sepsis-induced organ dysfunction and failure are the most severe progression steps of the initial systemic inflammatory response and usually the cause of death when severe sepsis and shock are present. Target organs include the lungs, kidneys, and liver. Sepsis-associated AKI reaches a mortality of 70%, compared to a 45% mortality rate in AKI alone [1], and thus constitutes a severe medical problem. The specific mechanisms leading to organ failure in sepsis, and also the varying degree of vulnerability of different organs, are still not well understood. The systemic inflammatory response, tissue hypoperfusion associated with hypotension [16,17], and disseminated intravascular coagulation (DIC) [18] seem to represent the major causative elements leading to organ dysfunction [18]. Fluid resuscitation as a therapeutic intervention might be associated with an inflammatory response in itself and may thus represent a potentially aggravating factor in the development of the complex pathophysiologic situation in sepsis. The reported safety profile of HES preparations has been characterized by adverse effects predominantly on the coagulation system and renal function [19]. HES preparations have seen development toward decreasing molecular weights and molar substitution, arriving at the current third-generation preparations, in order to improve their benefit-risk profile with reduced adverse effects on these two organ systems. The initially higher molecular weight aimed at a particularly long residence time in the circulation, an effect that receives less emphasis today in favor of better pharmacodynamic control and potentially reduced adverse effects. The inflammatory response to tissue injury has been a main research focus of the authors, in particular in the lung [20][21][22][23]. In the present study, the research question centered on inflammatory changes in an early in vivo sepsis model induced by cecal ligation and puncture, investigating kidney and liver tissue following various forms of fluid resuscitation, which is a hallmark of sepsis treatment used to improve hypotension and the associated organ hypoperfusion. As a secondary question, the hypothesized differences in the inflammatory response to the various fluid resuscitation regimens were to be analyzed for a potential association with distinct fluid components. An in vitro model of endothelial inflammation was chosen to discern differences in the inflammatory response to the administration of different components.
We found a significantly more pronounced inflammatory mediator expression in animals treated with HES balanced compared to those receiving HES saline or RL. This effect could be successfully transferred, reproduced, and further analyzed in vitro in a model of inflamed rat endothelial cells. Our analysis revealed that acetate, but not malate or HES saline, is associated with an increased expression of inflammatory mediators. There is a still ongoing debate about the use of colloids in critically ill patients: concerns have been raised with regard to possible adverse outcomes in patients receiving HES solutions, especially in septic patients [24][25][26], and even a withdrawal of the marketing authorization for HES has been demanded [27]. However, HES continues to be widely used for intravascular volume maintenance or augmentation in daily clinical routine [28]. In contrast to septic patients, no differences in the incidence of death or acute kidney failure have been found in surgical and trauma patients receiving 6% HES [29], for which reason a more nuanced view on the use of HES solutions has been proposed [30]. In this model of CLP-induced septic peritonitis in rats, the research focus was on the very early expression of prognostic inflammatory mediators impacting severity and outcome of sepsis [31,32]. No effect of HES saline was observed on inflammatory mediator expression in kidney and liver tissue. Interestingly, a clear difference between HES balanced and HES saline was found: HES balanced provoked a significantly more pronounced expression of inflammatory mediators compared with HES saline. This is in line with results from an earlier study investigating the effects of balanced and non-balanced colloids in acute endotoxemia [23]. In the same study, it was hypothesized that balanced solutions containing acetate and malate could be specifically associated with conditions of severe inflammation. The present data now provide further experimental evidence that acetate in particular appears to be associated with an aggravation of the inflammatory response. This observation is corroborated by data showing other pro-inflammatory as well as myocardial-depressant and hypoxemia-promoting characteristics of acetate [33][34][35], which led, for example, to the discontinuation of its use in fluids for renal replacement therapy. Functional renal testing does not support our findings at the level of inflammatory mediators. This is likely due to the fact that inflammation-induced functional impairment of organs is observed at a much later time point. We tried to differentiate between an inflammatory process induced by application of the balanced solution and a possible inflammatory-toxic effect, which might be seen immediately in changes in creatinine and α-microglobulin values.

Strengths and limitations of the experimental approach

Our experimental approach has several strengths, starting with the use of an early-stage in vivo model of polymicrobial sepsis. In addition to confirming previous results in a different sepsis model [23], we not only investigate differences in the inflammatory response to several fluid resuscitation solutions in vivo but also identify individual components associated with the stronger inflammatory response to HES balanced in our in vitro approach. A certain limitation of the study is that it, like all models, represents a simplification of the scenario in the patient.
This, however, allows discriminating factors impacting a certain disease in smaller study groups.

Conclusions and outlook

The data presented in this experimental study indicate that not HES balanced per se but specifically the acetate in HES balanced is a component contributing to a more pronounced inflammatory response to the administration of HES balanced in an early sepsis model. This additional pro-inflammatory element could potentially add to the overall burden of the local and systemic inflammatory response in sepsis and may therefore impact negatively on the course of the disease.
Time Series Clustering: A Complex Network-Based Approach for Feature Selection in Multi-Sensor Data

Introduction

The primary goal of the industrial Internet of Things (IoT) has been linking operations and information technology to gain insight into production dynamics. This potential flexibility rests on a layer of technologies made of distributed networks of physical devices embedded with sensors, edge computers, and actuators, used to identify, collect, and transfer data among multiple environments. Such IoT-based cyber-physical systems (CPS) establish a direct integration of engineering systems into digital computer-based ones, where measurement and sensing technologies play an essential role in capturing the real world. The data collected are then the main ingredient to lift efficiency, accuracy, and economic benefits, with the added merit of minimal human intervention [1,2]. A consequence of such transformation is that the sensor layer, customarily used to measure, is now the means to map the actual status (of the process) into the cyber-world to derive process information, collect it in databases, and use it as a basis for models which can be adapted (ideally by self-optimization) to real situations. In this vein, CPS provides a holistic view of engineering systems and enables a bi-directional physical-to-digital interaction via multimodal interfaces [3,4]. Unlike the classic concepts derived from control theory, CPS forms the basis to describe even complex interactions and thus to anticipate process deviations or to support the interpretation and prediction of system behavior.

The rest of the paper is organized as follows. In Section 2, we describe the background and related works. In Section 3, we present the unsupervised FSS method. In Section 4, we describe the case study. In Section 5, we report experimental results to support the proposed approach. In Section 6, we summarize the present work and draw some conclusions.

Background

In this section we review the main approaches used for FSS and time series clustering, together with some complex network analysis tools that represent the constituent parts of the proposed method.

Feature Subset Selection

A possible classification of Feature Subset Selection (FSS) methods distinguishes embedded, wrapper, and filter approaches. The embedded methods [27] are usually related to learning algorithms, such as decision trees or neural networks, that perform feature selection as part of their training process. The wrapper methods [28], instead, use a predetermined learning algorithm to evaluate the goodness of feature subsets. Since a training and evaluation phase is required for every candidate subset, the computational complexity of these methods is large. By contrast, the filter methods [29] do not rely on learning algorithms but select features based on measures such as correlation. By combining filter and wrapper methods, it is possible to evaluate features through a predetermined learning algorithm without the computational complexity of wrappers, thanks to an initial feature filtering [30]. In the context of filter methods, it was found that clustering-based methods outperform traditional feature selection methods, and they also remove more redundant features with high accuracy [21,22]. Within filter methods, clustering algorithms are used to group features according to their similarity. From a dimensionality reduction perspective, one or more representative variables for each cluster must then be identified. For example, in [31], the cluster center is used as representative, while, in [32], a single optimal variable is selected based on the R² correlation coefficient; a minimal sketch of this general scheme is shown below.
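As a concrete illustration of this clustering-based filter scheme, the following minimal Python sketch groups features by correlation and picks one representative per cluster. The representative criterion used here (highest mean correlation with its own cluster) merely stands in for the criteria of [31,32]; the function name, the hierarchical clustering choice, and the synthetic data are our own assumptions.

```python
# Minimal sketch of a clustering-based filter FSS step, assuming a data
# matrix X of shape (n_samples, n_features). The representative choice is
# illustrative, not the exact procedure of [31] or [32].
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def select_representatives(X, n_clusters):
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature similarity
    dist = 1.0 - corr                             # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=n_clusters, criterion="maxclust")
    representatives = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # pick the feature most correlated, on average, with its own cluster
        scores = corr[np.ix_(members, members)].mean(axis=1)
        representatives.append(int(members[np.argmax(scores)]))
    return sorted(representatives)

X = np.random.default_rng(0).normal(size=(500, 12))
print(select_representatives(X, n_clusters=4))
```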
FSS approaches can be further categorized into supervised and unsupervised. The former perform feature selection using data class labels, while the latter rely on intrinsic properties of the data. Recently, unsupervised FSS methods have been attracting ever-growing interest due to the widespread occurrence of unlabeled datasets in many applications [33,34]. For this reason, the present paper focuses only on unsupervised methods.

Time Series Clustering

Conventional clustering algorithms (partitional, hierarchical, and density-based) are unable to capture temporal dynamics and sequential relationships among data [35]. For these reasons, tremendous research efforts have been devoted to identifying time series-specific algorithms. The most common approaches involve modifying the conventional clustering algorithms to adapt them to deal with raw time series, or converting time series into static data (feature vectors or model parameters) so that conventional clustering algorithms can be directly applied. The former class of approaches includes direct methods, also called raw data-based [36][37][38][39], while the latter refers to indirect methods that can be distinguished into model-based [40][41][42] and feature-based [43][44][45]. In addition, according to the way clustering is performed, the algorithms can be grouped into whole time-series clustering and subsequence clustering, the latter being a valid alternative to reduce the computational costs by working separately on time series segments. Detailed reviews of clustering algorithms for time series can be found in [46][47][48]. It has recently been demonstrated that network approaches can provide novel insights into the understanding of complex systems [23,49], outperforming classical methods in the ability to capture arbitrary clusters [50]. In particular, the weakness of conventional techniques resides in the use of distance functions, which only allow finding clusters of a predefined shape. In addition, they identify only local relationships among neighboring data samples, being indifferent to long-distance global relationships [50]. Examples of network approaches for time series clustering can be found in the literature, making use of DTW and hierarchical algorithms [51] and community detection algorithms [50].

Evaluation Metrics for Unsupervised FSS

To evaluate an unsupervised FSS method for time series, two main indicators are widely used: redundancy and information gain. In particular, the redundancy of information among a set of time series y = (y_1, y_2, ..., y_N) is quantified by the metric W_I, which is defined as

W_I(y) = \sum_{i=1}^{N} \sum_{j=i+1}^{N} MI(y_i, y_j),   (1)

where MI(y_i, y_j) is the mutual information between time series y_i and y_j [16]. A low value of W_I is associated with a set of time series that are maximally dissimilar to each other. It is also possible to consider the rate of variation of this metric, represented by the redundancy reduction ratio (RRR):

RRR = \frac{W_I(y) - W_I(\bar{y})}{W_I(y)},   (2)

where \bar{y} denotes the set of time series retained after feature selection. The information gain, instead, is computed in terms of the Shannon entropy H [52], which reads as

H(X) = -\sum_{i} p(x_i) \log p(x_i),   (3)

where X is the data matrix associated to the set of time series y, every row being a sample of the observations and every column a different time series, and x_i is the ith row of such matrix. The information gain is computed as the variation of entropy between the original time series y and the time series after feature selection, \bar{y}. If the rate of variation is considered, it is possible to define the information gain ratio (IGR):

IGR = \frac{H(X) - H(\bar{X})}{H(X)},   (4)

where X and \bar{X} are the data matrices associated to y and \bar{y}. A sketch of how these indicators can be estimated in practice is given below.
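The following is a minimal sketch of how these indicators can be estimated, assuming the pairwise-sum form of W_I given above and simple histogram-based probability estimates; the bin counts, the discretization scheme, and the toy selected subset are illustrative assumptions, not prescriptions from [16] or [52].

```python
# Illustrative estimators for the redundancy (W_I, RRR) and entropy
# (H, IGR) metrics defined above. Histogram binning is an assumption.
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

def redundancy(X, bins=16):
    """W_I: sum of pairwise mutual information over all column pairs of X."""
    def discretize(v):
        return np.digitize(v, np.histogram_bin_edges(v, bins=bins))
    cols = [discretize(X[:, j]) for j in range(X.shape[1])]
    return sum(mutual_info_score(cols[i], cols[j])
               for i, j in combinations(range(X.shape[1]), 2))

def entropy(X, bins=8):
    """Shannon entropy of the rows of X, with each column binned separately."""
    binned = np.stack([np.digitize(X[:, j],
                                   np.histogram_bin_edges(X[:, j], bins=bins))
                       for j in range(X.shape[1])], axis=1)
    _, counts = np.unique(binned, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))
X_sel = X[:, [0, 2, 4]]                      # hypothetical selected subset
RRR = (redundancy(X) - redundancy(X_sel)) / redundancy(X)
IGR = (entropy(X) - entropy(X_sel)) / entropy(X)
print(f"RRR = {RRR:.3f}, IGR = {IGR:.3f}")
```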
Complex Network Analysis

In this section, we illustrate the Complex Network Analysis (CNA) methods used in the present study, namely visibility graphs, local network measures, and community detection algorithms.

Visibility Graph

The visibility graph algorithm is a method to transform time series into complex network representations. This concept was originally proposed in the field of computational geometry for the study of mutual visibility between sets of points and obstacles, with applications such as robot motion planning [53]. The idea was extended to the analysis of univariate time series [24], making it possible to map a time series into a network that inherits several properties of the time series itself. Moreover, visibility graphs are able to capture hidden relations, merging complex network theory with nonlinear time series analysis [23]. In particular, the visibility graph algorithm maps a generic time series segment y_n^t = ((s_n^t)_1, (s_n^t)_2, ..., (s_n^t)_L) into a graph by considering a node (or vertex) for every observation (s_n^t)_i, for i = 1, ..., L, where L is the number of observations in the segment. The edges of the graph, instead, can be generated using two different algorithmic variants: the natural visibility graph and the horizontal visibility graph. The natural visibility graph algorithm [24] generates an edge with unitary weight between two nodes if their corresponding observations in the series are connected by a straight line that is not obstructed by any intermediate observation. Formally, two nodes a and b have visibility if their corresponding observations (s_n^t)_a = (t_a, v_a) and (s_n^t)_b = (t_b, v_b) satisfy, for any intermediate observation (s_n^t)_c = (t_c, v_c) such that a < c < b,

v_c < v_b + (v_a - v_b) \frac{t_b - t_c}{t_b - t_a},   (5)

where t_a and t_b represent the timestamps of the two samples, while v_a and v_b are the actual observed values. A computationally more efficient algorithmic variant is the horizontal visibility graph [54,55], based on a simplified version of Equation (5). Visibility graphs can be enhanced by considering their weighted version [56], where the weight between any pair of visible nodes (s_n^t)_a = (t_a, v_a) and (s_n^t)_b = (t_b, v_b) is the Euclidean distance between the corresponding observations:

w_{ab} = \sqrt{(t_b - t_a)^2 + (v_b - v_a)^2}.   (6)

A schematic illustration of the weighted visibility graph construction is shown in Figure 1.

Local Network Measures

Networks can be composed of many nodes and edges, making the analysis and the unveiling of hidden relationships very challenging. For this reason, global and local network measures are used, respectively, to extract synthetic topological information from the whole network and to study the role nodes play in its structure. The latter purpose can be tackled using different centrality measures. The first historically proposed one is the degree centrality [25], which allows detecting the most influential nodes within the network. This measure is based on the simple concept that the centrality of a vertex in a network is closely related to the total number of its connections. In particular, the weighted degree centrality of a node i in a graph reads as

(k_n^t)_i = \sum_{j=1}^{L} w_{ij},   (7)

where L is the number of nodes, w_{ij} is the weight of the edge connecting nodes i and j, and k_n^t = ((k_n^t)_1, (k_n^t)_2, ..., (k_n^t)_L) is also called the degree sequence of the graph; a minimal code sketch of this construction follows below.
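The following is a minimal O(L²) Python sketch of the natural weighted visibility graph and its weighted degree sequence, following the visibility condition (5) and the edge weight (6) above. It is an illustrative implementation, not the authors' code; uniform sampling of the segment is assumed.

```python
# Natural weighted visibility graph and weighted degree sequence.
import numpy as np
import networkx as nx

def weighted_visibility_graph(values):
    t = np.arange(len(values), dtype=float)   # uniform sampling assumed
    g = nx.Graph()
    g.add_nodes_from(range(len(values)))
    for a in range(len(values)):
        for b in range(a + 1, len(values)):
            c = np.arange(a + 1, b)
            # natural visibility: every intermediate sample lies below the
            # straight line joining (t_a, v_a) and (t_b, v_b)
            line = values[b] + (values[a] - values[b]) * (t[b] - t[c]) / (t[b] - t[a])
            if c.size == 0 or np.all(values[c] < line):
                w = float(np.hypot(t[b] - t[a], values[b] - values[a]))
                g.add_edge(a, b, weight=w)
    return g

def degree_sequence(g):
    """Weighted degree centrality of every node, i.e., the degree sequence."""
    return np.array([d for _, d in g.degree(weight="weight")])

segment = np.sin(np.linspace(0, 4 * np.pi, 96))  # e.g., one 24 h segment at 15 min
k = degree_sequence(weighted_visibility_graph(segment))
print(k[:5])
```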
Another measure is the eigenvector centrality [57], which is used for determining elements that are related to the most connected nodes. The betweenness centrality [58], instead, is able to highlight which nodes are more likely to lie on the network's communication paths, and, finally, the closeness centrality [58] measures how quickly information can spread from a given node.

Community Detection

Since the last century, networks have been widely used to represent and analyze a large variety of systems, from social networks [59,60] to time series. One of the drivers has been the growing interest in detecting groups of nodes (communities) that are strongly connected by high concentrations of edges (relations), sharing common properties or playing similar roles in the network [61][62][63]. In particular, nodes that are central in a community may be strongly influential on the control and stability of the group, while boundary nodes are crucial in terms of mediation and exchanges between different communities [64]. Many community detection methods have been proposed to date, and a possible classification includes traditional, modularity-based, spectral, and statistical inference algorithms. Traditional methods include graph partitioning [65,66], which selects groups of predefined size by minimizing the number of inter-group edges; distance-based methods [67], where a distance function is minimized starting from local network measures; and hierarchical algorithms [68], which produce multiple levels of groupings by evaluating a similarity measure between vertices. Modularity-based methods [69][70][71], instead, try to maximize the Newman-Girvan modularity [72], which evaluates the strength of a division into communities. One of the most popular methods is Louvain's method [26]. This algorithm is based on a bottom-up approach where groups of nodes are iteratively created and then aggregated into larger clusters. In particular, nodes are initially considered as independent communities, and the best cluster partition is identified by moving single nodes to different communities in search of a local maximum of the modularity. Then, a new network is constructed by modeling clusters as graph vertices and by computing edge weights as the sum of the connection weights between adjacent nodes belonging to different communities. These steps are iteratively repeated until convergence, corresponding to a maximum of modularity. Another category of community detection methods are the spectral algorithms [73], which detect communities by using the eigenvectors of matrices such as the Laplacian matrix of the graph. Finally, statistical inference algorithms [74,75] aim at extracting properties of the graph based on hypotheses involving the connections between nodes.

Visualization

Community detection algorithms are typically integrated with exploratory network tools in order to improve the network visualization [76]. These tools become essential to give insight into the network structure, by revealing hidden structural relationships that may otherwise be missed. As described in [77], there is a large variety of specialized exploratory network layouts (e.g., force-directed, hierarchical, circular, etc.) based on different criteria. Among them, force-directed layouts are extensively applied in the identification of communities with denser relationships, owing to their capability to capture the modular aspect of the network; a brief sketch combining Louvain communities with such a layout follows below.
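A brief sketch of this combination, using NetworkX's implementation of Louvain's method together with the spring (Fruchterman-Reingold) layout. It assumes networkx >= 2.8, where louvain_communities is available, and uses a stock demo graph rather than the similarity graph built later in the paper; matplotlib is used only for display.

```python
# Louvain community detection followed by a force-directed layout.
import networkx as nx
import matplotlib.pyplot as plt

g = nx.karate_club_graph()                        # placeholder demo graph
communities = nx.community.louvain_communities(g, weight="weight", seed=42)
node_color = {n: i for i, com in enumerate(communities) for n in com}

# spring_layout implements the Fruchterman-Reingold force-directed scheme:
# nodes repel like charged particles while edges pull like springs
pos = nx.spring_layout(g, weight="weight", seed=42)
nx.draw_networkx(g, pos, node_color=[node_color[n] for n in g.nodes],
                 cmap=plt.cm.tab10, with_labels=False, node_size=80)
plt.axis("off")
plt.show()
print(f"{len(communities)} communities, modularity = "
      f"{nx.community.modularity(g, communities, weight='weight'):.3f}")
```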
Visualization Community detection algorithms are typically integrated with exploratory network tools in order to improve the network visualization [76]. These tools become essential for gaining insight into the network structure, as they reveal hidden structural relationships that might otherwise be missed. As described in [77], there is a large variety of specialized exploratory network layouts (e.g., force-directed, hierarchical, circular, etc.) based on different criteria. Among them, force-directed layouts are extensively applied in the identification of communities with denser relationships, owing to their capability to capture the modular aspect of the network. An example of a force-directed layout is the Fruchterman-Reingold algorithm [78], which considers nodes as mass particles and edges as springs between the particles; the goal is to minimize the energy of this physical system in order to find the optimal configuration. This process is influenced only by the connections between nodes; thus, in the final configuration, the position of a node cannot be interpreted on its own, but has to be compared to the others. Evaluation Metrics Community detection algorithms can be evaluated through several external or internal indicators, either customary for clustering or specially designed for community detection in networks [79]. External indicators, which evaluate the clustering results based on ground truth data, include purity [80], the Rand index [81], and normalized mutual information [82]. On the other hand, internal indices, which rely on information intrinsic to the clustering and are specifically designed for networks, are the modularity [72] and the partition density [83]. Methods This section discusses the proposed method, starting from the problem of time series clustering up to the task of unsupervised FSS. Given a set of $N$ time series $Y = \{y_1, y_2, \ldots, y_N\}$, the main steps of the proposed clustering approach are summarized here (a compact code sketch of Steps e-h follows below).

a. Remove time series noise through a low-pass filter.
b. Segment each time series $y_n$ into consecutive non-overlapping intervals $s_n^1, s_n^2, \ldots, s_n^T$ of fixed time amplitude $L$, where $T$ is the number of segments extracted for each time series.
c. Transform every signal segment $s_n^t$ ($t = 1, \ldots, T$ and $n = 1, \ldots, N$) into a weighted natural visibility graph $G_n^t$.
d. Construct a feature vector $k_n^t = ((k_n^t)_1, (k_n^t)_2, \ldots, (k_n^t)_L)$ for each visibility graph $G_n^t$, where $(k_n^t)_i$ is the degree centrality of the $i$-th node in the graph and $k_n^t$ is the degree sequence of the graph.
e. Define a distance matrix $D^t$ for every $t$-th segment ($t = 1, \ldots, T$), where the entry $d_{ij}^t$ is the Euclidean distance between the degree centrality vectors $k_i^t$ and $k_j^t$. Every matrix gives a measure of how different every pair of time series is in the $t$-th segment.
f. Compute a global distance matrix $D$ that covers the full time period $T$, where the entry $(i, j)$ is computed as $d_{ij} = \frac{1}{T}\sum_{t=1}^{T} d_{ij}^t$, averaging the contributions of the individual distance matrices associated with every segment.
g. Normalize $D$ between 0 and 1, making it possible to define a similarity matrix $S = 1 - D$, which measures how similar every pair of time series is over the full time period.
h. Build a weighted graph $C$ considering $S$ as an adjacency matrix.
i. Cluster the original time series by applying a community detection algorithm to the graph $C$ and visualize the results through a force-directed layout.

Figure 2 illustrates the flowchart of the methodology. After the initial stages of data filtering (Step a) and time series segmentation (Step b), for the transformation of every signal into the network domain (Step c) we used natural weighted visibility graphs. The natural variant was preferred to the horizontal one because it is able to capture properties of the original time series in higher detail, avoiding simplified conditions. The weighted variant, on the other hand, is used to magnify the spatial distance between observations that have visibility, and thus to avoid binary edges in favor of weighted edges in the visibility graph.
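Under the notation above, Steps e-h admit a compact implementation sketch. This is an illustrative reading of the workflow, not the authors' exact code; in particular, the normalization of D by its maximum entry is one plausible way to realize Step g.

```python
import numpy as np
import networkx as nx

def similarity_graph(degree_sequences):
    """Steps e-h: degree_sequences has shape (T, N, L), holding the degree
    sequence of each of the N series in each of the T segments."""
    T, N, _ = degree_sequences.shape
    D = np.zeros((N, N))
    for t in range(T):
        K = degree_sequences[t]                    # (N, L) feature vectors
        diff = K[:, None, :] - K[None, :, :]       # pairwise differences
        D += np.sqrt((diff ** 2).sum(axis=-1))     # Step e: Euclidean D^t
    D /= T                                         # Step f: average over t
    D /= D.max()                                   # Step g: scale to [0, 1]
    S = 1.0 - D                                    # similarity matrix
    np.fill_diagonal(S, 0.0)                       # no self-loops
    return nx.from_numpy_array(S)                  # Step h: weighted graph C

C = similarity_graph(np.random.rand(4, 5, 10))     # T=4 segments, N=5 series
print(C.number_of_nodes(), round(C.size(weight="weight"), 2))
```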
Since we used natural weighted visibility graphs to map time series into networks, for the extraction of a feature vector for each signal segment (Step d) we considered the weighted degree centrality sequence of the network, as suggested in [84], because it is able to fully capture the information content of the original time series [25,85]. Then, after the construction of the segment distance matrices $D^t$ and the normalized global similarity matrix $S$ together with its graph representation $C$ (Steps e-h), we used the modularity-based Louvain method in Step i for community detection, since it is extremely fast and performs well in terms of modularity. To achieve a modular visualization of the clusters detected by the discussed method and of their mutual connections, we used a force-directed algorithm, namely the Fruchterman-Reingold layout, as a graphical representation. Finally, for specific unsupervised FSS purposes, we considered a representative parameter for each cluster. Such parameters were identified based on their importance within the communities, by considering the signals with the highest total degree centrality in their respective groups. Every part of the proposed approach was developed in Python 3.6 [86], using the Numpy [87] and NetworkX [88] packages. Case Study This section deals with the case study considered for the application of the proposed method: an internal combustion engine used in industrial cogeneration (combined heat and power, CHP). The CHP system consists of a four-stroke ignition engine (P = 1032 kW) fueled with vegetable oil, coupled to a three-phase synchronous generator. The electricity produced is used to meet the self-consumption of the plant, and the production surplus is fed into the grid. Heat is recovered both from the engine cooling water circuit and from the exhaust gases. In particular, the heat exchanged with the engine cooling water (t = 65-80 °C) is used both to meet part of the plant heating requirement and to preheat the fuel before the injection phase. The return water from the plant is cooled by a ventilation system consisting of four fans (P = 15 kW). The exhaust gases, after being treated in a catalyst, are conveyed into a boiler of 535 kW thermal power, which is used to produce steam at about 90 °C used by different production lines. A schematic representation of the system is shown in Figure 3. The system is equipped with a sensor network for condition monitoring and control purposes that samples every minute, for a total of 90 physical quantities measured at different points. The data used for the case study span the period from 25 June 2014 to 5 May 2015. The early preprocessing phase involved the removal of the constantly flat parameters and of the cumulative signals, thus reducing the number of starting parameters to 78. The final list of monitored CHP plant variables considered for the analysis is reported in Table 1. In the preprocessing phase, outliers caused by sensor errors were also removed. To deal with the unusual dynamics linked to system shutdowns, observations where the active power of the system was zero were filtered out. Afterwards, we resampled the data every 15 min to filter constant signal intervals and reduce the amount of measurements processed by the algorithm. The resulting data matrix used as input for the analysis had 30,240 rows and 78 columns. Finally, we built time series segments including 24 h of observations to capture the typical daily cycle of the plant.
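The preprocessing just described can be sketched as follows. The column names and the synthetic minute-level data below are placeholders, not the plant's actual sensor tags.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the 1-min sensor data (3 of the 78 channels)
idx = pd.date_range("2014-06-25", periods=7 * 24 * 60, freq="1min")
df = pd.DataFrame(np.random.rand(len(idx), 3), index=idx,
                  columns=["active_power", "T0", "T1"])

df = df[df["active_power"] > 0]                    # drop shutdown observations
df = df.resample("15min").mean().dropna()          # resample every 15 minutes
daily = [g for _, g in df.groupby(df.index.date)]  # 24-h segments
print(len(daily), daily[0].shape)
```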
Results This section provides a detailed discussion of the experimental results obtained by the proposed approach, followed by a comparison with two traditional time series clustering methods. Figure 4 shows the plot of the 78 standardized signals during a representative period of about two months. Data were extracted over a total measuring time of almost 11 months. The total dataset was then analyzed by applying the method described in Section 3. After the application of a low-pass filter for noise removal (Step a), in Steps b-d of the workflow the time series were segmented into non-overlapping intervals $s_n^t$, then mapped into natural visibility graphs $G_n^t$, and finally feature vectors were extracted in terms of degree sequences $k_n^t$. Afterwards, in Steps e-g, a global distance matrix $D$ was computed by combining the contributions of all the distance matrices $D^t$, followed by the definition of the similarity between all pairs of time series. The resulting similarity matrix $S$ is shown in Figure 5. As per Step h, the similarity matrix $S$ is represented in the form of a weighted graph, also called the similarity graph $C$, where each node corresponds to a specific signal and the edge weights quantify pairwise similarities between time series. To carry out the community detection phase, only the most important edges were taken into account. In particular, we performed edge pruning by filtering out the pairwise similarities lower than the second quantile of their probability distribution. Then, as per Step i, by means of the Louvain algorithm (see Section 2.2.3), we identified 12 different communities within the filtered similarity graph, which globally cover 70 parameters. Table 2 provides the detail of the variables contained in each cluster with reference to the parameter IDs presented in Table 1. The eight signals shown in Figure 6 were not clustered, since they were characterized by independent dynamics. This subset includes engine lube oil parameters, i.e., carter pressure, sump level, and pressure; generator parameters, i.e., power factor and reactive power; parameters in the fuel primary storage, i.e., tank level and pressure; and parameters in the exhaust gas treatment, i.e., urea tank level. Time series clustering results are illustrated with reference to the functional groups shown in the block diagram in Figure 3. Most of the fuel parameters were grouped into two distinct homogeneous clusters (see Figure 7). Fuel temperatures from the primary storage to the output of tanks 1 and 2 are included in Cluster 1 (Figure 7a), while Cluster 2 (Figure 7b) groups the fuel levels in the two tanks. Engine sensor signals fall, together with other strictly related parameters, into two distinct clusters (see Figure 8). In particular, Cluster 3 (Figure 8a) includes all the cylinder temperatures and the exhaust temperatures, while Cluster 4 (Figure 8b) includes the casing temperatures, the supercharger temperatures, and the temperatures monitored at the engine auxiliaries, e.g., the cooling water, lube oil, and intercooler subsystems. Cluster 4 also contains some parameters affected by the heat exchange with the engine cooling circuit, such as the water inlet temperatures of the process heat circuit and the inlet fuel temperature. All the parameters of the high temperature heat recovery circuit (process steam demand) were, instead, separated into two distinct groups (see Figure 9).
In detail, Cluster 5 (Figure 9a) includes the thermal power and the hot water flow rate monitored at the boiler inlet, while in Cluster 6 (Figure 9b) all the specific steam parameters are grouped together, such as the steam flow rate, pressure, and thermal power, as well as the temperature of the condensed water. As mentioned above, the low temperature heat circuit sensor signals measured at the plant inlet are part of Cluster 4 together with other engine and auxiliaries signals (see Figure 8b), while the water temperatures at the plant outlet and the delta in-out temperature are in Cluster 7 (see Figure 10). The two principal properties of the electric power supply, frequencies and voltages, were divided into two clusters (see Figure 11). Notably, in Figure 11a it is possible to note how the engine speed was included in Cluster 8 together with the generator and grid frequencies. On the other hand, Cluster 9 includes all the generator and grid voltages. Other electrical parameters, such as powers and currents, were instead divided into three different clusters (see Figure 12). In particular, Cluster 10 (Figure 12a) and Cluster 11 (Figure 12b) distinguish, respectively, the generator powers from the generator currents, while Cluster 12 (Figure 12c) groups together the grid powers and currents. The latter refer only to the Phase 2 current, because the Phase 1 and 3 currents were removed in the preprocessing phase due to sensor malfunctions. The clustering results show that the proposed approach is independent of the nature of the monitored parameters and of their functionality within the system. For example, Clusters 1, 2, 7, 9, 10, and 11 (Figures 7a, 10a, 11b, and 12a,b) include only homogeneous variables (e.g., temperatures) belonging to the same functional area (e.g., the engine). Among those, it is interesting to note how the parameters within Cluster 2, i.e., the fuel levels in the tanks for primary storage, seem very different from the Euclidean point of view, yet the method identified a similarity in their global trends. On the other hand, Clusters 5, 6, and 12 (Figures 9a,b and 12c) represent examples of communities populated by heterogeneous physical parameters recorded in the same functional area. Finally, a particular interest derives from the hidden relationships identified between parameters characteristic of different functional areas. Examples are Cluster 3 (Figure 8a), which includes the cylinder and exhaust temperatures; Cluster 4 (Figure 8b), which groups together temperatures referring to the engine external casing, the engine auxiliaries, the heat recovery and fuel pre-heating systems, and the inlet fuel; and Cluster 8 (Figure 11a), which is composed of the engine speed together with frequencies related to both the generator and the grid. After the identification of the clusters, exploratory network analysis was used to render a graphical representation of their degree of similarity (the higher the similarity between nodes, the smaller their spatial distance), thus improving the cluster visualization. The Fruchterman-Reingold layout applied to the similarity graph $C$ after edge pruning provided the results shown in Figure 13. The force-directed layout gives evidence of a central core of strongly connected parameters, which includes most of the fuel temperatures in the storage area (Cluster 1), all the temperatures of cylinders and exhaust (Cluster 3), all the process low temperature parameters (Cluster 7), and most of the generator and grid parameters (Clusters 8-11).
Notably, only two parameters of Cluster 1 are outside the central core, namely T29 and T34, measuring, respectively, the fuel temperature in the primary storage and in tank 2 (the latter being a backup tank). It is also possible to notice how the temperatures of the engine cooling water (T25-T27) and lube oil (T43) subsystems represent a key group in bridging the central core to the other variables of Cluster 4. Similarly, the steam parameters in the high temperature heat recovery (Cluster 6), although not directly included in the central core, appear to be strictly connected to it. As expected, no correlation is active between the fuel levels inside the tanks (Cluster 2), the power and flow rate of the hot water at the boiler inlet (Cluster 5), the grid power and currents (Cluster 12), and the rest of the network. To improve the interpretation of the results by adding quantitative information to the exploratory analysis, we calculated the cumulative percentage distribution of the average degree centrality of each cluster (see Figure 14). The bar chart in Figure 14 attributes a specific ranking to the clusters according to their average contribution to the degree centrality of the network. Overall, the results confirm the considerations made so far in relation to the core communities (Clusters 1, 3, 7, and 8-11), to the boundary communities (Clusters 4 and 6), and to the communities totally unrelated to the network (Clusters 2, 5, and 12). As for the communities included in the central core, it is possible to distinguish the roles they play in the network. In detail, Cluster 8, which groups the engine speed and the generator and grid frequencies together, is the most influential on the control and stability of the global system, followed by Cluster 3, which includes the cylinder and exhaust gas temperatures. Finally, after cluster identification and analysis, FSS was performed by selecting in each cluster the representative signal as the one with the highest degree contribution in its group (see the sketch below). Table 3 shows the selected variables associated with each cluster, together with their degree centrality in the similarity graph and their share contribution to the sum of the degree centralities within the reference cluster. The representative parameters shown in the table are visually confirmed by the force-directed layout in Figure 13. For example, variable T0 (condensate temperature) appears to be the most influential node of Cluster 6 (process high temperature user parameters), having a high number of connections not only with variables of its own cluster, but also with those belonging to the central core of strongly connected signals. Another example is parameter T43 (oil temperature) with respect to Cluster 4 (parameters strictly related to the engine). As reported in the case study, the data matrix considered as input for the analysis has dimensions 30,240 × 78. After the application of the proposed method, by considering the 12 representative cluster variables listed in Table 3 together with the 8 independent signals shown in Figure 6, we obtained a final data matrix of size 30,240 × 20, thus reducing the dimensionality by 74.4%.
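The selection rule referenced above (the signal with the highest total weighted degree centrality within its community, together with its share of the cluster's total degree) can be sketched on a toy similarity graph; node names are illustrative only.

```python
import networkx as nx

# Toy similarity graph and a two-community partition
G = nx.Graph()
G.add_weighted_edges_from([("T0", "T1", 3.0), ("T0", "T2", 2.5),
                           ("T1", "T2", 1.0), ("P0", "P1", 2.0)])
communities = [{"T0", "T1", "T2"}, {"P0", "P1"}]

deg = dict(G.degree(weight="weight"))       # weighted degree centrality
for comm in communities:
    rep = max(comm, key=deg.get)            # representative signal
    share = deg[rep] / sum(deg[n] for n in comm)
    print(rep, deg[rep], round(share, 3))
```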
Performance Metrics An exhaustive evaluation of the proposed method can be obtained through appropriate measures of clustering partitioning and of FSS information content. The lack of ground truth data in the present condition monitoring application precluded the evaluation of the clustering results through classical external indices. In addition, since we used a modularity-based method for the community detection, modularity was identified as the most appropriate metric for the final clustering. A first evaluation of the clustering results was performed using the modularity measure, which quantifies the goodness of the communities on a scale from −1 to 1. In particular, we obtained a modularity index of 0.72, representative of good quality results. Since the proposed approach belongs to the category of unsupervised FSS methods, the final evaluation was performed in terms of the redundancy reduction ratio (RRR) and the information gain ratio (IGR), defined, respectively, in Equations (2) and (4). The proposed method was compared with standard approaches for time series clustering (see Table 4). In particular, a raw data-based method was considered, which uses the Euclidean distance as the time series similarity measure and a partitioning clustering algorithm, namely K-Means, for grouping variables. In addition, we included a feature-based method in the comparison, which involves the extraction of statistical parameters characteristic of the time series (i.e., average, median, standard deviation, skewness, and kurtosis) and the subsequent application of the K-Means algorithm for clustering (sketched below). Table 4. Comparison of the FSS performances between the proposed approach and two standard methods: a raw data-based method and a feature-based one. The evaluation was performed by considering the redundancy reduction ratio (RRR) and information gain ratio (IGR) indices. Table 4 shows that the time series clustering approach seems to be particularly efficient in terms of FSS, allowing a total redundancy reduction of 29.05% in the starting dataset while obtaining, at the same time, a global information gain of 10.60%. It is also interesting to note that both performance metrics are better than those obtained with the standard approaches considered. In particular, the proposed method outperforms the raw data-based clustering approach in terms of both the RRR and IGR indices, with an overall performance improvement of 19.53% and 2.21%, respectively. Looking at the results obtained with the feature-based method, also in this case the proposed approach provides better results, with an increase of 8.09% and 2.70% for the RRR and IGR indices, respectively.
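The feature-based baseline can be reproduced in a few lines. The configuration below (12 clusters, default K-Means settings, random stand-in data) is an assumption, since the exact setup of the comparison is not reported here.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

X = np.random.rand(78, 2016)   # toy stand-in: 78 series, 2016 samples each

# Summary-statistics feature vector per series: mean, median, std,
# skewness, and kurtosis, as listed in the comparison above
features = np.column_stack([
    X.mean(axis=1), np.median(X, axis=1), X.std(axis=1),
    stats.skew(X, axis=1), stats.kurtosis(X, axis=1),
])
labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))     # cluster sizes
```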
Conclusions With the advent of Industry 4.0, the increasing availability of sensor data is leading to a rapid development of models and techniques able to deal with it. In particular, data-driven AI models are becoming essential for the analysis of complex systems based on large data streams. State-of-the-art models are prone to overfitting the data and suffer from performance loss when variables are highly correlated with each other. Many FSS methods have been introduced to address these problems. Notably, it has been demonstrated that clustering-based methods for unsupervised FSS outperform traditional approaches in terms of accuracy. The complexity of the nonlinear dynamics associated with data streams from sensor networks makes standard clustering methods unsuitable in this context. For these reasons, in this paper we propose a new clustering approach for time series, useful for unsupervised FSS, that exploits different complex network tools. In particular, we mapped time series segments into the network domain through natural weighted visibility graphs, extracted their degree sequences as feature vectors to define a similarity matrix between signals, used a community detection algorithm to identify clusters of similar time series, and selected a representative parameter for each of them based on the variables' degree contributions. The analysis of the results highlights two advantages of the proposed method. The first is the ability to group together both homogeneous and heterogeneous physical parameters, even when related to different functional areas of the system. This is obtained by capturing time series similarities not necessarily linked to the signals' Euclidean distance. From the FSS perspective, the approach, by considering 12 representative variables for the identified clusters and the 8 independent signals that were not clustered, reduced the dimensionality of the dataset by 74.4%. Second, as an additional advantage with respect to FSS purposes, the method allows discovering hidden relationships between system components, enriching the information content about the signal roles within the network. Since the construction of a natural weighted visibility graph has time complexity O(L²), L being the number of samples in a time series interval, the proposed approach is intended as an offline filtering tool. In particular, the visibility graph being the bottleneck of the algorithm, the global time complexity is in the order of O(TL²), where T is the number of consecutive non-overlapping segments. Running the algorithm on a dataset of 11 months with time windows of 24 h took approximately 15 min. The idea is to consider the whole dataset at our disposal in order to identify the overall most relevant signals, by averaging the contributions of all intervals. The resulting reduction in the dimensionality of the data streams thus opens the possibility of simplifying the condition monitoring system and its data. If, instead, a real-time tool for FSS or time series clustering is of interest, it is possible to imagine the integration of the proposed algorithm into sensor network now-casting models; e.g., on a sliding window of 24 h the algorithm runs in less than 3 s.
X‐Ray Co‐Crystal Structure Guides the Way to Subnanomolar Competitive Ecto‐5′‐Nucleotidase (CD73) Inhibitors for Cancer Immunotherapy Ecto-5′-nucleotidase (CD73, EC 3.1.3.5) catalyzes the extracellular hydrolysis of AMP yielding adenosine, which induces immunosuppression, angiogenesis, metastasis, and proliferation of cancer cells. CD73 inhibition is therefore proposed as a novel strategy for cancer (immuno)therapy, and CD73 antibodies are currently undergoing clinical trials. Despite considerable efforts, the development of small molecule CD73 inhibitors has met with limited success. To develop a suitable drug candidate, a high resolution (2.05 Å) co-crystal structure of the CD73 inhibitor PSB-12379, a nucleotide analogue, in complex with human CD73 is determined. This allows the rational design and development of a novel inhibitor (PSB-12489) with subnanomolar inhibitory potency toward human and rat CD73, high selectivity, as well as high metabolic stability. A co-crystal structure of PSB-12489 with CD73 (1.85 Å) reveals the interactions responsible for the increased potency. PSB-12489 is the most potent CD73 inhibitor to date, representing a powerful tool compound and a novel lead structure. DOI: 10.1002/adtp.201900075 Ecto-5′-nucleotidase (CD73) catalyzes the hydrolysis of adenosine-5′-monophosphate (AMP). CD73 is a 140 kDa Zn²⁺-binding glycosylphosphatidylinositol-anchored homodimeric membrane protein. [3] It can also be cleaved and released as a soluble enzyme, [4] whose crystal structure has been published (pdb: 4H2I). [5] CD73 was recently proposed as a novel drug target for the (immuno)therapy of cancer, [2,6,7] and antibodies against CD73 are currently being evaluated in clinical trials. [8] However, those antibodies typically show only partial inhibition of CD73, and moreover, they may not penetrate well into solid tumors. Thus, small molecule CD73 inhibitors would be superior for therapeutic application. However, despite considerable efforts, only few inhibitors have been reported so far, [9-15] and most of them appear unsuitable for in vivo application due to low potency, low selectivity, metabolic instability, low water-solubility, and/or high plasma protein binding. [16] In a quest to develop suitable candidates, we selected the moderately potent competitive CD73 inhibitor α,β-methylene-ADP [AOPCP (1)], a more stable analog of the natural inhibitor adenosine diphosphate (ADP), as lead structure. Substitution of the adenine core and modification of the ribose and diphosphate moieties revealed initial structure-activity relationships. [17]

Figure 1. A) Binding modes of PSB-12379 (2) and PSB-12489 (5) to human CD73. Superposition of AOPCP (yellow, pdb code: 4H2I), PSB-12379 (turquoise), and PSB-12489 (purple) bound to CD73 (molecular surface colored by electrostatic potential). Interactions of B) PSB-12379 and C) PSB-12489 within the substrate binding site formed by the N-terminal (blue) and C-terminal (green) domains. Difference electron density omit maps (contoured at 2.0 σ) are shown in blue. D) Close-up of the interactions of the chloro substituent (green) in PSB-12489. Distances (in Å) and angles (°) are indicated. The NH group of N390 is positioned for a favorable side-on interaction with the chloro substituent. The carbonyl oxygen of N390 is too far away from a linear C─Cl···O arrangement for a halogen bonding interaction.
Pyrimidine analogs of 1 were also evaluated, but were generally less potent than the corresponding purine derivatives, and their selectivity versus P2Y nucleotide receptors was mostly moderate. [18] N⁶-Benzyladenosine-5′-O-[(phosphonomethyl)phosphonic acid] (2, PSB-12379, K_i = 9.03 nM) was discovered by our group as a more potent inhibitor than 1, [17] and 2 is now widely employed as a (commercially available) tool compound. One of its drawbacks is the potential hydrolysis of the 5′-phosphonic acid ester, resulting in the formation of N⁶-benzyladenosine, which is an agonist of adenosine receptors and would thus cause undesired effects. In the present study, we describe the preparation of a co-crystal structure of inhibitor 2 with human CD73. This first structure of CD73 in complex with a potent inhibitor was utilized to design significantly improved inhibitors. Finally, we obtained an additional co-crystal structure of the optimized CD73 inhibitor 5 to evaluate our design hypothesis and to explain its improved potency. The co-crystal structures of human CD73 in the closed state with compounds 2 (PSB-12379) and 5 (PSB-12489) were obtained by crystallizing the protein in the presence of 100 µM Zn²⁺ and the respective inhibitor, in analogy to Knapp et al. [5] For both data sets, relatively high resolution limits were achieved, and thus well-defined electron densities were obtained for the inhibitors and the two Zn²⁺ ions in the active site. These co-crystal structures allow for a rational explanation of the inhibitory potency improvements on a structural level (Table S1, Supporting Information). The previously described inhibitor 2 (PSB-12379) showed an approximately 40-fold improved K_i value compared to AOPCP (1). [17] AOPCP binds to the closed conformation of CD73 with the adenine base forming a hydrophobic stacking interaction with F417 and F500 of the specificity pocket in the C-terminal domain, whereas the terminal phosphate group is coordinated to the two catalytic zinc ions of the N-terminal domain. [5] This binding mode is maintained for 2 (Figure 1). In addition, the positions of residues in spatial proximity to the active site of CD73 also remain unchanged, except for N186, which is shifted to provide space for the N⁶-benzyl substituent. The benzyl moiety forms hydrophobic interactions with the N-terminal domain involving the carbon atoms of D121, S185, and N186 (Figure 1). The benzyl group exhibits weaker electron density and higher Debye-Waller factors (B-values) compared to the core structure of 2, indicating a greater flexibility of this group. To further improve the inhibitory potency of 2, combinations of the N⁶-benzyl substituent with an additional alkyl group at the exocyclic amino group were tested. This modification would, in addition, abolish the interaction of the adenosine derivatives with adenosine receptors. [19] Various combinations were tried, with the N⁶-benzyl,N⁶-methyl substitution (3, PSB-12437) yielding the best results (Table 1). The co-crystal structure of 2 indicates a pocket next to the adenine base with a volume of 210 Å³. This pocket has a mostly polar surface formed by N390, D524, NH(F417), NH(F500), and CO(G393), and a hydrophobic base formed by the side chains of F412, P498, L415, L389, and I364. Since this pocket is expected to be best accessible via substitution at the C2-position of the adenine nucleobase, we synthesized 2-substituted AOPCP derivatives exploring this modification to enhance the inhibitory potency.
Various substituents were tried, but only the halogens (chloro and iodo) resulted in (similarly) improved inhibitory activity. Therefore, we combined the N⁶-benzyl group of inhibitor 2 with a 2-chloro substituent, resulting in the very potent inhibitor 4 (PSB-12651, Table 1). Finally, the optimized inhibitor 5 (PSB-12489) was designed by combining the improved features of both inhibitors 3 and 4 (Figure 2). The resulting compound 5 (Table 1), a hybrid of 3 and 4, in fact showed increased potency, as predicted based on the X-ray co-crystal structure of inhibitor 2 with CD73. For its synthesis, the synthetic access to nucleoside 12 was significantly improved. [21] This key compound was obtained in a single step by reaction of tetraacetylribose (13) with 2,6-dichloropurine in the presence of trifluoromethanesulfonic acid, affording 12 in high yield and purity after simple crystallization (Scheme 2). Compound 5 was thus obtained in 45-50% overall yield. The CD73-inhibitory potency of the new compounds was initially determined using recombinant soluble rat CD73 expressed in Sf9 insect cells [22] via a sensitive radiometric assay, which allows the use of substrate concentrations around the low K_m value of CD73. [23] Full concentration-response curves were determined, and K_i values were calculated from the obtained IC_50 values using the Cheng-Prusoff equation (see Table 1 and Figure S4, Supporting Information). [24] The CD73 inhibitors ADP, 1, and 2 had previously been characterized in the same assay, showing K_i values of 3880, 197, and 9.03 nM, respectively. [17] The new inhibitors 3 and 4 displayed improved K_i values of 4.64 and 1.23 nM, respectively. The hybrid inhibitor 5, having the N⁶-benzyl-N⁶-methyl disubstitution of 3 combined with the 2-chloro substitution of 4, resulted in the first subnanomolar CD73 inhibitor, showing a K_i value of 0.746 nM, which corresponds to a 264-fold improvement in potency compared to the standard CD73 inhibitor AOPCP (1). Subsequently, the most potent inhibitor 5 was broadly investigated. When tested at human recombinant soluble CD73 in a radiometric assay, [25] it displayed an even lower K_i value of 0.318 nM compared to the rat enzyme (see Table 1 and Figure 3). Native human serum CD73 [26] was also potently inhibited by 5, with a K_i value of 2.51 nM (as compared to a K_i value of 487 nM determined for AOPCP) (Figure 3). In vivo, CD73 is known to be present in soluble as well as in membrane-bound form. Thus, we studied inhibition of natively expressed CD73 in a human breast cancer cell line, MDA-MB-231, and in human umbilical vein endothelial cells (HUVECs) by compound 5 in comparison to 1 (see Figure 3), using the same assay. [25] The IC_50 value of 5 was 104 nM in MDA-MB-231 cells and 73.5 nM in HUVECs, corresponding to K_i values of 3-5 nM, while that of 1 was determined to be 150-200 nM. Inhibitor 5 was additionally tested at several other CD73-expressing cell lines. In all experiments, concentration-dependent inhibition of CD73 was observed with similarly high potencies (see Figure S6 and Table S2, Supporting Information). For comparison, the inhibition constants of 1 and 5 at the recombinant human enzyme have also been determined from Michaelis-Menten plots, and the results, which are in the same range, are presented in Figure S5, Supporting Information.
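For reference, the Cheng-Prusoff relation used above reads K_i = IC_50 / (1 + [S]/K_m) for a competitive inhibitor. The small sketch below applies it with purely illustrative substrate and K_m values, not those of the study.

```python
def cheng_prusoff_ki(ic50, substrate_conc, km):
    """Competitive inhibition: Ki = IC50 / (1 + [S]/Km).
    All three arguments must share the same concentration unit."""
    return ic50 / (1.0 + substrate_conc / km)

# Illustrative numbers only (nM): an IC50 of 1.5 nM measured at [S] = Km/2
print(cheng_prusoff_ki(ic50=1.5, substrate_conc=5.0, km=10.0))  # -> 1.0 nM
```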
To go one step further, we tested CD73 inhibition by compound 5 in mouse and human tissues. We utilized human tonsil and mouse spleen sections, since both of these tissues are known to express CD73. [26] To this end, the Pb(NO₃)₂ staining technique was used for the detection of phosphate as a product of CD73-mediated AMP hydrolysis, resulting in the precipitation of Pb₃(PO₄)₂. After wash-out of excess Pb(NO₃)₂, (NH₄)₂S was added, resulting in PbS precipitation, which is detectable as a brown precipitate. [26] Human tonsil and mouse spleen tissue sections were incubated with 1 mM AMP in the absence or presence of 20 µM AOPCP or 600 nM of 5 (and also with hematoxylin/eosin dyes to distinguish different tissue structures). Especially in the regions of the central arteries and the capsule, intensive brown staining, corresponding to high CD73 activity, was observed in both tissues. This activity was clearly reduced in the inhibitor-treated samples. Inhibitor 5 was significantly more potent and efficacious than 1, as shown by the reduced brown staining (Figure 4). In fact, inhibitor 5 was identified as the most potent CD73 inhibitor observed so far also in this type of assay. Next, we determined the binding mode of inhibitor 5 to human CD73 by X-ray analysis to understand the basis of its outstanding potency. The chlorine atom at C2 forms a hydrogen bond with the N390 side chain and with a water molecule coordinated to CO(N499) (Figure 1). Both donors are perfectly positioned in a side-on orientation. Furthermore, two CH groups are positioned to interact with the chloro substituent, which thus has a very favorable environment for biomolecular interactions. [27] The Cl substituent is not involved in halogen bonding interactions, as the carbonyl oxygen of N390, the only nearby interaction partner, is not positioned for a favorable halogen bond: the C─Cl···O angle deviates too much from linearity (Figure 1D). [28] The presence of the Cl substituent in 5 also causes the relocation of a water molecule in the C2 pocket to an adjacent, previously unoccupied binding site. Taken together, the chloro substitution at C2 results in overall more favorable interactions in the C2 pocket, explaining the increase in inhibitory potency. A comparison of the binding modes of 2 and 5 shows that the common AOPCP core structures superimpose closely, but the N⁶-benzyl substituents differ by 37.8° in their torsion angle around the bond between N⁶ and the methylene carbon of the benzyl group (Figure 1A). This moderate reorientation of the phenyl ring is in line with the generally somewhat flexible interaction of this group with the N-terminal domain, which is also apparent in the co-crystal structure of 5 from the weaker electron density of this group. The reorientation may be caused by the presence of the N⁶-methyl group in 5 and/or by a slight 0.3 Å shift of the adenine ring of 5 compared to 2 toward the C2 pocket. As a next step, we investigated the selectivity of inhibitor 5 for CD73 versus related targets. Inhibition of other important ectonucleotidases, including the ectonucleoside triphosphate diphosphohydrolases (NTPDases) 1-3 and the nucleotide pyrophosphatases/phosphodiesterases (NPPs) 1-3, was investigated according to described procedures. [13] The standard CD73 inhibitor 1 was previously found to additionally inhibit NPP1.
[17] The new inhibitor 5 did not inhibit any of these ectonucleotidases, nor did it activate or inhibit the ADP-activated P2 receptors P2Y₁ and P2Y₁₂ at a concentration of 10 µM (see Tables S3 and S4, Supporting Information). Finally, inhibitor 5 was further investigated for its stability in human blood plasma and in rat liver microsomes. Known CD73 inhibitors (ADP, 1, and 2) were included for comparison. The experiments were performed as previously described, [17] incubating the samples at 37 °C and analyzing them by LC-MS. In human blood plasma, 5 was completely stable within the incubation period of 5 h. Compound 2 was less stable (8% degradation), 1 was metabolized by approximately 50%, while ADP was completely degraded within 30 min. Thus, the order of stability in human blood plasma was 5 ≥ 2 > 1 ≫ ADP (see Figure S7, Supporting Information). Incubation with rat liver microsomes demonstrated that 5 is metabolically highly stable: less than 5% was metabolized under the applied conditions after incubation for 8 h. Inhibitor 2 was less stable (25% degradation), while ADP and 1 were completely degraded within 5-15 min. Thus, the order of stability in rat liver microsomes was 5 > 2 ≫ 1 > ADP (for details see Figure S7, Supporting Information). In conclusion, we obtained an X-ray co-crystal structure of human ecto-5′-nucleotidase (CD73) in complex with inhibitor 2, which allowed us to design the nucleotide analogue 5. The new CD73 inhibitor 5 shows outstanding potency, selectivity, and metabolic stability, with a subnanomolar K_i value at the human and the rat enzyme. Compound 5 is the most potent CD73 inhibitor described to date, as demonstrated for recombinant CD73 as well as for native CD73-containing preparations, including the soluble enzyme in blood plasma, membrane-bound CD73 in epithelial and cancer cells, and mouse and human tissue sections. Importantly, for 5 there is no risk of the formation of adenosine receptor-activating compounds, which could lead to serious side effects. Therefore, 5 is an excellent tool compound for in vitro and in vivo studies. Based on our results, the first clinical candidate (AB680, see Figure S10, Supporting Information) has recently been announced. [29,30] Small molecule CD73 inhibitors are novel checkpoint inhibitors, which are expected to be superior to antibodies for the immunotherapy of cancer. [Final coordinates and structure factors of the co-crystals with inhibitors PSB-12379 (2) and PSB-12489 (5) have been deposited in the Protein Data Bank (www.rcsb.org) under the accession codes 6s7f and 6s7h]. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
Inconsistent year-to-year fluctuations limit the conclusiveness of global higher education rankings for university management Background. University rankings receive very high international media attention; this holds particularly true for the Times Higher Education Ranking (THE) and the Shanghai Jiao Tong University's Academic Ranking of World Universities (ARWU). We therefore aimed to investigate how reliable the rankings are, especially for universities with lower ranking positions, which often show inconclusive year-to-year fluctuations in their rank, and whether these rankings are thus a suitable basis for management purposes. Methods. We used the publicly available data from the web pages of the THE and the ARWU to analyze the dynamics of change in score and ranking position from year to year, and we investigated possible causes for inconsistent fluctuations in the rankings by means of regression analyses. Results. Regression analyses of results from the THE and ARWU from 2010-2014 show inconsistent fluctuations in the rank and score for universities with lower rank positions (below position 50), which lead to inconsistent "ups and downs" in the total results, especially in the THE and to a lesser extent also in the ARWU. In both rankings, the mean year-to-year fluctuation of universities in groups of 50 universities aggregated by descending rank increases from less than 10% in the group of the 50 highest ranked universities to up to 60% in the group of the lowest ranked universities. Furthermore, year-to-year results do not correspond between the THE and ARWU rankings for universities below rank 50. Discussion. We conclude that the observed fluctuations in the THE do not correspond to actual university performance, and ranking results are thus of limited conclusiveness for the management of universities below a rank of 50. While the ARWU ranking seems more robust against inconsistent fluctuations, its year-to-year changes in the scores are very small, so essential changes from year to year cannot be expected. Neither the THE nor the ARWU offers great value as a tool for university management in its current form for universities ranked below 50; we thus suggest that both rankings alter their ranking procedure insofar as universities below position 50 should be ranked only in aggregated groups of 25 or 50. Additionally, the THE should omit the peer reputation survey, which most likely contributes heavily to the inconsistent year-to-year fluctuations in ranks, and the ARWU should be published less often to increase its validity. INTRODUCTION Global higher education rankings have received much attention recently and, as can be witnessed by the growing number of rankings being published every year, this attention is not likely to subside. Besides the arguable use of ranking results as an instrument for university management, it is still common practice in many universities to use rankings as an indicator of academic performance.
Rankings have become big business, and as of today a plethora of regional and national rankings exist, advocated by their publishers as potentially efficient and effective means of providing universities with the information they need on areas requiring improvement (Dill & Soo, 2005). Numerous studies have analyzed and criticized higher education rankings and their methodologies (Van Raan, 2005; Buela-Casal et al., 2007; Ioannidis et al., 2007; Hazelkorn, 2007; Aguillo et al., 2010; Benito & Romera, 2011; Hazelkorn, 2011; Rauhvargers, 2011; Tofallis, 2012; Saisana, d'Hombres & Saltelli, 2011; Safon, 2013; Rauhvargers, 2013; Bougnol & Dula, 2014). This casts justified doubt on the sensibility of comparing universities that hail from different higher education systems and vary in size, mission and endowment on the basis of mono-dimensional rankings and league tables, and hence on the usability of such rankings for university management and policy making (O'Connell, 2013; Hazelkorn, 2014). Several studies have demonstrated that the data used to calculate ranking scores can be inconsistent. Bibliometric data from international databases (Web of Science, Scopus), used in most global rankings to calculate research output indicators, favor universities from English-speaking countries and institutions with a narrow focus on highly-cited fields, which are well covered in these databases. This puts universities from non-English-speaking countries, with a focus on the arts, humanities and social sciences, at a disadvantage when compared in global rankings (Calero-Medina et al., 2008; Van Raan, Leeuwen & Visser, 2011; Waltman et al., 2012). Data submitted by universities to ranking agencies (e.g., personnel data, student numbers) can be problematic to compare due to different standards. These incompatibilities are amplified because university managers have become increasingly aware of global rankings and try to boost their performance by "tweaking" the data they submit to the ranking agencies (Spiegel Online, 2014). Beyond all the data issues, there is the effect that universities with lower ranking positions often encounter volatile ups and downs in their consecutive year-to-year ranks. These effects make university rankings an inconclusive tool for university managers: the ranking results simply do not reflect the universities' actual performance or their management strategies. Ranking results need to be consistent to be of use, so that long-term strategies (e.g., the hiring of high-calibre researchers from abroad or investments in doctoral education) are reflected in year-to-year scores and ranks and in perennial trends. Furthermore, results from various rankings should be concordant to allow a sort of meta-analysis of rankings. Bookstein et al. (2010) found unacceptably high year-to-year variances in the scores of lower ranked universities in the THE; Jovanovic et al. (2012) and Docampo (2013) found a large number of fluctuations and inconsistencies in the ranks of the ARWU. As we again observed puzzling results in the recently published THE 2014-15 and ARWU 2014, we accordingly analyzed the fluctuations in score and rank of the THE and the ARWU.
By calculating regression analyses for both rankings for consecutive years from 2010 to 2014, we tried to determine the amount of inconsistent fluctuation that can most likely not be explained by changes in university performance (e.g., by an increase in publications/citations or a change in student/faculty numbers), and we listed the universities with the most extreme changes in ranking position in the THE and ARWU. Furthermore, the mean percentages of universities that changed their rank within their groups were calculated for groups of 50 universities aggregated by descending rank in both the THE and the ARWU, and we calculated a regression of the ranking positions of the first 100 universities in the THE 2014 on the first 100 universities in the ARWU 2014. THE The methodology of the THE was revised several times on varying scales, before and after the split with Quacquarelli Symonds (QS) in 2010 and the new partnership with Thomson Reuters. The THE calculates 13 performance indicators, grouped into the five areas Teaching (30%), Research (30%), Citations (30%), Industry income (2.5%) and International outlook (7.5%). However, the THE does not publish the scores of individual indicators, only those of all five areas combined. Since 2010, the research output indicators have been calculated based on Web of Science data. Most of the weight in the overall score is made up by the normalized average citations per published paper (30%) and by the results of an academic reputation survey (33%) assessing teaching and research reputation and influencing the scores of both areas (Rauhvargers, 2013; Times Higher Education, 2014). In the past, criticism has been levied against this survey. Academic peers can choose universities in their field from a preselected list of institutions and, although universities can be added to the list, those present on the original list are more likely to be nominated. This leads to a distribution skewed in favor of the institutions at the top of the rankings (Rauhvargers, 2011; Rauhvargers, 2013). The THE allegedly addressed this issue by adding an exponential component to increase differentiation between institutions, yet no information is available on its mode of calculation (Baty, 2011; Baty, 2012). ARWU The ARWU ranks more than 1,000 (of ca. 17,000 universities in the world) and publishes the best 500 on the web. In addition, the ARWU offers field rankings that cover several subjects and subject rankings for Mathematics, Physics, Chemistry, Computer Science and Economics & Business. Universities are ranked according to their research performance, including alumni (10%) and staff (20%) winning Nobel Prizes and Fields Medals, highly cited researchers in 21 broad subject categories of the Web of Science (20%), papers published in Nature and Science (20%), papers indexed in major citation indices (20%), and the per capita academic performance of the institution (10%). The calculation of the indicators has remained relatively constant since 2004. The ARWU ranks universities individually or in bands by sorting on the total score, which is the linearly weighted sum of the six research output indicator scores derived from the corresponding raw data by transformations. Institutional data (number of academic staff) are not provided by the universities but obtained from national agencies such as ministries, national bureaus and university associations (ARWU, 2013). In contrast to the THE, there are no teaching/student related indicators or any peer survey component in the ARWU.
Due to the reliance on ISI subject fields, the areas of natural sciences, medicine and engineering dominate the citation indicator, putting universities with a focus on the arts, humanities and social sciences at a disadvantage. The per capita performance is the only ARWU indicator that takes the size of the institution into account; thus small but excellent institutions have less of a chance to perform well in the ARWU ranking (Rauhvargers, 2011). Several studies, e.g., Docampo (2011) and Docampo (2013), have already analyzed the ARWU and its indicators and found inconsistencies and unwanted dynamical effects. Beyond the publicly available information on the indicators and their weights, we have no further information on how the scores of the THE and the ARWU are calculated. METHODS We used the publicly available data on scores and ranks from the THE and ARWU for the years 2010, 2011, 2012, 2013 and 2014, including in the THE all universities ranked between 1 and 200 and in the ARWU the universities ranked between 1 and 100, as the ARWU starts aggregating the ranking from rank 101 on. We performed the following analyses for both rankings: (i) we plotted and regressed the scores of the rankings of year t − 1 on the scores of year t; (ii) we plotted and regressed the ranks of the rankings of year t − 1 on the ranks of year t; (iii) we plotted the associations of scores and ranks and approximated the function of the association between scores and ranks; (iv) we investigated the concordance (ranking position of the first 100 universities) of the THE ranking with the ARWU ranking; for this purpose, we regressed the position of the first 100 universities in the THE ranking (2014-15) on the ranking position of the first 100 universities in the ARWU ranking (2014); and finally (v), to also include universities ranked below 200, we aggregated the THE from position 1 on in steps of 50 universities, i.e., we defined 8 aggregated ranking groups (1-50, 51-100, 101-150, 151-200, 201-250, 251-300, 301-350, 351-400) for the THE. As the ARWU starts aggregating in steps of 100 universities from rank 201 on, for comparability we refrained from including universities ranked lower than 200 in the ARWU in our analysis; therefore, for the ARWU we defined 4 aggregated ranking groups, also of 50 universities each (1-50, 51-100, 101-150, 151-200). On the basis of this rearrangement, we made the following calculations: we calculated the mean year-to-year fluctuations (%), i.e., the percentage of universities that changed their rank beyond their aggregated ranking group (moving upwards, moving downwards, newly entering the ranking, or dropping out of the ranking), to get an estimate of the yearly fluctuation according to the ranking groups. We calculated the mean change over the years 2012, 2013 and 2014. The direction and amount of change were not considered; only the fact that a university changed ranking group, i.e., moved within the ranking, dropped out of the ranking, or newly entered the ranking, was counted. A sketch of the core of these calculations is given below.
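Analyses (ii), (iii) and (v) can be sketched as follows on synthetic data. The power-law form fitted in (iii) is the reconstructed rank ∝ score^b relationship reported in the Results, not a formula given explicitly in the source.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic ranks for 200 universities in two consecutive years
rank_prev = np.arange(1, 201)
rank_curr = np.clip(rank_prev + rng.integers(-30, 31, size=200), 1, 200)

# (ii) regression of the ranks of year t-1 on the ranks of year t
res = stats.linregress(rank_prev, rank_curr)
print(f"slope={res.slope:.2f}, R^2={res.rvalue ** 2:.2f}")

# (iii) power-function fit rank ~ a * score**b (b expected to be negative)
score = np.linspace(0.4, 1.0, 200)
rank = score ** -4.5 + rng.normal(0.0, 0.5, 200)
(a, b), _ = curve_fit(lambda s, a, b: a * s ** b, score, rank, p0=(1.0, -4.0))
print(f"a={a:.2f}, b={b:.2f}")

# (v) share of universities leaving their group of 50, per aggregated group
for lo in range(0, 200, 50):
    grp = (rank_prev > lo) & (rank_prev <= lo + 50)
    moved = ((rank_curr[grp] - 1) // 50 != (rank_prev[grp] - 1) // 50).mean()
    print(f"group {lo + 1}-{lo + 50}: {moved:.0%} changed group")
```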
THE Regression of the scores and ranks of two consecutive years The regression of the scores, particularly of the 2010-2011 ranking scores regressed on the scores of the 2011-2012 ranking, shows very high fluctuations (Fig. 1A), especially for the lower ranked universities. Moreover, the fluctuations among the lower ranked universities seem to be higher compared to the THE performed by QS before 2010 (Bookstein et al., 2010, Fig. 1). Note that in the rankings of the years following 2010-2011, the fluctuations in the THE ranking did decrease (Figs. 1B-1D). Tables S1A-S1H show the regression models, including regression coefficients, confidence intervals, p-values, degrees of freedom and R² values for each model. R² ranges from 0.71 to 0.98; the models with smaller R² explain less variance. This confirms the visual impression of particularly high year-to-year fluctuations as depicted in Figs. 1A and 1E. Also when universities are aggregated into groups of 50 universities by decreasing rank, the observed fluctuations increase with increasing rank from year to year, and the mean year-to-year fluctuation (the mean percentage of universities that changed their rank beyond their group) increases in each subsequent lower ranked group (Fig. 3). While the mean year-to-year fluctuation is less than 10% in the first group (1-50), it is over 60% in the group of the lowest ranked universities (351-400). The most extreme cases of "fluctuating university ranks" from 2010 to 2014 in the THE rankings are displayed in Table S3. Association between scores and ranks A general problem of the THE remains: the difference in the scores among the 50 highest scoring universities is considerably larger than the differences among the lower scoring universities. This clearly suggests a non-linear relationship between scores and ranks (Figs. 2A-2E). The consequence is that the ranks of the high scoring universities are much more robust to deviations in the scores from year to year. For the lower ranking universities, however, even very small, more or less random deviations (around 0.5%) lead to unexpected "high jumps" in the ranks from year to year (Figs. 1E-1H). We assume that these fluctuations are to a large extent caused by the results of the peer reputation survey, which can be skewed by low response rates (Rauhvargers, 2011) and by the "Matthew effect" (Merton, 1968). Interestingly, the association between scores and ranks is approximated quite well by a power function, rank ∝ score^b, where b ranges between −4.106 and −4.88 for the THE ranking. In Fig. S1, the power-function fit for the 2014 ranking is shown as an example (as the figures are quite similar, we refrained from plotting all the power fits). ARWU While still at a high level, the regressions of ranks and scores of the ARWU show much smaller fluctuations compared with the THE. This indicates a more robust set of indicators. Furthermore, the ARWU shows a similar, but even more extreme, pattern of non-linearity between ranks and scores compared with the THE. In particular, the first ranked university, Harvard University, scores far ahead of all the other universities in the ARWU in each year. As in the THE, the association between ranks and scores flattens from rank 50 on (Figs. 4A-4E). As in the THE, the non-linear relationship between ranks and scores increases the year-to-year fluctuations in the ranking positions of the universities ranked approximately below 50 (Figs. 3A-3D). Tables S2A-S2H show the regression models, including regression coefficients, confidence intervals, p-values, degrees of freedom and R² values for each model. R² ranges from 0.71 to 0.98; the models with smaller R² explain less variance, i.e., the amount of inconsistent fluctuation is higher. The higher R² values indicate that the ARWU shows fewer inconsistent fluctuations compared to the THE, which confirms the visual impression of Figs. 3A-3H and our notion that the inconsistent fluctuations in the THE are largely caused by the results of the peer reputation survey, which is not included in the ARWU.
As in the THE, the association between scores and ranks is approximated quite well by a power function, rank ∝ score^b, where b ranges between −2.96 and −3.00 for the ARWU (compared with −4.106 to −4.88 for the THE). As in the THE, if universities are aggregated in steps of 50 universities by decreasing rank, the fluctuations increase with increasing rank from year to year, and the mean year-to-year fluctuation (%) increases in the lower ranking groups (Fig. 6). While the mean year-to-year fluctuation is less than 10% in the first group (1-50), it is over 40% in the group of the lowest ranked universities (151-200). The most extreme cases of "fluctuating university ranks" from 2010 to 2014 in the ARWU rankings are displayed in Table S4. Correlation between THE and ARWU A truly dramatic amount of inconsistent fluctuation is revealed by the regression of the ranks in the THE on the ranks in the ARWU: for the universities ranked approximately below the 50th rank, there is virtually no correlation between the THE and the ARWU (Fig. 7). The regression could only be plotted for universities ranked among the first 100 in both rankings. The R² of 0.52 indicates, as seen in the plot (Fig. 7), that only a relatively small amount of the variance can be explained by the association between the THE and the ARWU (Table S3), i.e., the inconsistent fluctuations are quite high. This implies that the THE and ARWU make substantially different statements on the performance of universities ranked below 50, thus making the two rankings hard if not impossible to compare. This effect is to some degree understandable, due to the different set-up of indicators in the two rankings; however, one could assume that the academic performance of universities would be reflected more homogeneously in both rankings. More homogeneous results would also allow a "meta-ranking", which could be of more value for university management than singular contradicting ranking results. DISCUSSION High ranking positions achieved by a small group of universities are often self-perpetuating, especially due to the intensive use of peer review indicators, which improve the chances of maintaining a high position for universities already near the top (Bowman & Bastedo, 2011; Rauhvargers, 2011). This phenomenon also corresponds to the "Matthew effect," which was coined by Merton (1968) to describe how eminent scientists will often get more credit than a comparatively unknown researcher, even if their work is similar: credit will usually be given to researchers who are already famous. The intensive and exaggerated discussion in the media of the "ups and downs" of universities in the THE is particularly misleading for universities with lower ranking positions (below approximately a score of 65% and a rank of 50; above scores of 65%, the relationship between ranks and scores is steeper, and it flattens for scores below 65%). This is because the ranking positions suggest substantial shifts in university performance despite only very subtle changes in score. In fact, merely random deviations must be assumed. One reason lies in the weighting of indicators by the THE, with the emphasis on citations (30% of the total score) and the peer reputation survey (33% of the total score).
For lower-ranked universities, a few highly cited publications, or the lack thereof, or a few points awarded by peers in the reputation survey, probably make a significant difference in total score and position. Ranking results have a major influence on the public image of universities and can even impact their claim to resources (Espeland & Sauder, 2007; Hazelkorn, 2011). Accordingly, inconsistent fluctuations in ranking positions can have serious implications for universities, especially when the media or stakeholders interpret them as direct results of more or less successful university management. The use of monodimensional rankings for university management is generally doubtful. Our results show that the THE, especially in its current form, has very limited value for the management of universities ranked below 50. This is because the described fluctuations in rank and score probably do not reflect actual performance, so the results cannot be used to assess the impact of long-term strategies. "Rankings are here to stay, and it is therefore worth the time and effort to get them right," warns Gilbert (2007). What could be done to address the fluctuations in the THE for universities below rank 50, to make it a more usable tool for assessing actual performance? The THE has already addressed fluctuations to some extent by ranking universities individually only down to position 200, followed by groups of 25 from 201-300 and groups of 50 from 300 to 400. Nonetheless, based on our data we believe that this does not go far enough, and we suggest that universities below position 50 should be summarized in groups of 25 or 50. Furthermore, we believe that these inconsistent fluctuations are caused to a large extent by the results of the peer reputation survey, which can be skewed by low response rates (Rauhvargers, 2011) and biased by the "Matthew effect" that favours already well-renowned institutions (Merton, 1968). The latter could also help to explain the consistency of the scores and ranks in the group of the top-50 universities. Thus, in order to increase the validity of the THE, the peer reputation survey should be omitted or given less weight in future rankings. The analysed curves of scores vs. ranking positions in Fig. 2 have characteristics analogous, for example, to the semi-logarithmic curves produced in analytical biochemistry. The accuracy of such curves is limited to the steepest slope of the curve, whereas the asymptotic regions deliver higher fuzziness (Chan, 1992). A further suggestion to avoid this blurring dilemma is therefore a methodological standardization process for THE data. This would involve using common, suitable reference data to create calibration curves, whether non-linear or linear. A simple calibration would be, as suggested above, to categorize universities into ranking groups, or to apply a transformation to the scores, such as a logarithmic transformation. Comparing the year-to-year fluctuation in the ARWU with that in the THE reveals that fluctuation in the ARWU ranking is overall lower than in the THE ranking (Fig. 1 vs. Fig. 3); i.e., the ARWU ranking seems to be more stable. On the one hand this is good news: a smaller amount of fluctuation. On the other hand, it must be asked whether a yearly publication of the ARWU makes sense if no "real" changes can be expected. However, the same holds true for all rankings published on a yearly basis: no factual changes reflecting university strategies can be expected.
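The two calibration ideas above can be made concrete with a short sketch (our illustration, not part of the original analysis; the score values are hypothetical): a logarithmic transformation of the scores, and a banding function that reports exact ranks only down to position 50 and groups of 25 below it.

```python
import numpy as np

scores = np.array([92.4, 71.0, 55.3, 48.9])   # hypothetical total scores
print(np.round(np.log(scores), 3))            # simple logarithmic transformation

def rank_band(rank, exact_until=50, band_width=25):
    """Report exact ranks down to `exact_until`, then bands of `band_width`."""
    if rank <= exact_until:
        return str(rank)
    lower = exact_until + ((rank - exact_until - 1) // band_width) * band_width + 1
    return f"{lower}-{lower + band_width - 1}"

for r in (12, 51, 76, 130):
    print(r, "->", rank_band(r))   # 12, 51-75, 76-100, 126-150
```

Reporting bands rather than single positions would keep small, essentially random score deviations from appearing as large rank movements.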
The astonishingly low correlation between the ranks of the THE and the ARWU, particularly for the universities ranked below 50 in both rankings, raises further serious doubt as to whether rankings should be used for any management purposes at all. Perhaps a meta-analysis of rankings could be a reasonable way to derive consistent and reliable results from them. If done, such a meta-analysis should include as many rankings as possible to reduce the amount of inconsistent fluctuation that does not reflect actual university performance.

CONCLUSION

Both rankings show fluctuations in rank and score, particularly for universities with lower ranking positions (below position 50), which lead to inconsistent "ups and downs" in the total results, especially in the THE and to a lesser extent also in the ARWU. The observed fluctuations most likely do not correspond to academic performance; therefore, neither the THE nor the ARWU offers great value in its current form as a tool for the management of universities ranked below 50.

ADDITIONAL INFORMATION AND DECLARATIONS

Funding

The authors declare there was no funding for this work.
Estimating Timing of Specific Motion in a Gesture Movement with a Wearable Sensor

Wearable devices with motion sensors, such as accelerometers and gyroscopes, are expected to become popular. There is a great deal of research on recognizing gestures using data obtained from motion sensors. A gesture is a one-off motion, and its trajectory, i.e., the waveform of the gesture part, is considered important. After segmenting the data, gestures are recognized using a template matching method. However, there has been no method for accurately detecting when a specific action is performed during a gesture. Although a player in a game can be made to perform a throwing motion when the user performs a pitching action, it is difficult to reflect a particular moment, such as the user's release point, in the player. The authors previously proposed a method using a wrist-worn sensor for determining the moment of touching a card in competitive karuta (a Japanese card game) and developed a system that judges which player took a card first in a competitive karuta match. As reported in this paper, we improved the estimation method so that our approach can be applied to a variety of gestures beyond competitive karuta, and we propose a method of detecting the timing of a specific action. Our system was evaluated for three types of release points (baseball throws, basketball free throws, and dart throws) with 11 subjects who had an accelerometer and a gyroscope attached to the wrist. The percentage of release point estimation errors of 12 ms or less was 100% for baseball, 87.6% for basketball, and 91.1% for darts.

Introduction

Along with the spread of wearable devices embedded with motion sensors, such as smartwatches and smart glasses, research and development of applications that recognize gestures using data obtained from motion sensors has been actively conducted. The Moff Band by Moff, Inc. (1) has a built-in motion sensor, enabling it to play sounds such as those of a ninja throwing knife or a guitar. Smartphones, such as the iPhone by Apple Inc. and Android-powered devices, and video game remotes, such as those of the Nintendo Switch, also have built-in motion sensors to detect the tilting and motion of the device, enabling the user to control game characters and draw objects intuitively. The human activities dealt with in many studies are postures, such as sitting, and behaviors, such as walking, which are states in human activity lasting for a certain length of time. They are generally recognized with a classifier such as a support vector machine (SVM) or random forest (RF) operating on extracted feature values, such as the mean, variance, and fast Fourier transform (FFT) power spectrum, that express body orientation and exercise intensity. Other important activities in daily life include gestures, e.g., punches. Gestures are not states but one-off actions, and they can be recognized with a template matching algorithm such as dynamic time warping (DTW) (2) after trimming the waveform of the gesture. DTW calculates the temporal nonlinear elastic distance between two sequences of the same gesture that vary in time or speed; therefore, the timings of specific motions within a gesture are not taken into account in gesture recognition.
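As an illustration of the DTW template matching referred to above, the following minimal sketch (our own, not the paper's implementation; the sine waveforms are toy stand-ins for gesture data) computes the length-normalized elastic distance between two one-dimensional sequences and recovers the warping path, which is what later allows a timestamp labeled in training data to be mapped onto input data.

```python
import numpy as np

def dtw(x, y):
    m, n = len(x), len(y)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the lowest-cost warping path from (m, n) to (1, 1).
    path, i, j = [], m, n
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))           # 0-indexed sample pair
        steps = {(i - 1, j): D[i - 1, j], (i, j - 1): D[i, j - 1],
                 (i - 1, j - 1): D[i - 1, j - 1]}
        i, j = min(steps, key=steps.get)
    path.append((0, 0))
    return D[m, n], path[::-1]

x = np.sin(np.linspace(0, 3, 30))             # toy "training" gesture
y = np.sin(np.linspace(0, 3, 40)) * 1.1       # same gesture, different speed
dist, path = dtw(x, y)
print(dist / (len(x) + len(y)), path[:5])     # length-normalized distance
```

Dividing by the sum of the sequence lengths keeps the distance comparable across gestures of different durations, which is also how the paper normalizes its DTW distance.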
By applying gesture recognition technology to video games such as Wii Sports, (3) a game user can make a character in the game throw a ball by making a throwing gesture, but information on specific timings, such as the release point, cannot be reflected in the character. In theory, given a timestamp of a specific motion labeled in the training data, the time of the specific motion in the input data can be estimated with DTW, since the DTW algorithm finds the correspondence between samples of the training and input data. However, the waveforms of complicated gestures, such as throwing a ball, include many peaks, and these peaks generally do not match completely in the DTW algorithm, resulting in a large estimation error. The authors previously proposed a system that judges which player took a card first in a competitive karuta (Japanese card game) match. (4) The system measures the motion data when players take a card by using a wrist-worn accelerometer and gyroscope, and estimates the times when the players touch the card. Generally, competitive karuta is played without a referee, so players must judge for themselves (self-judgement) even when a difficult situation arises. Most rounds are not controversial, but sometimes players get into an argument over who touched a card first, which disrupts the other matches in the room, because multiple matches are played simultaneously in parallel with one reciter in a large room. In this paper, in order to apply the method of estimating the card touch time to other gestures, we improve the method so as to find automatically the threshold parameter that had been set manually in our previous work. We assume that users wear a sensor on their wrist, such as a smartwatch, and we evaluate our method for baseball pitching, basketball free throws, and dart throws. Our motion timing estimation method can be used for video games and virtual reality/augmented reality (VR/AR) systems in which characters are manipulated by moving the user's body. This research has been approved by the Human Ethics Committee of the Graduate School of Engineering, Kobe University. This paper is organized as follows. Section 2 introduces related work on gesture recognition and motion timing estimation, Sect. 3 explains the proposed system, and Sect. 4 evaluates the performance of our system. Lastly, Sect. 5 concludes this paper.

Related Work

Activity recognition using wearable sensors has mainly tackled tasks that classify an unknown activity into one of a number of predefined activity classes, while there have been some studies on detecting the moment when the motion changes.

Gesture recognition

There have been many studies on activity recognition using wearable sensors, some of which have been applied to sports. Kos and Kramberger (26) proposed a miniature wearable device for detecting and recording the movement and biometric information of a user during sports activities. The device weighs 5.8 g and has an accelerometer, a gyroscope, a temperature sensor, and a pulse sensor. Lapinski et al. (5) evaluated professional baseball pitchers and batters by using wearable sensor systems, and Ladha et al. (6) proposed a climbing performance analysis system using a watch-like sensing platform that measures acceleration. Kosmalla et al. (7) also proposed a system for climbing using wrist-worn inertial measurement units. The system can automatically recognize the route that a climber took during a climbing session. Bächlin et al.
(8) built a system consisting of sensing and feedback hardware for swim analysis. The system opens up exciting new possibilities in the field of swimming training, as objective values can be provided at all times for complete training. Lee et al. (18) proposed a hand gesture recognition algorithm using an inertial sensor and a magnetometer. Six gestures were tested, and an average recognition accuracy of 98.75% was achieved. None of these systems, however, can estimate the instant at which an action is performed. Zhou et al. (9) constructed a system that uses textile pressure-sensing matrices. The system can distinguish the different ways in which a player's foot strikes the ball. Connaghan et al. (10) investigated tennis stroke recognition using a single inertial measurement unit (IMU) attached to a player's forearm. They classified tennis strokes into serves, forehands, and backhands. However, these studies did not measure the timing of the ball being struck. Blank et al. presented an approach for ball impact localization on table tennis rackets using piezoelectric sensors. (11) However, they did not examine the precision of the ball impact timing. The same group also proposed a system that uses inertial sensors attached to a table tennis racket. (12) The system detected table tennis strokes by using an event detection method. This method detected strokes with an accelerometer installed in the racket grip and achieved a precision of 0.957 and a recall of 0.982.

Motion timing estimation

Chi et al. (19) proposed a system that assists the umpires in Taekwondo matches by attaching piezoelectric sensors to the body protectors of the players. Helmer et al. (20) proposed an automated scoring system for amateur boxing by attaching an array of piezoelectric sensors to the players' vests. Maglott et al. (27) investigated differences in arm motion during basketball shooting. They used a tight-fitting stretchable sleeve embedded with two 9-axis inertial measurement units (IMUs). Their experiment showed that trained shooters shot free throws faster than novice shooters; however, only the timing of peaks was compared, and the motion itself was not considered. Kim and Park (29) developed a golf swing segmentation algorithm using 3-axis acceleration and 3-axis angular velocity data. The algorithm divides the input sequence into five major predefined phases with an average segmentation error of 5-92 ms. Lian et al. (28) developed a recognition algorithm for six serial phases of a throwing action in baseball from acceleration data. They achieved a recognition accuracy of 91.42-95.14% for three test subjects for the six phases; however, the estimation error of the segmentation was not evaluated. Moreover, Mencarini et al. (25) surveyed and reviewed a corpus of 57 papers published from 1999 to 2018 on HCI research into wearable technology in the sports domain. Kanke et al. (13) proposed the Airstic Drum, a drumstick with an accelerometer that plays an actual drum when the user physically strikes the drum surface in front of them and a virtual drum when the user strikes the air. When the real drum is hit, the actual sound is produced, and when the virtual drum is hit, the sound is output from the system. The Airstic Drum identifies whether the object hit is a real or virtual drum before the moment the drum is hit, and only outputs a sound when the virtual drum is hit. However, the difference between the moment of striking and the moment of sound output was not quantitatively evaluated.
In addition, the algorithm was specialized for detecting drum strikes, and it is unknown whether the system can be applied to other activities. The current authors (14) proposed a method that recognizes gesture activities with high accuracy while the user is moving. The method judges the constancy, i.e., the periodicity of the waveform, of human activities by calculating the autocorrelation of acceleration values, and conducts gesture recognition only when the constancy breaks. In that study, we did not evaluate whether the moment when the constancy breaks is the correct starting point of the gesture, and the constancy decision was made every 800 ms; therefore, it is difficult to use the method for detecting timing with an accuracy of 10 ms. Yoshizawa et al. (15) proposed a method that finds the changing points of activities from acceleration data and obtained a precision of 50% for changing point detection when the allowable error was within 1800 ms. We also proposed a system that judges which player took a card first in a competitive karuta match. (4) In competitive karuta, the time difference between different players touching a card is extremely small, and our proposed system distinguishes time differences of milliseconds. In this study, we improve the method by automatically finding the threshold parameter that had been set manually in our previous work, and we evaluate our method for baseball pitching, basketball free throws, and dart throws.

Proposed System

In this section, we explain the proposed method used to estimate the timing of a specific motion in a gesture.

System structure

We propose a system that uses an inertial sensor attached to the wrist of the user's dominant hand, as shown in Fig. 1. The sensor used in the system contains a wireless three-axis accelerometer and a gyroscope (WAA-010 by Wireless Technologies, Inc. (16)). The sensor has dimensions of W39 × H44 × D12 mm and weighs only 20 g. In other words, the sensor is small and light and does not interfere with gestures. The proposed system estimates the time of a specific motion based on gesture data labeled with the correct motion timing. Figure 2 shows the flow of the system. A user's movement is captured through the small wrist-worn sensing device, which is configured to record three-axis acceleration and angular velocity data. The sensor data are sent to a device such as a smartphone via Bluetooth, and the system installed on the device compares the input data with the training data. Then, the time at which the user performed the specific motion is estimated. The exact time of the specific motion is labeled in the training data, which is collected in advance. The confidence of the time estimate is then calculated. Lastly, our system outputs the estimated time.

Data segmentation

Since the sensor data are captured before and after the gesture, the system detects the gesture within the stream of acceleration and angular velocity data and extracts the corresponding data. The system calculates the composite value of the three-axis accelerometer

A(t) = √(a_x(t)² + a_y(t)² + a_z(t)²),

where a_x(t), a_y(t), and a_z(t) are the acceleration values in the x-, y-, and z-axis directions, respectively. If the condition A(t) > Th_s is satisfied continuously for T_s ms for the first time, the proposed system determines that the gesture movement begins at time T_start. Then, if the condition A(t) < Th_e is satisfied continuously for T_e ms, the system determines that the gesture finishes at time T_end. Here, Th_s and Th_e are the thresholds for the start and end of a gesture, set to 1300 and 1100 mG, respectively, on the basis of a pilot study.
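A minimal sketch of this segmentation step follows (our illustration; the window lengths n_start and n_end are illustrative stand-ins for T_s and T_e in samples, and the reading that the gesture ends once A(t) stays below the end threshold is our assumption).

```python
import numpy as np

def segment_gesture(ax, ay, az, start_th=1300.0, end_th=1100.0,
                    n_start=5, n_end=5):
    """Return (t_start, t_end) sample indices of the detected gesture."""
    A = np.sqrt(ax**2 + ay**2 + az**2)            # composite acceleration in mG

    t_start = None
    for t in range(len(A) - n_start):
        if np.all(A[t:t + n_start] > start_th):   # A(t) > Th_s held for T_s
            t_start = t
            break
    if t_start is None:
        return None

    for t in range(t_start + n_start, len(A) - n_end):
        if np.all(A[t:t + n_end] < end_th):       # A(t) < Th_e held for T_e
            return t_start, t
    return t_start, len(A) - 1

# Synthetic demo: a burst of high acceleration between samples 300 and 400.
t = np.arange(1000)
ax = (1000 + 800 * ((t > 300) & (t < 400))).astype(float)
ay = np.zeros_like(ax)
az = np.zeros_like(ax)
print(segment_gesture(ax, ay, az))                # e.g., (301, 400)
```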
Through the data segmentation, the following segmented data G are obtained:

G = ( a_x(T_start), ..., a_x(T_end);
      a_y(T_start), ..., a_y(T_end);
      a_z(T_start), ..., a_z(T_end);
      g_x(T_start), ..., g_x(T_end);
      g_y(T_start), ..., g_y(T_end);
      g_z(T_start), ..., g_z(T_end) ),   (1)

where g_x(t), g_y(t), and g_z(t) are the angular velocities around the x-, y-, and z-axes, respectively.

Acquisition of training data

The proposed system compares the segmented data with the training data to estimate the specific motion time. One might think that the motion time in the segmented data could easily be estimated just by looking at the waveform, without training data. However, this is difficult because the motion time in the segmented data does not always correspond to a peak in the data obtained from the sensor attached to the wrist. For example, the ball release point is the point of maximum palm speed in baseball pitching, which means that the acceleration does not show a peak there. (21,22) The gestures are video-recorded with a high-speed camera (Sony Cyber-shot RX100 IV (DSC-RX100M4)) at 960 fps, and the exact time of each motion is obtained manually and added to the training data afterward.

Estimation of motion time

The proposed system utilizes two methods of estimating the motion time: a feature-value-based method (method 1) and a waveform-similarity-based method (method 2). By combining these two methods, the proposed system estimates the motion time. The algorithms of the methods are described in detail below.

Method 1: feature-value-based method

The feature-value-based method uses a sliding-window approach. Given the segmented data of gesture G in Eq. (1), feature values are extracted over a three-sample window that is slid in steps of one sample. The feature values used are the max, min, and variance for the six axes (6 axes × 3 features = 18 dimensions) and the angle of the wrist for the three axes (3 axes × 1 feature = 3 dimensions), giving 21 dimensions in total. The angle of the wrist is calculated by integrating the angular velocity. These feature values F(t) = (f_1(t), f_2(t), ..., f_21(t)) are calculated over the window [t − 1, t, t + 1] from t = T_start + 1 to T_end − 1. Feature values are also calculated from the training data at the motion time T_true only, i.e., F(T_true) is calculated. Since the scales of the feature values differ, the feature vector F(t) is standardized using Z = (F(t) − M)/S, where M = (m_1, ..., m_21) and S = (s_1, ..., s_21) are respectively the mean and standard deviation of F(t) over the training data. After this conversion, the 21-dimensional feature vector Z(t) = (z_1(t), z_2(t), ..., z_21(t)) is obtained, whose mean and variance become 0 and 1, respectively. The Euclidean distance between the standardized feature vector of the i-th training data at its motion time and Z(t) is then calculated for each t, where i ranges over the N training data, and the time T_min at which this distance takes its minimal value is found. T_min is taken as the estimated motion time of the input data, since the waveform of the input data around T_min is similar to that of the training data at the motion time.

Method 2: waveform-similarity-based method

The waveform-similarity-based method calculates the similarity between training and input data by DTW, (2) which measures the similarity of two time series. The advantages of DTW include the ability to calculate a temporal nonlinear elastic distance, the ability to measure the similarity between two sequences that may vary in time or speed, and the fact that the numbers of samples in the two time series need not be equal. The details of the algorithm are as follows. For simplicity, we explain the algorithm for one-dimensional data.
When training data X = (x_1, ..., x_m) and input data Y = (y_1, ..., y_n), with lengths m and n, respectively, are compared, an m × n matrix is defined as d(i, j) = |x_i − y_j|. Next, the warping path W = (w_1, ..., w_k), which is the path of pairs of X and Y indices, is found. W satisfies three conditions:

• Boundary: w_1 = (1, 1), w_k = (m, n)
• Monotonicity: the indices along the path never decrease
• Continuity: each step of the path advances at most one index in each sequence

The path with the lowest cost satisfying these conditions is found by the standard dynamic-programming recursion D(i, j) = d(i, j) + min(D(i − 1, j), D(i, j − 1), D(i − 1, j − 1)), and D(m, n) divided by the sum of the lengths of the input and training data is the distance between X and Y; the division is needed since the raw DTW distance increases with the length of the sequences. The motion time for the input data is estimated by finding the index of the input data corresponding to the index of the motion time in the training data on the warping path, as shown in Fig. 3. If multiple indices of the input data correspond to the index of the motion time in the training data, the estimated motion time is set to the earliest index.

Judgement of confidence flag

Since motions are not always similar even for the same gesture, the proposed system has to consider anomalous input data. Even if the input data are completely different from the training data owing to an unintended motion or hitting an object, a motion time is still estimated, resulting in an inaccurate measurement. To address this problem, our system utilizes a confidence flag that represents whether the estimated motion time is reliable or not. If it is reliable, the confidence flag is "HIGH"; if not, it is "LOW". Figure 4 shows the algorithm used to judge the confidence flag. Suppose that N samples of training data are collected in advance. The actual motion time is labeled in the training data, and the motion time is estimated for the training data by methods 1 and 2 independently in an N-fold cross-validation manner. The difference between the labeled motion time (ground truth) and the estimated motion time is the error. If the error is less than or equal to α ms, the confidence flag HIGH is given to the training data; if not, LOW is given. How α is set is explained later. Training data with the LOW flag are considered anomalous, since no other training data are close to them. The proposed system classifies the input data into HIGH and LOW using a model trained on the training data labeled with the confidence flags, in parallel with estimating the motion time by methods 1 and 2. The confidence flags are trained and classified, and the motion time is estimated, for each method. For method 1, the 21-dimensional feature values over the window at t = T_min extracted in method 1 are trained with the J48 classifier, a C4.5 implementation in WEKA. (17) For method 2, the DTW distances between the input data and the best-matching training data calculated in method 2 are trained with Random Tree, a decision tree algorithm implemented in WEKA. Random Tree is employed because J48 did not work well for the scalar explanatory variable, i.e., the DTW distance. The confidence of methods 1 and 2 can then be obtained by classifying the input data.

Calculation of the threshold

Here, the method of determining the threshold α used when labeling the confidence flag as HIGH or LOW is described. Figure 5 shows the algorithm for determining α. Firstly, the estimation error is calculated in the same manner as in Sect. 3.4. The estimation error is an integer multiple of the sampling interval T_s.
Let the maximal and minimal estimation errors be N_max T_s and N_min T_s, respectively. The α range in which the confidence flags of the training data include both HIGH and LOW is (N_min + 1)T_s ≤ α ≤ (N_max − 1)T_s. The assignment of HIGH and LOW flags changes only within this range, since all the confidence flags are HIGH for α above this range and all LOW for α below it. The classifiers are then trained using the feature values labeled with the confidence flags in methods 1 and 2. In order to find the best α, the proposed method creates a confusion matrix of the classification results of the confidence flags in a cross-validation manner, to evaluate the classification accuracy for methods 1 and 2 as α is varied. A confusion matrix is a table showing the classification results for both the input and the output, e.g., a 2 × 2 matrix in a two-class classification problem. The classification accuracy C(α) is computed from the confusion matrix for each α in the range (N_min + 1)T_s ≤ α ≤ (N_max − 1)T_s, and C_1(α) and C_2(α) are obtained for methods 1 and 2, respectively. Lastly, the value of α at which C_1(α) or C_2(α) takes its highest value is set as the threshold for the respective method. A high C(α) means that the confidence flags are classified with high accuracy; therefore, strange input data, i.e., data potentially producing a large estimation error, will be given a LOW flag in the confidence-flag judgement phase explained in Sect. 3.5.

Output of estimated timing

Since preliminary experiments showed that the motion times estimated by the two methods are not always accurate, the proposed method outputs the estimated motion time by combining both methods and considering the confidence flags. There are four combinations: two confidence flags and two methods. The conclusive estimated motion time is adopted according to the following rules.

• If the confidence flag of method 2 is HIGH, the motion time estimated by method 2 is adopted regardless of the confidence of method 1. This is because a preliminary experiment showed that the accuracy of estimating the motion time by method 2 was superior to that of method 1.
• Otherwise, if the confidence flag of method 1 is HIGH, the motion time estimated by method 1 is adopted.
• If the confidence flags of both methods are LOW, the motion time is not estimated and UNIDENTIFIED is output.

Evaluation

This section evaluates the estimation error of the proposed method applied to baseball pitching, basketball free throws, and dart throws. All three gestures are throwing motions, but we treated them as different movements: baseball pitching uses the whole arm, basketball free throws mainly use the wrist, and dart throws use the arm with the elbow fixed.

Environment

We evaluated the performance of estimating the ball release time in baseball pitching. Data on the pitching action were captured 70 times in total from three subjects A, B, and C (all right-handed males) through the proposed system by attaching a wireless sensor to their dominant hand. As an indicator of the performance, we measured the error of the estimated motion time, which is the difference between the estimated release time and the exact release time. The experiment was video-recorded with a 960 fps high-frame-rate camera. We examined the video and added the release time to the sensor data. Acceleration and angular velocity data were collected at 333 Hz. We estimated the release time for each subject independently in a leave-one-sample-out cross-validation manner. Table 1 shows the threshold α giving the highest accuracy of confidence flag classification.
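The threshold search just described can be sketched as follows (our illustration: scikit-learn's DecisionTreeClassifier stands in for WEKA's J48 and Random Tree, and the feature matrix and cross-validated errors are assumed to be given).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def best_alpha(features, errors_ms, t_s=3.0):
    """features: (N, d) per-sample feature matrix; errors_ms: CV errors in ms."""
    n_min = int(np.min(errors_ms) // t_s)
    n_max = int(np.max(errors_ms) // t_s)
    # Candidate alphas in the range (N_min + 1)T_s <= alpha <= (N_max - 1)T_s.
    candidates = np.arange((n_min + 1) * t_s, (n_max - 1) * t_s + t_s, t_s)
    best = (None, -1.0)
    for alpha in candidates:
        flags = errors_ms <= alpha            # HIGH = True, LOW = False
        if flags.sum() < 3 or (~flags).sum() < 3:
            continue                          # need both classes for CV
        acc = cross_val_score(DecisionTreeClassifier(), features,
                              flags, cv=3).mean()   # C(alpha)
        if acc > best[1]:
            best = (alpha, acc)
    return best                               # (alpha, C(alpha))
```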
For example, for subject A, the minimal estimation error N_min T_s and maximal estimation error N_max T_s obtained through cross-validation by method 1 within the training data are 0 and 18 ms, respectively. The sampling interval T_s is 3 ms, so N_min = 0 and N_max = 6, resulting in α in the range 3 ≤ α ≤ 15. By calculating the accuracy of confidence flag classification C(α) over this range of α, the maximal accuracy of 0.92 is obtained at α = 9 ms.

Results

Figure 6 shows the histogram of the estimation error of the release point. Outputs of UNIDENTIFIED, i.e., the decision when the confidence flags for both methods 1 and 2 are LOW, are removed from the results. From the results, the largest error is 12 ms, and 15.9% of the estimated motion times are exact (±0 ms error). Considering that the sampling interval of the system is 3 ms, 61.9% of the errors are within ±3 ms. The mean absolute error is 3.75 ms. A 3 ms estimation error corresponds to about a 10 cm error for baseball pitching, as the speed of the hand immediately before releasing the ball is 30 m/s. UNIDENTIFIED was output seven times out of 70 trials, all of which occurred for subject B. This is because method 1 set α to 3 ms for subject B; therefore, even outputs whose estimation error was 3 ms were given the LOW confidence flag, resulting in UNIDENTIFIED.

Environment

We evaluated the performance of estimating the ball release time in basketball free throws. Data on free throws were captured 20 times from each of five subjects D, E, F, G, and H (all males) by attaching a wireless sensor to their dominant hand through the proposed system; a total of 100 samples were collected. Four subjects were right-handed and one subject was left-handed. Two of the subjects had more than three years of experience playing basketball. As an indicator of the performance, we measured the error of the estimated motion time, which is the difference between the estimated release time and the exact release time. The experiment was video-recorded with a 960 fps high-frame-rate camera. We examined the video and added the release time to the sensor data. Acceleration and angular velocity data were collected at 1000 Hz. We estimated the release time for each subject independently in a leave-one-sample-out cross-validation manner. Table 2 shows the threshold α giving the highest accuracy of confidence flag classification, and Fig. 7 shows the histogram of the estimation error of the release point. Outputs of UNIDENTIFIED, i.e., the decision when the confidence flags for both methods 1 and 2 are LOW, are removed from the results. From the results, the largest error is −107 ms, and 21.0% of the estimated release points have an error within ±1 ms. The mean absolute error is 6.28 ms. UNIDENTIFIED was output 19 times out of 100 trials, all of which occurred for subject G. The estimation error for subject G was large even when α was set to as much as 53 ms, resulting in 19 out of 20 samples having the LOW confidence flag. From the table, subjects E and G showed larger estimation errors than subjects D, F, and H for both methods 1 and 2. This is considered to be because subjects E and G had more than three years of experience playing basketball and their free throw motions were flexible, while the other subjects performed stable wrist movements.

Environment

We evaluated the performance of estimating the timing of releasing darts.
Data on the throwing action were captured 30 times from each of three subjects I, J, and K (all right-handed males) by attaching a wireless sensor to their dominant hand through the proposed system; a total of 90 samples were collected. As an indicator of the performance, we measured the error of the estimated motion time, which is the difference between the estimated release time and the exact release time. The experiment was video-recorded with a 960 fps high-frame-rate camera. We examined the video and added the release time to the sensor data. Acceleration and angular velocity data were collected at 1000 Hz. We estimated the release time for each subject independently in a leave-one-sample-out cross-validation manner. Table 3 shows the threshold α giving the highest accuracy of confidence flag classification, and Fig. 8 shows the histogram of the estimation error of the release point. Outputs of UNIDENTIFIED, i.e., the decision when the confidence flags for both methods 1 and 2 are LOW, are removed from the results. From the results, the largest error is −107 ms, and 20.0% of the estimated release points are within ±1 ms. The mean absolute error is 4.51 ms. No UNIDENTIFIED was output for the dart throw data.

Discussion

We evaluated the performance of the proposed system through experiments in which 11 subjects attached a wireless sensor to their wrist and performed baseball pitching, basketball free throws, and dart throws. The time resolution of the human eye is about 50 ms. (24) From this point of view, an error of 50 ms can be taken as one requirement. For baseball pitching, all the estimation errors were within ±50 ms, and the best results among the three gestures were obtained. For basketball free throws, 96% of errors were within ±50 ms, and for dart throws, 99% of errors were within ±50 ms. The reason that the error of the estimated release point was highest for basketball free throws is considered to be that the subjects who had basketball experience performed more varied throws; they flexibly adjusted their way of throwing according to the angle and distance to the goal. In addition, the ball size may also have affected the results: a basketball is larger than a baseball, meaning that it takes more time for the ball to leave the fingers. Since the sensor is attached to the wrist, the sensor data hardly change while the ball is being released from the fingers. When estimating the basketball release point, the sensor data at the release point in the training data therefore matched a larger number of candidate points in the test data than for baseball, so the release point was not always estimated correctly. Furthermore, when the proposed method estimates the time of the release point, the frame where the fingers begin to separate from the ball, rather than the frame immediately before the fingers completely separate from the ball, was often chosen. This probably explains why the histograms of the release point estimation error were negatively biased. The estimation accuracy for darts is considered to be lower than that for baseball because the movement of a dart released from the pinched state is less stable than that of a baseball. However, since darts are smaller than a basketball, the time for a dart to separate from the fingers is shorter, so the estimation error is considered to be smaller than that for the free throw motion.

Limitations

Lastly, we describe the limitations of the proposed method.
With regard to the threshold α: in the evaluation experiments, α was determined for each gesture and subject from the training data, and these values differed across gestures and users, so the reusability of α is low. We assume that there are several possible ways of collecting training data, such as using a ball with a built-in sensor, or using a video game remote to throw a virtual ball while holding down a button and releasing the button at the moment the ball is released. If it is not possible to collect training data with motion occurrence times from the user himself/herself, training data with motion occurrence times can be collected from multiple users for each gesture in advance and a general value of α used, although the estimation error would then be larger. In the evaluation, the models were built for each subject, and several subjects showed a large maximum error. This is an issue of the stability of the subject's movements, as mentioned for basketball. If there is a large difference between the training and testing data, the estimation error becomes large. We consider this a limitation of the proposed method, because gesture recognition is generally erroneous if the training data and testing data come from different users. (23) The number of training data needs to be increased to cope with this problem. It is also possible to reduce the number of outputs with a large error by setting a strict threshold value; however, in that case, even outputs with a small error will be treated as undecidable (UNIDENTIFIED), and recall will be reduced. With respect to the types of motion that can be detected by the proposed method, this paper focused on a characteristic moment, i.e., the release point, in baseball, basketball, and darts. The proposed method can estimate the time of any unique moment of action that occurs during a gesture. However, we have not been able to determine the extent to which the proposed method is applicable when there is no such unique moment. For example, it is theoretically difficult for the current method to estimate the time of a single point within a period of stationary state. The types of behavior that can be estimated by the proposed method should be verified in future work.

Conclusion

In this paper, we proposed a method of estimating the time of the moment when a particular action occurs during a gesture. A motion sensor attached to the user's wrist measures the acceleration and angular velocity during the movement, and the system estimates the time at which a specific movement occurs. We estimated the release points for three types of movement, baseball pitching, basketball free throws, and dart throws, using the proposed method; taking an estimation error of less than the sampling interval of the sensor as the criterion, the estimation accuracy was 61.9% for baseball, 21.0% for basketball, and 20.0% for darts. The percentage of release point estimation errors within ±12 ms was 100% for baseball, 87.6% for basketball, and 91.1% for darts. In the future, we will focus on wrist movements to extract specific movements in a gesture and evaluate the estimation accuracy. Furthermore, we will propose a method of detecting multiple specific actions in a gesture, which will expand the range of gesture recognition.
Mathematical Models of Receptivity of a Robot and a Human to Education

The paper gives general definitions from the mathematical theory of emotional robots able to forget older information. A formalized concept of the relative receptivity of the robot to education is introduced. An algorithm for a voice training program for public speakers described in the paper is based on the theory of emotional robots. The paper also presents a method of estimating a coefficient of human emotional memory and estimating the relative receptivity of a robot and a human to education; the method is based on application of the voice training program.

Introduction

According to forecasts, by 2018 the world market for humanoid robots was expected to reach 25.5 billion dollars. For the process of building such robots it is important to develop mathematical tools and software simulating the "emotional" sphere of functioning of human-like robots. Suppose the robot experiences emotions.

Methods

Assume the robot's emotion takes the form of a certain integrable function of time (1), where t is the current time of the robot's education. The current time satisfies the relation t = (i − 1)T + τ, where τ is the current time of effect of the current emotion from the beginning of its manifestation, T is the duration of one step, and i is the serial number of the emotion. According to (1) we can write down a formula defining the robot's education at the end of the i-th step [2]:

R_i = θ_i R_{i−1} + r_i(T),   (2)

where r_i is the elementary education produced by the i-th emotion and θ_i is the corresponding memory coefficient.

Definition 3. Emotions initiating equal elementary educations at the end of the time step are called tantamount emotions.

Definition 4. A uniformly forgetful robot is a forgetful robot whose memory coefficients corresponding to the end time points of each emotion are constant and equal.

Assume that for tantamount emotions of the uniformly forgetful robot, at the end of each step, r_i(T) = q and θ_i = θ. Then, according to the formula for the sum of the first i terms of a geometric series,

R_i = q (1 − θ^i) / (1 − θ).   (3)

Relation (3) implies that, as i grows, R_i tends to q / (1 − θ). This limiting value is the robot's limiting education. Obviously (3)-(5) are true only when the robot experiences emotions continuously, one after another. But the robot may have a break in experiencing emotions, in which case the robot forgets its last education. The following definition is introduced to describe this process.

Definition 5. A dummy step is a time interval during which the robot's education decreases by a factor of θ.

A real educational process of the robot can obviously be approximated by the education process of the uniformly forgetful robot with tantamount emotions. Let us consider an example. Assume the values of the robot's education are given at the end of each step and dummy step, and the robot's memory coefficient θ is also given. To estimate the educational-process parameter q of the uniformly forgetful robot with tantamount emotions it is enough to solve the following optimization problem: minimize over q the sum of squared deviations between the modeled education values and the given ones (6). Applying the standard method of finding the extremum of a single-variable function (the objective is quadratic in q) yields a closed-form expression for q, which is the solution of Problem (6). For an alternating series "steps - dummy steps - steps," the education of the uniformly forgetful robot with tantamount emotions takes the form

R(i, j, k) = q (1 − θ^i) / (1 − θ) · θ^{j+k} + q (1 − θ^k) / (1 − θ),   (7)

where i is the number of steps in the first series, j is the number of dummy steps in the second series, and k is the number of steps in the third series. [3] is the first to introduce models of receptivity to education ε and relative receptivity to the robot's education α.
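Before turning to receptivity, the recursion and series formulas above can be illustrated with a minimal sketch (our own, with arbitrary q and θ) that simulates the education of a uniformly forgetful robot with tantamount emotions over any sequence of steps and dummy steps.

```python
def education(q, theta, schedule):
    """schedule: sequence of 'step'/'dummy' events; returns R after each event."""
    R, history = 0.0, []
    for event in schedule:
        if event == "step":
            R = theta * R + q          # R_i = theta * R_{i-1} + q, as in (2)
        else:                          # dummy step: education decays by theta
            R = theta * R
        history.append(R)
    return history

# i = 5 steps, j = 3 dummy steps, k = 4 steps, as in relation (7):
hist = education(q=1.0, theta=0.8,
                 schedule=["step"] * 5 + ["dummy"] * 3 + ["step"] * 4)
print(round(hist[-1], 4))              # matches (7); tends to q/(1-theta) = 5
```

With q = 1 and θ = 0.8, the final value agrees with relation (7), and with continued steps the education approaches the limiting education q / (1 − θ).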
If the condition q > 0 is satisfied, then according to [3] the receptivity to education ε of the uniformly forgetful robot with tantamount emotions satisfies a relation expressed through q, θ, and the education level at which the robot is considered to have memorized its last education, the latter defined by proximity to the limiting education; here q is the elementary education of robots with tantamount emotions, q > 0, and θ is the robot's memory coefficient. According to [3], the relative receptivity to education can then be written down as equality (8). It is easy to see that the relative receptivity to education α is a dimensionless quantity, α ∈ (0, 1], and the smaller α is, the worse the robot's receptivity to education. Suppose that in the third round of the series of steps and dummy steps (the second series of steps) the robot memorized the formerly received education. Using (7) and (8) we obtain the corresponding expression for α. Below we describe practical applications of the obtained relations.

Let us dwell on the determination of the memory coefficients of a human whose analog is the emotional robot. For this purpose we use the well-known software system Vibraimage-7 developed by the ELSYS enterprise (St. Petersburg, Russia) [4]. Vibraimage-7 is a software system for analyzing the psychophysiological and emotional condition of a person. On the basis of microvibrations of the person's head, read by a webcam connected to the computer, this software system is able to determine his or her emotional condition, expressed by a value in the range from 0 to 100. For measuring memory coefficients, the examinee is placed in an isolated room with the webcam. The computer with the program system is installed in the room next door. The examinee is placed opposite the webcam. During the experiment this person is supposed to be relaxed and not to think about anything. The rest of the instructions are also very simple: the examinee is to look at the webcam for about 2 minutes while the program is operating and until the operator tells him or her that the experiment is over. After the examinee confirms that he or she is ready, the supervisor of the experiment gives a command to start the experiment and goes out of the room with the webcam to activate Vibraimage-7. Thus, the examinee spends two minutes in the isolated room without external irritants while the program system is working. The experiment takes 2 minutes, and data on the examinee's emotional condition are read at one-minute intervals. When the program cycle is done, the supervisor comes into the room to notify the examinee that the experiment is over. So, in the course of the experiment we obtain two readings of the examinee's education values, which reflect the emotional condition of the examinee varying with the course of time.
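A minimal sketch of how such a pair of readings yields the memory coefficient under education model (2) with no stimuli follows (the readings below are hypothetical).

```python
def memory_coefficient(r1, r2):
    """r1, r2: emotional-condition readings (0-100) taken at t1 < t2.
    With r_i(tau) = 0, the education decays by theta per step, so
    theta = R(t2) / R(t1)."""
    if r1 <= 0:
        raise ValueError("first reading must be positive")
    return r2 / r1

theta = memory_coefficient(r1=42.0, r2=31.5)   # hypothetical readings
print(theta)                                   # 0.75
```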
Assume that the equivalent of the examinee's emotional condition measured by means of Vibraimage-7 is the robot's education. Then, during the experiment we obtain two values of education, R(t_1) and R(t_2). Considering that the examinee was not impacted during the procedure, based on these first two values and education model (2) with r_i(τ) ≡ 0 we can find the memory coefficient θ = R(t_2)/R(t_1). Suppose we need to model the emotional behavior of a robot interacting with a person who affects the robot by a signal injection, for example, by means of a microphone built into the robot. Suppose the emotional stimulus for the robot is the volume of sound [5]. Thus, it is necessary to define the dependence between the robot's emotions, arising in the course of its interaction with the examinee (person), and the volume of the sound signal generated by that person to affect the robot. To define the dependence between the human's emotions and the sound volume, we developed a computer program describing the following situation: "only one robot and one human are involved in the interaction; the robot has to respond emotionally to the sound impact (audio signal) generated by the human." In [6] we can find the description of the SoundBot program [7] simulating the mimic emotional response of the robot to audio stimuli. According to the description of the program's functionality, it can be used by public speakers for voice training. In this program, the speaker is listened to (and evaluated) by a robot with a non-absolute memory [1], which is capable of responding emotionally to the speaker's performance similarly to the emotional reaction of a human listener. Thus, the voice training technique reduces to the following steps:

1) Set the upper and lower thresholds (bounds) of the robot's positive emotion, defining the voice volume range within which the voice is to be trained.
2) Start the process of training the speaker. In the course of this process the robot receives audio stimuli until i positive emotions are generated, going sequentially one after another.
3) Human-robot interaction is interrupted for a period of j dummy steps.
4) The speaker's voice is tested until the robot responds with a first positive emotion; this period takes k steps.

On the basis of the methods described above we performed a series of experiments on voice training of several speakers with previously determined memory coefficients. The results of these experiments and the corresponding values of the relative receptivity α of the robots to education are presented in Table 1.

Conclusions

According to [6], the psychological parameters described above for robots are to be assumed as approximate psychological characteristics of humans; therefore the robot's relative receptivity to education can be assumed equal, to a first approximation, to the human's relative receptivity to education. This can help when modeling humanoid robots as psychological analogs of humans. Thus, the paper presents mathematical models of the characteristics of robots' receptivity to education and a method of approximate calculation of these estimates for a human and a robot, using voice training as an example. The considered methods of calculating a robot's receptivity to education can be applied to the estimation of the vocal abilities of deaf and hearing-impaired children; they can also facilitate the adaptation of actors to an auditorium in which they are to perform.
The presented methods have been tested and approved, so they can be accepted in the relevant field and applied within a rather short period.

Notes. Here T is the step, i.e., the time interval equal to the duration of an emotion, and i is the serial number of an emotion experienced by the robot. Several definitions are introduced in [1]: Definition 1 specifies the robot's elementary education r(τ) as a function of a given form, and Definition 2 specifies the coefficients θ_i of the robot's memory. It should be noted that the robot's memory coefficients determine the part of the former (previous) education of the robot that is remembered by the latter.

Table 1. Memory coefficients and relative receptivity to education. Analyzing the table, it is possible to conclude that a bigger memory coefficient of the robot corresponds to a bigger relative receptivity to education (except for line 5).
NOAA and BOEM Minimum Recommendations for Use of Passive Acoustic Listening Systems in Offshore Wind Energy Development Monitoring and Mitigation Programs

Offshore wind energy development is rapidly ramping up in United States (U.S.) waters in order to meet renewable energy goals. With a diverse suite of endangered large whale species and a multitude of other protected marine species frequenting these same waters, understanding the potential consequences of construction and operation activities is essential to advancing responsible offshore wind development. Passive acoustic monitoring (PAM) represents a newer technology that has become one of several methods of choice for monitoring trends in species presence and the soundscape, mitigating risk, and evaluating potential behavioral and distributional changes resulting from offshore wind activities. Federal and State regulators, the offshore wind industry, and environmental advocates require detailed information on PAM capabilities and techniques to promote efficient, consistent, and meaningful data collection efforts on local and regional scales. PAM during offshore wind construction and operation may be required by the National Oceanic and Atmospheric Administration and the Bureau of Ocean Energy Management through project-related permits and approvals issued pursuant to relevant statutes and regulations. The recommendations in this paper aim to support this need, as well as to aid the development of project-specific PAM Plans, by identifying minimum procedures, system requirements, and other important components for inclusion, while promoting consistency across plans. These recommendations provide an initial guide for stakeholders to meet the rapid development of the offshore wind industry in United States waters. Approaches to PAM and agency requirements will evolve as future permits are issued and construction plans are approved, regional research priorities are refined, and scientific publications and new technologies become available.

INTRODUCTION

Rapid global economic growth has contributed to today's increasing demand for energy. The development of alternative renewable and clean energy sources, such as solar, wind, and hydrogen energy, has become a priority as countries seek to expand their use of renewable energy sources and meet goals to reduce greenhouse gas emissions (Leung and Yang, 2012). Among these many renewable resources, offshore wind energy development offers rapidly evolving technological approaches, promising commercial prospects, and large-scale electricity generation, as demonstrated in Europe. The speed and manner in which coastal nations pursue offshore renewable energy development has varied dramatically in the past (Portman et al., 2009), and the United States (U.S.) is now poised to rapidly develop offshore wind leases throughout the Atlantic Outer Continental Shelf (OCS), as well as the Pacific and the Gulf of Mexico. A recent White House statement announced that, in order to position the domestic offshore wind industry to meet its target of deploying 30 gigawatts of offshore wind by 2030, the Department of the Interior's Bureau of Ocean Energy Management (BOEM) plans to advance new lease sales and complete review of at least 16 Construction and Operations Plans (COPs) by 2025, representing more than 19 GW of new clean energy (Office of the Press Secretary, 2021).
The main environmental concerns related to offshore wind development for marine animals center on construction and operations: increased noise levels, behavioral changes, displacement from important biological areas such as feeding grounds, risk of vessel collisions, changes to benthic and pelagic habitats, alterations to food webs, and pollution from increased vessel traffic or the release of contaminants from seabed sediments (e.g., Tougaard et al., 2009; Bailey et al., 2014). The potential effects of offshore wind energy on protected marine species are regulated primarily by the National Oceanic and Atmospheric Administration (NOAA) under the Endangered Species Act, Marine Mammal Protection Act, National Environmental Policy Act, National Marine Sanctuary Act, and the Energy Policy Act, and by BOEM through the issuance of leases and approval of COPs. During activities where potential adverse effects may occur to marine species, including marine mammals, a combination of visual surveying and passive acoustic monitoring (PAM) may be required to record information on species presence and behavior to inform BOEM and NOAA mitigation requirements aimed at minimizing potential effects. PAM may also be required during operations to record ambient noise levels and to monitor noise impacts on marine species. In addition, PAM is a proven method for monitoring calling species to gain ecological context before, during, and after offshore wind development activities (site characterization, construction, operations, and decommissioning). The inclusion of PAM alongside visual data collection is valuable for providing the most accurate record of species presence possible. In the case of data collection for mitigation, visual observers add to the detection probability of the species of interest, which is ideal when aiming for a 100% detection rate. In the case of data collection for monitoring, visual surveys and PAM can be seen as orthogonal and complementary methods: PAM provides long, continuous time series with low spatial resolution, while visual surveys provide snapshots with low temporal resolution but high spatial resolution. Just as visual observations can be limited by poor weather and light conditions, PAM systems also have limitations, such as when animals are not calling. Visual and PAM approaches are well understood to provide the best results when combined (e.g., Barlow and Taylor, 2005; Clark et al., 2010; Gerrodette et al., 2011). However, in these recommendations we focus solely on the applications and uses of PAM. Passive acoustic monitoring encompasses a functional suite of technologies that can answer scientific questions and inform management and/or mitigation decisions over long temporal and large spatial scales (Rountree et al., 2006; Van Parijs et al., 2009; Marques et al., 2013; Gibb et al., 2019). The tools that are available to acquire and analyze passive acoustic data have undergone a revolutionary change over the last couple of decades and have substantially increased our ability both to collect extensive time series and to apply PAM as a functional management tool (e.g., Mellinger et al., 2007; Luczkovich et al., 2008; Van Opzeeland et al., 2008; Zimmer, 2011; Sugai et al., 2019; Desjonquères et al., 2020).
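The complementarity argument above can be illustrated with a small sketch (our own, with illustrative probabilities and a simplifying independence assumption): if each method misses animals independently, combining them raises the overall detection probability.

```python
def combined_detection(p_visual, p_pam):
    """Probability that at least one of two independent methods detects
    an animal: 1 minus the probability that both methods miss it."""
    return 1.0 - (1.0 - p_visual) * (1.0 - p_pam)

# Illustrative (not measured) per-method detection probabilities:
print(combined_detection(0.6, 0.7))   # 0.88, higher than either method alone
```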
PAM platforms include moored recording buoys, autonomous underwater or surface vehicles (AUVs/ASVs), profiling drifters, and towed hydrophone arrays (Figure 1), which can be strategically located to provide real-time information for immediate mitigative decision-making, monitor or assess the effects of specific activities, and gather continuous archival recordings for long-term monitoring, periodic evaluation, and adaptive management (Van Parijs and Southall, 2007; Van Parijs et al., 2009). PAM allows a broad spectrum of data to be collected, including all calling marine animal species within recording range, different call types, distributions and occupancy, individual calling locations, and the abundance of some species, as well as anthropogenic and other natural sounds, collectively known as an underwater "soundscape" (e.g., Van Parijs et al., 2009; Marques et al., 2013; Mooney et al., 2013; Baumgartner et al., 2018; Figure 2). Although our primary focus is on marine mammal mitigation and monitoring for offshore wind applications, the PAM techniques mentioned here can also be used to characterize soundscapes, monitor ambient noise levels, and provide essential information on other soniferous species such as fishes (e.g., Zemeckis et al., 2019; Caiger et al., 2020). As offshore wind development expands across regions, PAM data increase in utility when collected with standardized methods and analyzed using similar techniques. Given the value of PAM data, especially for future permit requests, authorizations, and research, these recommendations also contain information on standardizing data collection methods, processing and analyses, archiving acoustic recordings, and data products, as well as steps toward making these products publicly available. Several previous workshops have started the discussion on improving standards for PAM data collection, data analyses, and archiving (BOEM, 2018; Gulka and Williams, 2018; Kraus et al., 2019; POWER-US, 2019; NYSERDA, 2020; BOEM, 2021; WCS, 2021), and standards are increasingly being documented in the Oceans Best Practices Repository 1. Our recommendations build on these previous efforts.

FIGURE 1 | The illustration shows examples of different types of acoustic technologies. From left to right, the illustration shows a moored surface buoy, wave glider, SoundTrap on the seafloor, bottom-mounted acoustic recorder (High-frequency Acoustic Recording Package [HARP]), Slocum glider, NOAA ship towing a hydrophone array, tagged Atlantic cod, humpback whale with an archival tag, drop hydrophone deployed from a small boat, and an autonomous, free-floating acoustic recorder (Drifting Acoustic Spar Buoy Recorder [DASBR]). The different technologies are highlighted with colored circles that show a zoomed-in view of the instruments, and colors represent the type of data collected: green for real-time data capabilities, orange for archival data, and blue for active acoustics.

Project-specific PAM Plans, developed by project proponents and approved by Federal agencies, should include descriptions of the equipment, procedures (deployment, retrieval, detection, and analyses), ISO data quality standards, and protocols that will be used for monitoring and mitigation. In the United States, PAM specifications for inclusion in a PAM Plan will need to be developed in consultation with NOAA and other permitting agencies, such as BOEM.
To design a PAM Plan, the following six topics need to be addressed: species of interest, PAM system types, PAM recording technologies, PAM study design, PAM system requirements, and PAM data archiving and reporting.
Species of Interest
Prior to designing any PAM Plan, it is essential to identify and understand the acoustic frequency ranges of the sound sources that are of interest and in need of monitoring (Figure 3). Unlike in southern North Sea waters, where only a handful of marine mammal species require consideration, most United States waters are frequented by a large number of protected species (Jefferson et al., 2011). In the case of the Atlantic OCS, where offshore wind energy development will initially occur, the primary baleen whale (mysticete) species of concern include: North Atlantic right whales (Eubalaena glacialis), humpback whales (Megaptera novaeangliae), minke whales (Balaenoptera acutorostrata), fin whales (Balaenoptera physalus), blue whales (Balaenoptera musculus), and sei whales (Balaenoptera borealis). These species are low-frequency sound producers (i.e., most of the acoustic energy is below 1 kilohertz [kHz]), and therefore all PAM recording technologies, PAM system requirements, and PAM designs need to be constructed with these frequency requirements and specific call types in mind (Table 1). Other species of interest for this region are the higher frequency producing toothed whale (odontocete) species, such as sperm whales (Physeter macrocephalus), beaked whales (Ziphiidae), pilot whales (Globicephala spp.), dolphins (Delphinidae), and ultrahigh-frequency harbor porpoises (Phocoena phocoena). The frequency ranges for these species, and the need for additional PAM recording technology, PAM system requirements, and PAM design, need to be considered when creating a PAM Plan (Table 2). Additional species of interest are acoustically active fishes or invertebrates, for which a combination of PAM and acoustic telemetry can be used to delineate the temporal and spatial extent of spawning grounds (e.g., Ingram et al., 2019; Zemeckis et al., 2019). Fish are generally low-frequency sound producers, with most species' core frequency occurring below 1 kHz (Figure 3).
FIGURE 2 | This conceptual illustration shows images of anthropogenic (human-created), biological (marine animal), and abiotic (environmental) sources of sound and approximately proportional sound waves, making up an ocean soundscape. The sound sources include weather, earthquakes, snapping shrimp, harbor seal, Atlantic cod, right whale, sperm whale, common dolphins, fishing vessel, shipping vessel, seismic survey ship, and wind farm development. The sound waves are represented by overlapping colored circles that indicate the type of sounds: human-made sounds are orange, animal sounds are light blue, and environmental sounds are light green. The circles increase in size to show the approximate magnitude of sound waves and distances noise travels underwater.
PAM System Types
Here, we divide the PAM approaches into two different system types of data collection: archival and real-time data collection (Figure 1). Both the scientific objectives and the specifications for data management differ depending on the data collection methods. These approaches may have distinct applications for either mitigation or long-term monitoring, or have utility for both applications.
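Before turning to system types, note that the species frequency ranges identified under "Species of Interest" above translate directly into a minimum recorder sampling rate via the Nyquist criterion. The following is a minimal Python sketch of that calculation; the band edges listed are illustrative placeholders rather than values taken from Tables 1 and 2, and the safety margin is an assumption.

```python
# Minimal sketch: choose a recorder sampling rate from the call bands of the
# target species using the Nyquist criterion (sample rate >= 2 x highest
# frequency of interest). Band edges below are illustrative placeholders only.

CALL_BANDS_HZ = {
    "fin whale (20-Hz pulses)": (15, 30),
    "North Atlantic right whale (upcall)": (50, 350),
    "humpback whale (song)": (30, 8_000),
    "dolphins (whistles)": (2_000, 20_000),
    "harbor porpoise (clicks)": (100_000, 160_000),
}

def required_sample_rate(species: list[str], margin: float = 1.2) -> int:
    """Return a sampling rate covering all listed species.

    A safety margin above the strict Nyquist rate is applied because
    anti-aliasing filters roll off below fs/2.
    """
    f_max = max(CALL_BANDS_HZ[s][1] for s in species)
    return int(2 * f_max * margin)

# A plan targeting only low-frequency baleen whales needs a far lower rate
# (and therefore less storage) than one that must also capture porpoise clicks.
print(required_sample_rate(["North Atlantic right whale (upcall)"]))   # 840 Hz class
print(required_sample_rate(list(CALL_BANDS_HZ)))                       # 384 kHz class
```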
Archival PAM Systems
Archival PAM recordings are primarily used for long-term monitoring, with capabilities for recording durations ranging from several weeks up to several years (Sousa-Lima et al., 2013). Continuous recordings provide an uninterrupted record of species' acoustic presence, allowing investigators to evaluate species distribution and occurrence, and changes in animal calls, which provide information on behavioral state (e.g., foraging, reproduction, socializing) in a given area or region. In addition, they allow for the evaluation of seasonal, inter-, and intra-annual variation in species presence and occurrence over time. Alternatively, recordings can be duty-cycled (defined as the fraction of time that a PAM system is actively recording) to maximize recording duration at sea while limiting equipment interactions (e.g., retrieval to swap out hard drives or batteries). Duty-cycled data are less preferred, as inevitably some information is lost and biases are introduced by using a reduced recording schedule. If the duty-cycle listening period and recording interval are not appropriately matched to the duration and timing of animal calls, potential detections may be missed and species occurrence underestimated (Miksis-Olds et al., 2010; Sousa-Lima et al., 2013; Thomisch et al., 2015; Stanistreet et al., 2016). For example, in Thomisch et al. (2015), duty-cycling at 50 and 2% showed a decrease in accuracy in both acoustic presence and call rate estimates. If it is necessary to duty-cycle, frequent and shorter recording periods may improve accuracy of daily acoustic presence. Duty-cycling effects are most pronounced for species with low and/or temporally clustered calling activity (Thomisch et al., 2015), i.e., non-song call types such as those of the North Atlantic right whale, and are less pronounced for species that click over long time intervals, such as beaked whales (Stanistreet et al., 2016). For higher frequency species, a PAM click detector recorder can be an efficient method for data collection (e.g., Bailey et al., 2010a; Temple et al., 2016; Wingfield et al., 2017). Note that the core detection bandwidth is usually less than the full bandwidth of all vocalizations within a given species' repertoire, since the full frequency range is not always needed to successfully detect every species. The click detector recorder stores continuous higher frequency clicks of delphinids, harbor porpoises, and other high-frequency odontocetes but does not provide a full sound record as is the case for most other recorders. Archival PAM systems are often moored near the seabed with no surface expression and are returned to the surface by divers or by using an acoustic release mechanism. Acoustic data are therefore only recovered and analyzed at the end of the recorder deployment. Consequently, the analyses that are conducted will be retrospective and not real time. However, the need to wait for data records until retrieval can be resolved by using archival PAM systems with surface expression, such as those used by Brandt et al. (2018), which allow for more frequent data collection.
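The duty-cycle trade-off described above can be made concrete with a small simulation: place calls over a day, apply a recording schedule, and ask whether daily acoustic presence would still be established. A minimal sketch, assuming calls scattered uniformly over the day (real calling is often clustered, which worsens the losses, as noted above); the calling rates and schedules are illustrative.

```python
import random

def day_detected(n_calls, on_s, off_s, rng):
    """Daily acoustic presence: is at least one of n_calls, placed uniformly
    at random over 24 h, inside an 'on' window of the duty cycle?"""
    period = on_s + off_s
    return any(rng.uniform(0, 86_400) % period < on_s for _ in range(n_calls))

def presence_accuracy(n_calls, on_s, off_s, n_days=2_000, seed=1):
    """Fraction of truly call-positive days still scored as present."""
    rng = random.Random(seed)
    return sum(day_detected(n_calls, on_s, off_s, rng)
               for _ in range(n_days)) / n_days

# Compare a 50% duty cycle (30 min on / 30 min off) with a 2% cycle
# (30 s on / ~24.5 min off), echoing Thomisch et al. (2015): the accuracy
# penalty is largest for sparse callers.
for calls_per_day in (2, 10, 50):
    print(calls_per_day, "calls/day:",
          round(presence_accuracy(calls_per_day, 1800, 1800), 2), "@50%,",
          round(presence_accuracy(calls_per_day, 30, 1470), 2), "@2%")
```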
Archival data can be useful to build long-term monitoring records of the presence of sound producing species, both temporally (seasonal and yearly occurrence) and spatially (occurrence in and across different regions). These data can also be valuable for evaluating potential effects of construction, as changes in species presence and behavior can be correlated with construction activities.
Real-Time PAM Systems
Real-time PAM systems that enable rapid detection and recognition of marine mammal calls are invaluable for monitoring and are essential components for mitigating potential effects from wind energy development. Real-time acoustic alerts can be used to respond quickly to the presence of protected species in a construction area (e.g., during impact pile driving, as long as multiple construction noise sources do not mask their presence) or in the vicinity of transiting vessels, thereby reducing the risk of vessel strike (e.g., Spaulding et al., 2009; Baumgartner et al., 2019, 2020, 2021; Norris et al., 2019; Kowarski et al., 2020; Wood et al., 2020). Real time is defined here as the relay of PAM data (processed or raw) within an operationally usable time span (e.g., data relay frequency may range from every minute, to hourly, to daily, depending on how quickly the information is needed for decision-making). In effect, any data from the acoustic detection can be used to optimize, or at least provide, timely information to help direct current operations and/or tracking of a species (Klinck et al., 2012; Baumgartner et al., 2013, 2019, 2020; Kowarski et al., 2020). Real-time PAM can also be used to monitor and adjust the noise produced by pile driving, by offering real-time feedback on the noise produced by the hammer and the capacity to adjust this if needed. Real-time PAM can be conducted from a variety of platforms, including vessels, surface buoys, autonomous vehicles such as gliders, and drifting buoys. The PAM data travel from a PAM recording sensor to the receiving station on shore or on a vessel at regular time intervals agreed upon in the PAM Plan. The frequency of data relay is constrained by the type and cost of data upload. Cell phone towers can be used for data relay if sensors are located close to land, as reception tends to be lost beyond 15 miles offshore. Iridium satellite data transmission (currently the most common type) costs are based on the quantity of data, the frequency of upload, and the selected data service plan. Another possible option is cabled arrays laid out on the seafloor, which can provide a real-time data feed straight to shore where such an installation is feasible [e.g., Lindsey et al. (2019)].
PAM Recording Technologies
There is an ever-increasing number of PAM technologies, varying in recording and data collection capabilities, available to the broader science, management, and industry communities (Figure 1). The following represent several of the general categories of recording technologies currently available. The considerations and recommendations provided in the "PAM System Types" section above should be evaluated based on the chosen recording technology.
PAM Fixed, Bottom-Mounted Archival
Passive acoustic monitoring fixed, bottom-mounted archival recorders are moored on or near the ocean floor for several weeks to months (many recorders can now record continuously for 4 to 6 months), and up to several years.
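Deployment durations like those above are ultimately bounded by storage (and battery) budgets, which follow directly from sampling rate, bit depth, channel count, and duty cycle. A back-of-the-envelope sketch, with assumed example recorder settings:

```python
def storage_tb(sample_rate_hz, bit_depth, n_channels, days, duty_cycle=1.0):
    """Uncompressed storage needed for a deployment, in terabytes.
    Lossless compression (e.g., FLAC, as recommended for archiving)
    typically reduces this further; compression ratio is not modeled."""
    bytes_per_s = sample_rate_hz * (bit_depth / 8) * n_channels * duty_cycle
    return bytes_per_s * 86_400 * days / 1e12

# Assumed example: a single-channel, 16-bit recorder.
# Low-frequency baleen-whale monitoring at 2 kHz for 6 months:
print(round(storage_tb(2_000, 16, 1, 180), 3), "TB")      # ~0.062 TB
# Broadband recording at 384 kHz (porpoise clicks) for the same period:
print(round(storage_tb(384_000, 16, 1, 180), 2), "TB")    # ~11.9 TB
```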
They should be spaced at distances that encompass the estimated calling/detection radius of the species of interest when they are within, and in the vicinity of, the operating area. The detection radius will vary depending on a number of factors, including those influencing signal propagation (i.e., water depth and temperature, substrate type, noise levels), as well as source level and directivity differences in calls between individuals and species. An estimate of the minimum number of hydrophones needed for detection of the call types of the species of interest should be made, and hydrophones placed with these considerations in mind to minimize potentially missed portions of the operating or surrounding areas (Figure 4; Table 3). The percentage of area that is desirable or required to be covered by the hydrophone spacing needs to be considered. An example of such a design can be seen in Supplemental Information I, where we aim to have 100% coverage of the acoustic radius of a calling North Atlantic right whale individual within the lease block areas and 50% coverage outside of the areas. In addition, where required, multi-element bottom-mounted hydrophone arrays can be used to track the movements of calling individuals (e.g., Stanistreet et al., 2013).
FIGURE 4 | This figure provides a stylized example of the average acoustic detection ranges of varying categories (A-E) over which different species groups and representative call types for each can be heard. Detection range A covers 0.1 km and includes sounds such as Atlantic cod (Gadus morhua) grunts. Detection range B covers up to 0.5 km and includes harbor porpoise (Phocoena phocoena) clicks and other fishes. Detection range C covers up to 6 km and generally includes dolphin species whistles. Detection range D covers up to 10 km and includes vocalizations of baleen whale species such as North Atlantic right whale (Eubalaena glacialis) upcalls and sperm whale (Physeter macrocephalus) clicks. Detection range E covers a range from 20-200 km and represents vocalizations of other baleen whale species such as blue whale (Balaenoptera musculus) and fin whale (Balaenoptera physalus) song. Circle sizes are not to scale.
PAM Fixed Surface Buoys, Real Time and Archival
Passive acoustic monitoring surface buoys are a valuable technology used both for relaying real-time information at regular intervals and for collecting long-term archival data from a single location. They have been used effectively for monitoring and for mitigation purposes, spaced at appropriate listening distances for the species of interest, in busy shipping lanes as well as in numerous other areas, including prospective wind leases (Table 3; Supplementary Information II). As there is a connection to the surface, noise produced by the mooring/flotation system must be carefully considered in the design phase. In the cases where these buoys have been used successfully, the surface recorders were anchored using a special mooring system constructed to reduce noise (e.g., Baumgartner et al., 2019). PAM moored surface buoys are essential for the purpose of real-time mitigation, but they are also extensively used for long-term monitoring of species presence. PAM real-time buoy systems should be placed with similar listening distances in mind as for PAM bottom-mounted recorders. However, depending on the use of the platform and requirements, the number of moored surface buoys may differ.
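A first-pass recorder count for a desired coverage fraction can be estimated from the detection radius alone. The sketch below uses an idealized circle-covering argument and illustrative numbers (a hypothetical 600 km² lease block; the 10 km radius echoes detection range category D in Figure 4); a real design would model propagation variability and mooring constraints explicitly.

```python
import math

def min_recorders(area_km2, detection_radius_km, coverage_fraction=1.0,
                  covering_overhead=1.21):
    """Rough lower bound on the number of recorders needed so their
    detection circles cover the requested fraction of an area.

    covering_overhead ~1.21 is the ideal hexagonal covering density
    (circles must overlap to leave no gaps); real layouts need more.
    The detection radius itself is site- and species-specific, varying
    with depth, temperature, substrate, noise, and call source level.
    """
    circle_area_km2 = math.pi * detection_radius_km ** 2
    needed = area_km2 * coverage_fraction * covering_overhead / circle_area_km2
    return math.ceil(needed)

# Illustrative numbers: full coverage of a hypothetical 600 km^2 lease block,
# plus 50% coverage of a 2000 km^2 surrounding buffer, with a 10 km
# detection radius for right whale upcalls.
print(min_recorders(600, 10))                          # 3
print(min_recorders(2000, 10, coverage_fraction=0.5))  # 4
```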
If the intent is to minimize ship strikes in a high vessel transit area, the PAM moored surface buoys should cover the vessel transit lanes (see example in Table 3 and Supplementary Information II). However, if the intent is to listen for the presence of endangered species in the vicinity of and within the wind lease construction area, then the number and spacing of buoys should reflect the effective listening area.
TABLE 3 | A list of the PAM platform types that are currently used for the collection of data for monitoring or mitigation purposes, the spatial scale over which they can collect data, the format and type of data collected, and the current applications in which they have been used to date.
PAM Autonomous Underwater Vehicles and Autonomous Surface Vehicles, Real Time and Archival
An Autonomous Underwater Vehicle is a robot that travels underwater without requiring input from an operator. Underwater gliders are a subclass of Autonomous Underwater Vehicles and can have a single PAM recorder or a PAM array placed inside or strapped to the outside of the vehicle. Autonomous Surface Vehicles are boats or ships that operate on the surface of the water without a crew and can similarly be equipped with PAM recording equipment. PAM Autonomous Underwater Vehicles are very versatile and increasingly used to monitor large spatial areas (with the capacity to cover 10s to 1000s of kilometers) over long time periods (generally from 3 to 6 months) and to relay back information either in real time or through archival recordings. They can be programmed to follow tracklines and navigate to new positions throughout a deployment, either diving up and down the water column to collect oceanographic data or following a straight path at a set depth. Gliders can be categorized into battery-operated gliders and wave-propelled gliders. The former have been demonstrated to be highly effective for real-time monitoring and mitigation for North Atlantic right whales and other baleen whale species (Table 3; Baumgartner et al., 2014, 2020; Kowarski et al., 2020). They have also been shown to be valuable in understanding the spatial distribution of toothed whales, such as beaked whales (Klinck et al., 2012). Wave gliders have had some success in archival monitoring of toothed whales (Küsel et al., 2017) but still require further development. In particular, the self-noise produced by the wave glider continues to limit the detection capability for low-frequency baleen whales (Baumgartner et al., 2021). Further technological development may be able to shield this noise in the future. Similarly, Autonomous Surface Vehicle technologies, such as sail drones and self-navigating vessels, are in development and will likely present new and innovative solutions to add to the current suite of real-time and archival solutions for PAM in the near future (Klinck et al., 2009; Mordy et al., 2017). Autonomous Underwater Vehicles are ideally suited for monitoring an area to inform BOEM and NOAA mitigation requirements. Their tracklines can be remotely piloted or redirected if needed while the instrument is out at sea. For example, if the glider locates an area of high vocal activity, it can be instructed to stay in place and only move on once the activity has decreased. Currents and wind can influence coverage of tracklines, and this needs to be taken into consideration during the design process (Figure 5).
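For glider trackline designs, the revisit interval follows from simple geometry: total trackline length at the chosen spacing divided by glider speed. A minimal sketch assuming an idealized rectangular lawnmower survey, no current set-back, and the nominal 3-knot speed quoted in the next paragraph; all dimensions are illustrative.

```python
def trackline_revisit_days(area_width_km, area_height_km,
                           line_spacing_km, speed_knots=3.0):
    """Days for one glider to complete a lawnmower survey of a rectangular
    area with parallel tracklines at the given spacing.

    Assumes straight parallel lines plus short cross-legs between them,
    and no loss to currents or wind, which the text notes can
    substantially reduce effective coverage.
    """
    n_lines = int(area_width_km / line_spacing_km) + 1
    total_km = n_lines * area_height_km + area_width_km  # lines + cross-legs
    speed_km_per_day = speed_knots * 1.852 * 24
    return total_km / speed_km_per_day

# Illustrative 30 x 30 km monitoring area. Spacing at twice a 10 km
# detection radius gives sparse, fast coverage; dense 5 km spacing for a
# small-area focus takes several times longer between revisits.
print(round(trackline_revisit_days(30, 30, 20), 1), "days (broad lines)")
print(round(trackline_revisit_days(30, 30, 5), 1), "days (dense lines)")
```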
Again, tracklines should be designed with consideration of the listening radius of the focal species and the total detection radius of the mitigation and monitoring area that is required. Gliders are slow moving instruments (∼3 knots) that provide point sample data. If the aim is to understand a small area in great detail, so as to not miss the presence of the species of interest, then dense tracklines that are surveyed frequently are recommended. If the intention is to cover a large area to detect individuals before an activity requiring mitigation occurs, or to inform BOEM and NOAA mitigation decisions over larger areas, then more broadly spaced tracklines are needed to cover the detection area in question. This will result in less frequent coverage of each trackline, unless several gliders are used. For broadly spaced tracklines, the detection radius and the duration it would take a glider to cover the entire desired mitigation area need to be considered when determining the confidence in monitoring of the detection area over a set time period, for example over 24-hour segments. In contrast to fixed recorders, the information that gliders can provide is sparse but spatially broad (Table 3).
PAM Drifters, Real Time and Archival
Passive acoustic monitoring drifters involve the deployment of acoustic recorders that have a single hydrophone and/or a vertical array suspended in the water column, depending on their design. They typically have a surface expression that allows for satellite tracking capabilities for subsequent recovery. Due to their surface expression, they can provide either real-time information or archival recordings, depending on what is needed. Their location and movement patterns are dependent on currents and wind. Unlike gliders, they cannot be repositioned or redirected remotely. This technology has not yet been extensively used or tested in many studies, but it shows promise for certain applications such as monitoring species presence, estimating abundance, and measuring ocean noise metrics (e.g., Barlow et al., 2014; Griffiths and Barlow, 2016; Fregosi et al., 2018). Incorporating vertical line arrays in PAM drifters can provide information on vertical bearing angles, and thus estimates of the depth of vocalizing animals (e.g., Griffiths and Barlow, 2016). Passive acoustic monitoring drifter placement and density can vary greatly depending on the monitoring goal and the local oceanic conditions. Currents and wind direction would need to be well understood, because they will affect each drifter independently. It is important to determine how the oceanographic conditions of the area will affect coverage of the monitoring region using these types of platforms. For example, if the site is near areas of high current, such as the Gulf Stream, the recorders may be quickly displaced when they reach that current. However, in a more sheltered, less dynamic environment, PAM drifters could provide good coverage. Drifters can be outfitted with satellite transmitters and so can be retrieved once their data collection period is over. In contrast to fixed recorders, the information that drifters can provide is sparse but spatially broad; however, unlike Autonomous Underwater Vehicles, they are constrained in the areas they cover by ocean currents, tides, and wind (Table 3).
Towed PAM Arrays, Real Time and Archival
When it is necessary to know the range, bearing, location, or depth of a vocalizing marine mammal (e.g., for real-time tracking or estimating abundance of individuals), an array of time-synchronized hydrophones is required (Thode, 2004; von Benda-Beckmann et al., 2010; Gillespie et al., 2013; DeAngelis et al., 2017). PAM arrays may be towed behind a vessel that is underway, but in this case flow noise can obstruct low-frequency sounds from being heard. Generally, lower speeds produce lower flow noise; however, speed must also be considered to maintain horizontal orientation of the towed array. This generally prevents towed arrays from being useful for monitoring low-frequency baleen whale species, unless the ability to record low-frequency sounds over the noise is clearly demonstrated. The Acoustical Society of America (ASA) is currently working to develop an American National Standards Institute-approved standard for towed hydrophone arrays. The fundamental goal of this ASA standard is to reduce situations where background noise levels prevent effective PAM. To achieve this goal, the standard employs a suite of strategies to standardize how acoustic measurements are logged, reported, and evaluated (Thode and Guan, 2019). Typical PAM arrays for monitoring purposes can be towed behind a vessel (or Autonomous Underwater Vehicle), in which case the tracklines to be covered by the vessel (or Autonomous Underwater Vehicle) with the array need to be designed with the listening radius of the system and species in mind (Table 3). Other types of arrays, such as bottom-mounted cabled arrays, are not discussed here because they are less frequently used, given the cost and infrastructure needed to lay and maintain them.
PAM Study Design
The study design is a critical component of any monitoring program and needs to be carefully defined. Both the study objectives and the capacity to address these objectives need careful consideration. Three basic questions need to be asked and addressed in any study design: Why monitor? What needs to be monitored? How should monitoring be carried out? (Yoccoz et al., 2001). PAM technology can be used to satisfy a wide range of monitoring and mitigation requirements (e.g., Bailey et al., 2010b; Forney et al., 2017; Brandt et al., 2018). In relation to wind energy development, there are several questions that most monitoring and mitigation programs need to address: does wind energy activity within and across multiple lease areas affect marine animal distribution, behavior, and communication space, and how can we reduce vessel strike risk and prevent exposure of marine animals to loud sounds during construction activities in the wind energy lease areas? To answer the first question, baseline acoustic data collection is essential in order to build an understanding of the inter- and intra-year variability of species presence in an area. Robust baseline monitoring allows for inference to be drawn as to the cause of any observed changes and whether they are a result of oceanographic, ecological, or climatological factors, or due to anthropogenic effects. Both large-scale and small-scale trends and changes in species distribution, occurrence, calling behavior (e.g., foraging, socializing, reproduction), and movements can be derived from archival PAM data collection. For examples of large-scale monitoring studies, for baleen whales see Davis et al. (2017, 2020); for toothed whales see Barlow and Taylor (2005), Verfuß et al.
(2007), Stanistreet et al. (2017), Carlén et al. (2018), and Stanistreet et al. (2018); and for fishes see Wall et al. (2012, 2013). For small-scale regional or area-specific monitoring studies, for baleen whales see Parks et al. (2007), Morano et al. (2012a), and Charif et al. (2019); for toothed whales see Lewis et al. (2007), Johnston et al. (2008), and Bailey and Thompson (2010); and for fishes see Rowell et al. (2015), Zemeckis et al. (2019), and Caiger et al. (2020). Passive acoustic monitoring archival recordings are also increasingly being used to monitor the long-term ambient noise and communication space available to marine animals in a given area, in addition to the composition and health of marine soundscapes, e.g., the prevalence of non-biological or anthropogenic sound sources (Hatch et al., 2012; Staaterman et al., 2014; Erbe et al., 2016; Merchant et al., 2016; Haver et al., 2018). Potential effects of anthropogenic activities on marine species can be evaluated by applying the collected data to analytical frameworks, such as Before-After-Control-Impact (BACI) and Beyond-BACI designs (e.g., Underwood, 1992, 1994), or Before-After-Gradient (BAG) analyses (Ellis and Schneider, 1997; Brandt et al., 2011; Methratta, 2020). Some of the first applications of BACI to offshore wind development evaluation for marine mammals are Carstensen et al. (2006) and Scheidat et al. (2011). To answer the mitigation question on reducing vessel strikes and preventing exposure to loud construction sounds, robust real-time monitoring needs to be established throughout the impacted area and the area directly in its vicinity. The timing of data reporting and the subsequent actions taken need to clearly show how they will be effective at minimizing risk. In Supplemental Information I we outline and map a regional PAM monitoring design approach for long-term monitoring focused primarily on baleen whales. In Supplemental Information II we discuss design approaches and considerations for mitigation of vessel strike risk, while in Supplemental Information III we provide ISO data templates which serve as guidelines for consistent and standardized data collection.
Purpose of PAM Design
When PAM is utilized, the design intent may be to: (1) Understand distribution of species. This involves monitoring a given area prior to, during, and after the construction period in order to understand the presence and distribution of species of interest. The duration of this monitoring can vary, but in order to capture variation in movement patterns, data collection is recommended for at least 3 to 5 years prior to construction, during construction, and for at least 3 to 5 years during wind farm operation. It would be a best practice for data collection to be continuous through the three phases (pre-construction, construction, and operations). Multiple years of data from the same area are needed in order to understand the inter-annual variability in species movement. These data provide an understanding of the annual presence, occupancy, and distribution of a species; help discern the potential impact of other factors, such as climate change, that may influence distribution; and help determine the likelihood of the species being in the area during construction and/or during subsequent long-term operation.
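The BACI framework cited above boils down to a difference-of-differences: the before-to-after change at the impact site minus the same change at a control site. A minimal sketch on illustrative daily detection counts; real analyses use statistical models (GLMs, mixed models) with proper error structure and significance testing, which are deliberately omitted here.

```python
from statistics import mean

def baci_contrast(impact_before, impact_after, control_before, control_after):
    """Difference-of-differences on mean daily acoustic detections.

    A value near zero suggests the impact site changed no differently
    from the control; a strongly negative value is consistent with
    displacement or reduced calling at the impact site.
    """
    delta_impact = mean(impact_after) - mean(impact_before)
    delta_control = mean(control_after) - mean(control_before)
    return delta_impact - delta_control

# Illustrative daily detection counts (not real data):
print(baci_contrast(
    impact_before=[12, 9, 14, 11], impact_after=[4, 6, 3, 5],
    control_before=[10, 12, 9, 11], control_after=[9, 11, 10, 12],
))  # -7.0: ~7 fewer detections/day at the impact site, beyond the control trend
```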
They can also help understand any changes, or lack thereof, in species' acoustic presence related to construction activities or turbine operation (e.g., Carstensen et al., 2006; Brandt et al., 2011; Scheidat et al., 2011; Dähne et al., 2013, 2017). For example, Brandt et al. (2011) were able to demonstrate a decreasing effect of construction noise on harbor porpoise acoustic activity with distance using a BAG design. Seasonal and annual variations in presence and distribution can also be analyzed with respect to oceanographic conditions (e.g., DNV KEMA Renewables Inc, 2018). Paired with metocean monitoring and visual survey data, these efforts can add meaning, as both methodologies complement each other and reduce each other's biases (e.g., when a marine mammal is not calling or not visible at the surface). Supplemental Information I provides an example of a proposed United States East Coast PAM design for understanding distributional changes of species with regards to offshore wind development. (2) Monitoring to reduce effects on species during construction. This focuses on monitoring a given area during the construction period of a wind development area to inform mitigation actions, such as delaying, ceasing, or proceeding with pile driving when a protected species is confirmed acoustically within a relevant impact zone (i.e., Shutdown or Clearance Zone). Rules for defining which acoustic information triggers this decision should be established in advance and in concert with visual monitoring. Applying any mitigating action based on PAM needs to be clearly thought through, with consideration of the limitations of each type of system, and thoroughly described in the PAM Plan. Supplemental Information II provides an example of a proposed United States East Coast PAM design for monitoring in order to reduce effects on species during construction. (3) Monitoring for reducing risk of vessel strike. In order to monitor for species presence to reduce vessel strike risk, the design of the PAM system must be able to reliably detect the presence of the species of interest. Additionally, a thorough decision-making and communication process when a detection is made is needed to ensure that vessels are alerted and slow down to reduce vessel strike risk. An example of this decision-making process is the triggering of NOAA's Slow Zones 2. These Slow Zones are established when North Atlantic right whales are detected either visually (i.e., Dynamic Management Area) or acoustically (i.e., Acoustic Slow Zone). A Dynamic Management Area is triggered when 3 or more North Atlantic right whales are sighted within 3-5 miles of one another. This criterion emerged from Clapham and Pace (2001), which showed that an aggregation of three or more whales is likely to remain in the area for several days, in contrast to an aggregation of fewer whales. Given that visual and acoustic data differ, in that the number of individual North Atlantic right whales cannot yet be derived from acoustic data alone, an Acoustic Slow Zone is established when three or more upcall detections from an acoustic system occur within an evaluation period (e.g., 15 min), an acoustic equivalent determined by NOAA NEFSC acoustic experts. To trigger an Acoustic Slow Zone, an acoustic system must meet the following criteria: (1) evaluation of the system has been published in the peer-reviewed literature, (2) the false detection rate is 10% or lower over daily time scales, and (3) the missed detection rate is 50% or lower over daily time scales.
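The Acoustic Slow Zone criterion above (three or more upcall detections within an evaluation period such as 15 min) maps directly onto a sliding-window check. A minimal sketch; the window length and detection threshold follow the text, while the timestamps are illustrative.

```python
def acoustic_slow_zone_triggered(detection_times_s, window_s=15 * 60, n_required=3):
    """Return the time at which >= n_required upcall detections have occurred
    within any window_s-long interval, or None if never triggered.

    detection_times_s: timestamps (seconds) of verified upcall detections
    from a system meeting the published-performance criteria (false
    detection rate <= 10%, missed detection rate <= 50% over daily scales).
    """
    times = sorted(detection_times_s)
    left = 0
    for right, t in enumerate(times):
        while t - times[left] > window_s:
            left += 1
        if right - left + 1 >= n_required:
            return t  # trigger at the n-th detection inside the window
    return None

# Illustrative detection stream: two detections close together, then a third
# 10 minutes later -> all three fall inside one 15-minute window.
print(acoustic_slow_zone_triggered([100, 250, 850]))    # 850 (triggered)
print(acoustic_slow_zone_triggered([100, 1200, 2400]))  # None (too sparse)
```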
Once triggered, Slow Zones are set up as a rectangular area encompassing a circle of 15 (for Dynamic Management Areas) or 20 (for Acoustic Slow Zones) nautical miles around the core sightings (Dynamic Management Area) or the recorder location at the time of detection (Acoustic Slow Zone). The Slow Zone lasts for 15 days and can be extended with additional sightings or acoustic detections. Supplemental Information II provides a more detailed example of a United States East Coast PAM design for reducing vessel strike risk. All real-time PAM designs need a clear, well-thought-out, and consistent process, including PAM placement and technology type, species detection, integration with other visual observations, communication of information to Protected Species Observers (PSOs)/shoreside operators, and response to the information/detection. Limitations of each real-time PAM system should be well understood and considered in detail in any PAM Plan. The efficacy of the system used and its capacity to detect the signal/species of interest are essential to developing a successful and credible PAM Plan. Lastly, it is important to note that no one PAM system is capable of answering all needs, and that a mixture of PAM systems, technologies, and designs will frequently be needed to address all monitoring and mitigation requirements.
PAM System and Data Analysis Requirements
In this section, we present some broad PAM system requirements, both in terms of the hardware needed and of automated software for the analysis of calls and the measurement of ambient noise metrics. Standards and guidelines are increasing in availability (e.g., Robinson et al., 2014; van der Schaar et al., 2017; Ainslie et al., 2019) through projects such as ADEON (Atlantic Deep Water Sea Ecosystem Observatory Network 3), JONAS (Joint Framework for Ocean Noise in the Atlantic Seas 4), and JOMOPANS (Joint Monitoring Programme for Ambient Noise North Sea 5); additional existing practices on PAM, such as those of the International Quiet Ocean Experiment, can also be found through the Ocean Best Practices Repository 6.
PAM Hardware
For all PAM technologies, the hydrophones and related hardware need to be calibrated (every 3 to 5 years) and their performance systematically measured and optimized within the frequency bandwidths of interest for the particular activity, species, and environment. Calibration data and relevant settings and sensitivities should be noted for all hardware used in recording/monitoring to ensure consistency among measurements for particular hardware and software [more detail can be found in Biber et al. (2018)]. Array synchronization information (where relevant) should also be documented. This information should be permanently associated with the recordings as metadata. All hardware should be tested and optimized for low self-noise, including the mooring system. In addition to calibration, the system should be fully tested to ensure adequate sensitivity in the area where it will be deployed and with the type of signals it would receive. Additional environmental data will need to be collected to allow for adequate system evaluation. If this cannot be done at the project site, the system should be fully tested in a comparable location (i.e., an area exhibiting similar depth, temperature, substrate, current, acoustic propagation, and ambient noise, with relevant sound sources).
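Calibration values are what make levels comparable across recordings: raw ADC counts only become sound pressure once the recorder settings and hydrophone sensitivity are applied. A minimal sketch of that conversion chain; all numeric values are assumed examples, to be replaced by the actual calibration-sheet values archived as metadata.

```python
import math

def counts_to_spl_db(rms_counts, full_scale_counts, adc_vpp,
                     gain_db, sensitivity_db_re_1v_per_upa):
    """Convert an RMS level in ADC counts to SPL in dB re 1 uPa.

    adc_vpp: ADC peak-to-peak voltage range (V) -- assumed example value
    gain_db: total analog gain ahead of the ADC (dB)
    sensitivity_db_re_1v_per_upa: hydrophone sensitivity, e.g. -165 dB
    All of these come from the calibration sheet and recorder settings,
    which is why the text requires archiving them as metadata.
    """
    volts_rms = rms_counts / full_scale_counts * (adc_vpp / 2)
    level_db_re_1v = 20 * math.log10(volts_rms)
    return level_db_re_1v - gain_db - sensitivity_db_re_1v_per_upa

# 16-bit recorder (full scale 32768 counts), 2 Vpp ADC, 20 dB gain,
# hydrophone sensitivity -165 dB re 1 V/uPa; RMS signal of 1000 counts:
print(round(counts_to_spl_db(1000, 32768, 2.0, 20, -165), 1), "dB re 1 uPa")
```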
The system needs to be designed, installed, and operated by those having expertise with the specific PAM technology, including placement in the water, attachment of cables to reduce strumming and noise, acoustic release, suitable anchorage for the conditions, software use, etc. Knowledgeable and experienced personnel should operate the units in all situations. Ideally, the PAM technology used should have been used for the same purpose in other field efforts and have clear and detailed information available about its previous performance and reliability for PAM purposes. If this is not the case, this information needs to be gathered and provided in publicly available documentation as part of the PAM Plan.
PAM Species-Specific Automated Detection Software
Passive acoustic monitoring data analyses for species presence should occur through either (a) visual processing of data by an acoustic expert familiar with the call types of the species of interest, or (b) use of comprehensively tested PAM software detector(s) whose performance has been documented, whose performance metrics are publicly available for outside evaluation, and which have been reviewed and deemed acceptable by a panel or group of experts. Visual review will likely be required to some degree when dealing with acoustic detection of rare species such as North Atlantic right whales and/or for ensuring data quality. Standard performance metrics require evaluation and reporting, such as precision, recall, and accuracy, as well as false detection, false positive, false omission, and missed detection rates (Figure 6) (e.g., Baumgartner et al., 2019; Kirsebom et al., 2020; Madhusudhana et al., 2020; Gervaise et al., 2021). PAM software detectors comprise a wide range of custom-built computer programs aimed at automating the process of detecting target species' calls in a dataset [see reviews in Bittle and Duncan (2013), Shiu et al. (2020), and Gervaise et al. (2021)]. In both PAM archival and real-time data analysis, in addition to any software detector(s) used, some level of visual confirmation by an acoustic expert often remains essential to improve accuracy and minimize error in call type reporting. For PAM archival data analysis, where the objectives tend to focus on retrospective understanding of species presence, movements, or behavior, daily or hourly reporting of species detections are the time frames most frequently used (e.g., Davis et al., 2017, 2020; Stanistreet et al., 2017; Halliday et al., 2019). This method can speed up the process of data analysis by only requiring a positive species confirmation at the hourly or daily level. For example, when evaluating the presence of North Atlantic right whales along the United States East Coast, the presence of three upcalls within a 24-hour time period serves as the determination that at least one whale is present during that day (Davis et al., 2017). Three upcalls, rather than a single upcall, are used in order to decrease the likelihood of incorrect species determination, given that other baleen whales can produce calls similar to North Atlantic right whale upcalls (Davis et al., 2017). However, each species and call type will require a different level of additional verification (if any), and decisions must be made as to what level of certainty is acceptable.
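The performance metrics listed above follow from counts of true and false detections scored against expert-annotated ground truth. A minimal sketch using the standard definitions (exact reporting conventions vary across the cited studies):

```python
def detector_metrics(tp, fp, fn, tn=None):
    """Standard detector performance metrics from confusion counts.

    tp: detections matching expert-verified calls (true positives)
    fp: detections with no matching call (false positives)
    fn: verified calls the detector missed (false negatives)
    tn: optional true-negative count (needed for accuracy and
        false omission rate only)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # 1 - missed detection rate
    m = {
        "precision": precision,
        "recall": recall,
        "false_detection_rate": 1 - precision,
        "missed_detection_rate": 1 - recall,
    }
    if tn is not None:
        m["accuracy"] = (tp + tn) / (tp + fp + fn + tn)
        m["false_omission_rate"] = fn / (fn + tn)
    return m

# Illustrative daily tally for an upcall detector: would this day meet the
# Slow Zone criteria (false detection rate <= 10%, missed rate <= 50%)?
day = detector_metrics(tp=46, fp=4, fn=20)
print({k: round(v, 2) for k, v in day.items()})
# precision 0.92 -> false detection rate 0.08; missed detection rate 0.3
```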
For example, automated detectors for the 20-Hz calls of fin whales have a high detection accuracy (e.g., Morano et al., 2012b), and, in this case, it may be reasonable to simply take the detector output with no further evaluation. For PAM real-time data analysis, additional visual verification of the detected sound is likely to be needed, since the occurrence of a given call type may influence whether operations are able to continue or are required to shut down (in situations where shutdown is possible). The ability to satisfy this confidence metric can be achieved by carrying out PAM training for operators and through evaluation of detections by analysis experts. A PAM expert is defined as a scientist who has 6 months or more of experience working with the call types of the species of interest, who can distinguish between confounding sounds (Kowarski et al., 2020), and who has experience working with the relevant detection software. Additionally, established and publicly available protocols on how a species is determined to be present for the specific PAM system and software must be documented in the PAM Plan; for an example, see DeAngelis et al. (2016). All PAM Plans should provide clear documentation of the efficacy of their detection capabilities and classification software for the specific signals of interest. Examples of comprehensive testing for real-time and archival PAM can be found in Baumgartner et al. (2019, 2020) and Kowarski et al. (2020), and for PAM towed arrays in Gillespie et al. (2013). The PAM Plan should demonstrate for all PAM systems and moorings that (a) the species' signal of interest can be heard reliably above the self-noise, and (b) any detection and classification software that is used has (1) been tested, (2) clearly documented reliability in detecting a given species, and (3) openly available software performance metrics.
PAM Localization
Localizing calling species during the construction phase of offshore wind projects would be very useful for satisfying mitigation requirements regarding the location and distance of the species in question relative to the sound source (e.g., pile driving location). Localization can be carried out by the placement of multiple fixed or mobile omnidirectional hydrophones arranged in a configuration that allows for localization of the vocalizing animal using the difference in the time of arrival of a call (or calls) on multiple time-synchronized sound recorders (e.g., Stanistreet et al., 2013; Hastie et al., 2014; Risch et al., 2014; Gillespie et al., 2020; Gervaise et al., 2021). It can also be achieved by using multiple sensors that can calculate bearing [e.g., directional autonomous seafloor acoustic recorders (DASARs); Greene et al., 2004; Blackwell et al., 2007; Mathias et al., 2012; Blackwell et al., 2013]. For stationary systems, a minimum of three hydrophones placed within a range that guarantees overlapping receptivity (i.e., multiple arrivals) is necessary to localize the positions of vocalizing animals using time-of-arrival methods (e.g., Stanistreet et al., 2013; Tremblay et al., 2019). Sensor positional, timing, and speed accuracy all need to be considered, as well as sensor configuration (geometric dilution of precision).
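Time-of-arrival localization amounts to finding the source position whose predicted arrival-time differences best match the measured ones. A minimal 2-D grid-search sketch for three fixed hydrophones, assuming a constant sound speed; real systems must additionally handle sound-speed profiles, sensor position and timing error, and the geometric dilution of precision mentioned above.

```python
import itertools, math

SOUND_SPEED = 1500.0  # m/s, nominal; real analyses use measured profiles

def tdoa_localize(hydrophones, measured_tdoas, grid_step=50.0, extent=5000.0):
    """Least-squares grid search for a 2-D source position.

    hydrophones: list of (x, y) positions in meters (>= 3, per the text)
    measured_tdoas: arrival-time differences (s) at each hydrophone
                    relative to hydrophones[0]
    """
    def predicted_tdoas(src):
        d0 = math.dist(src, hydrophones[0])
        return [(math.dist(src, h) - d0) / SOUND_SPEED for h in hydrophones[1:]]

    best, best_err = None, float("inf")
    steps = int(2 * extent / grid_step) + 1
    for i, j in itertools.product(range(steps), repeat=2):
        src = (-extent + i * grid_step, -extent + j * grid_step)
        err = sum((p - m) ** 2
                  for p, m in zip(predicted_tdoas(src), measured_tdoas))
        if err < best_err:
            best, best_err = src, err
    return best

# Illustrative triangle of moored recorders and a synthetic source at
# (1200, -700) m; TDOAs are computed noise-free, so the search recovers it.
hydros = [(0.0, 0.0), (3000.0, 0.0), (1500.0, 2600.0)]
true_src = (1200.0, -700.0)
d0 = math.dist(true_src, hydros[0])
tdoas = [(math.dist(true_src, h) - d0) / SOUND_SPEED for h in hydros[1:]]
print(tdoa_localize(hydros, tdoas))  # (1200.0, -700.0) within the grid step
```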
However, for mobile platforms, such as a ship with a linear towed hydrophone array, two or more hydrophones can be used for the calculation of bearings; sequential bearings calculated as the platform moves can be used to estimate the location of calling animals (i.e., target-motion analysis, typically with left-right ambiguity) (e.g., von Benda-Beckmann et al., 2010, 2013). In most cases, decisions on mitigation measures (e.g., pile driving shutdown when feasible, or vessel speed reduction) can be made simply based on the range of calling animals from the noise source, without resolving the left-right ambiguity or knowing the bearing of the calling animal. However, under certain situations with anisotropic noise propagation, it would be necessary to know the location of the calling animal. Under such situations, the localization methodology should be included in the PAM Plan to approximate locations of the animal or sound source for purposes of taking action. Bottom-mounted recorders, real-time systems, towed and vertical arrays, and drifters can all be used for localization purposes, depending on the accuracy needed and the species of interest. Determining which system and technology to use requires careful consideration and supporting evidence to demonstrate that the design is appropriate. The analytical component of localization can be highly time consuming, which can be costly, and each array design requires careful documentation of the localization errors of the system.
PAM Ambient Noise Metrics
The measurement of background sound levels, i.e., ambient noise metrics, is an additional and important dataset that can be obtained from acoustic recordings made on any platform. Although the primary focus of this effort is to document the vocalizations of marine mammals, additional acoustic analyses of abiotic acoustic sources in the same recordings can reveal temporal patterns in distinct frequency bands that correlate with other factors such as wind speed. Ambient noise metrics could also document the level of potential increases in ambient noise due to wind farms. Metrics for the coincidental recording of ambient noise should be included in PAM Plans and should include factors anticipated to be associated with offshore wind development, such as vessel traffic and operational noise. These kinds of ambient noise metrics provide a record of acoustic conditions in a given environment and are essential for understanding changes in sound levels across different regions and times (Dekeling et al., 2014). Currently available standards, and those in development, can be found by searching the Ocean Best Practices Repository (see footnote 1). Measurements of ambient noise metrics can be carried out using a number of open source programs, such as PAMGUIDE (Merchant et al., 2015 7) or MANTA 8. These programs provide a standard series of measurements at the decidecade level that can be replicated across projects. An ongoing framework inventory of existing standards for observations of sound in the ocean can be found in the Ocean Best Practices Repository (International Quiet Ocean Experiment WG on Standardization, 2018).
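Decidecade band levels of the kind produced by PAMGUIDE or MANTA can be approximated from a calibrated time series via a power spectral density. A minimal sketch with numpy/scipy; it assumes the input is already calibrated to µPa and omits the averaging-window and metadata conventions that the standards specify.

```python
import numpy as np
from scipy.signal import welch

def decidecade_levels(x_upa, fs, f_lo=10.0):
    """Approximate decidecade band SPLs (dB re 1 uPa) from a calibrated
    pressure time series x_upa (in uPa) sampled at fs Hz.

    Decidecade bands: centers spaced by a factor of 10**0.1 (ten per
    decade), edges a factor of 10**0.05 either side of each center.
    """
    freqs, psd = welch(x_upa, fs=fs, nperseg=int(fs))  # 1-s windows, ~1-Hz bins
    df = freqs[1] - freqs[0]
    centers, levels = [], []
    fc = f_lo
    while fc * 10 ** 0.05 <= fs / 2:
        band = (freqs >= fc * 10 ** -0.05) & (freqs < fc * 10 ** 0.05)
        if band.any():
            band_power = psd[band].sum() * df  # integrate PSD over the band
            centers.append(fc)
            levels.append(10 * np.log10(max(band_power, 1e-12)))
        fc *= 10 ** 0.1
    return np.array(centers), np.array(levels)

# Synthetic check: a 100 Hz tone in white noise should elevate the band
# whose center is nearest 100 Hz.
fs = 4000
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = 1e4 * np.sin(2 * np.pi * 100 * t) + 1e3 * rng.standard_normal(t.size)
fc, spl = decidecade_levels(x, fs)
print(round(float(fc[int(np.argmax(spl))]), 1))  # ~100.0
```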
PAM Archiving, Reporting, and Visualization
Here we define (1) PAM archiving as the storage of recordings in a publicly accessible location; (2) PAM reporting as the reporting of data outputs, such as detections, locations, or bearings of species-specific calls, in a structured and publicly available venue; and (3) PAM visualization as the representation of these data outputs on a publicly available website.
PAM Archiving
Passive acoustic monitoring archiving is essential in order to provide a long-lasting record of the efforts invested in PAM data collection. PAM archival and real-time datasets require the archiving of several items:
• The acoustic sound recordings, which are the raw sound recordings made using the PAM technology; these should be compressed into a standardized lossless format, such as FLAC, for archiving.
• The associated metadata, which is the information associated with the deployment and retrieval of the PAM technology at sea (e.g., recorder type, depth, location, and functionality) and information on the recording settings, such as the sampling rate and recording schedule.
• Derived analytical products, such as the software program used and evaluation of the efficacy of species detection, the number of hourly or daily species detections, sound source levels, and other relevant measured sound parameters.
Archiving of acoustic sound recordings is encouraged through NOAA's National Center for Environmental Information (NCEI) archiving service 9. PAM metadata are required criteria for archiving at NCEI. The process and metadata details can be provided upon request. These should be used as a guide in PAM Plans for documenting relevant information regarding the field recording effort (deployment and retrieval information), as well as resulting analyses (such as species detections or noise metrics).
PAM Detection/Data Reporting
All confirmed passive acoustic detections of the target species, whether from archival or real-time data, must be archived in a publicly accessible location. For the United States East Coast, all species detection data and ambient noise metrics should be reported to the Northeast Passive Acoustic Reporting System via <EMAIL_ADDRESS>. Formatted spreadsheets that follow ISO standards, with required detection, measurement, and metadata information, are available for submission purposes (see Supplemental Information III for details). When real-time PAM is used during construction for mitigation purposes, a subset of the information required on species detections is expected to be provided and uploaded no later than 24 h after the detection. Full acoustic detection data, metadata, and GPS data records must be submitted within 48 h via the formatted spreadsheets. When PAM is used for long-term monitoring, all data (detection data, metadata, GPS data, and ambient noise data) should be provided via the formatted spreadsheets and uploaded within 90 days of the retrieval of the recorder or data collection. The spreadsheets can be downloaded from https://www.fisheries.noaa.gov/resource/document/passiveacoustic-reporting-system-templates. For further assistance, contact <EMAIL_ADDRESS>.
PAM Data Visualization
All PAM detections and metadata submitted to the Northeast Passive Acoustic Reporting System are visualized on the Passive Acoustic Cetacean Map 10.
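The reporting workflow above amounts to emitting structured records with a fixed set of required fields. A minimal sketch of one such record; the field names are illustrative stand-ins, since the authoritative columns are defined by the downloadable Passive Acoustic Reporting System templates.

```python
import csv
import datetime
from dataclasses import dataclass, asdict

@dataclass
class DetectionRecord:
    # Field names are illustrative; use the official template columns.
    project: str
    platform_type: str          # e.g., "bottom-mounted archival", "glider"
    recorder_id: str
    latitude: float
    longitude: float
    recorder_depth_m: float
    sample_rate_hz: int
    species: str
    call_type: str
    detection_time_utc: str     # ISO 8601
    verification: str           # e.g., "expert-verified", "detector-only"

def write_report(path, records):
    """Write detection records as a CSV for later reformatting; actual
    submissions use the NOAA-provided formatted spreadsheets."""
    rows = [asdict(r) for r in records]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

rec = DetectionRecord(
    project="example-lease-monitoring", platform_type="moored surface buoy",
    recorder_id="BUOY-01", latitude=40.10, longitude=-70.55,
    recorder_depth_m=40.0, sample_rate_hz=2000,
    species="Eubalaena glacialis", call_type="upcall",
    detection_time_utc=datetime.datetime(
        2021, 4, 1, 12, 30, tzinfo=datetime.timezone.utc).isoformat(),
    verification="expert-verified",
)
write_report("detections.csv", [rec])
```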
We encourage PAM detections to be shared across widely used and recognized platforms and regional web portals; for the United States East Coast, some of these standardized efforts are the Passive Acoustic Cetacean Map (see footnote 10), WhaleMap 11, and WhaleAlert 12.
9 https://ngdc.noaa.gov/mgg/pad/
10 https://apps-nefsc.fisheries.noaa.gov/pacm
CONCLUSION
These PAM recommendations provide a guide to understanding the various aspects required for designing and conducting PAM for both monitoring and mitigation. While the PAM Plans approved by agencies will ultimately determine full requirements, this six-step process provides a holistic look at each of the components that are needed when considering the development of a PAM Plan as well as long-term baseline monitoring. PAM technology is a rapidly developing area, and new technologies and applications are likely to be available in the near future. Similarly, the collection, analysis, and archiving of these data are ever evolving as the needs and applications grow; therefore, new developments will emerge as offshore wind development gets underway. These NOAA and BOEM recommendations will be updated and improved as new information and guidance become available.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.
FUNDING
NOAA's National Marine Fisheries Service and BOEM provided salary time for each of the co-authors to provide input into these recommendations.
Migration of Bisphenol A from Can Coatings into Beverages at the End of Shelf Life Compared to Regulated Test Conditions
Beverage cans are used for energy drinks, soft drinks, sparkling waters, and beer. Bisphenol A is still part of the formulation of epoxy coatings of beverage cans. Due to concerns that bisphenol A acts as an endocrine-active substance, the migration of bisphenol A is restricted. Typically, the migration from beverage cans is tested at elevated temperatures into food simulants, like 20% ethanol in water. However, comparison tests of the migration of bisphenol A at the end of shelf life with the migration into ethanolic food simulants are not available in the scientific literature. The aim of the study was to determine the migration of bisphenol A into real beverages, compared to routine migration tests into the European official food simulant of 20% ethanol at 40 °C and 60 °C after storage for 10 days. As a result, bisphenol A-containing coatings show a considerably higher migration when tested at 60 °C in comparison to 40 °C. On the other hand, migration into energy drinks and coke from the same coatings at the end of shelf life, when stored at room temperature, was below the detection limit in both cases. As expected, migration values of bisphenol A below the analytical detection limits were observed under all test conditions for the coating labeled bisphenol A-free. Spiking tests show that bisphenol A is stable in real beverages. Therefore, it can be concluded that the accelerated migration tests with 20% ethanol at the test conditions of 10 d at 40 °C and 10 d at 60 °C significantly overestimate the real migration into beverages at the end of shelf life. This overestimation of the migration of bisphenol A is due to swelling of the epoxy can coating by the ethanolic food simulant. These findings were supported by migration modeling based on diffusion coefficients predicted for polyethylene terephthalate.
Introduction
The global demand for beverage cans grew around 4% in 2018 and reached about 350 billion units, mostly due to the increase of beverage packaging. The market is driven by an increased consumption of soft drinks and beer in developing countries (China, the Middle East, and India). Globally, >90% of beverage cans are made of aluminum, and the rest are made from steel. Bisphenol A is still part of the formulation of epoxy coatings of beverage cans and, therefore, bisphenol A might migrate from packaging materials into food. A review of the concentrations of bisphenol A in food and of consumer exposure is available in the scientific literature [1]. Due to concern that bisphenol A acts as an endocrine-active substance, the migration of bisphenol A is restricted. The migration of constituents from beverage can coatings is typically tested by the use of ethanolic food simulants. Simulants like 20% ethanol in water are used, in general, to exclude interferences of beverage components with substances migrating from the coating, which might result in masking some of the migrants and poses a risk of false negative results in migration testing. Ethanolic simulants are good matrixes for gas chromatographic analysis of migrated organic substances. In addition, good solubility of organic migrants increases the sensitivity of the migration test and gives the test a worse- or worst-case character compared to the migration into real aqueous beverages. On the other hand, it is known that ethanolic solutions show strong interactions with polyester materials like epoxy can coatings. These interactions result in a swelling of the polyester matrix, which significantly increases the extent of migration, especially if the migration test is performed at elevated temperature [2]. This increased migration is part of the safety concept, in that ethanolic solutions act as worst-case simulants compared to real beverages, which results in a safety factor. On the other hand, migration test conditions that are too overestimative might pose the risk that the specific migration limits for migrants like bisphenol A are exceeded, which results in the non-compliance of the simulant-tested beverage cans.
Bisphenol A has been in the focus of toxicologists and the European legal bodies during the last decades, and some countries forbade the use of bisphenol A in food packaging. In Europe, until September 2018, the use of bisphenol A as a monomer in the production of plastic materials and articles in contact with food, except infant feeding bottles, was authorized with a specific migration limit (SML) of 600 µg/kg food, according to Regulation (EU) No 10/2011 [3]. As a reaction to new toxicological data [1], and considering the fact that non-dietary sources of bisphenol A also contribute to the overall exposure, the SML was reduced to 50 µg/kg food for plastic materials in 2018 [4]. This limit is applicable from 6 September 2018, and also applies to varnishes and coatings intended to come into contact with food. France even went further and prohibited any food packaging intended to come into direct contact with food from containing bisphenol A [5]. In order to assure compliance with the migration limit, migration tests using food simulants shall be carried out or, as an alternative, the migration can be calculated based on the residual content of the substance in the material, applying generally recognized diffusion models based on scientific evidence that are constructed such as to overestimate real migration [3]. To our knowledge, comparison tests of real-life storage at the end of shelf life with simulant tests using ethanolic food simulants at elevated temperatures (e.g., 10 days (d) at 40 °C or 10 d at 60 °C) are not available in the scientific literature. It is therefore not known which maximum shelf life of the real beverage is covered by the migration test into ethanolic solutions performed at 40 °C or 60 °C. The aim of the study was to determine the migration of bisphenol A into real beverages, compared to routine migration tests into the European official food simulant of 20% ethanol at 40 °C and 60 °C after storage for 10 d.
Sample Material
Cans with inside epoxy coatings were provided by a commercial supplier. Each can type was provided in an empty, unused state, as well as filled with energy drink or coke. The filled cans had already reached the corresponding end of shelf life. Two of the coatings contain bisphenol A (BPA); one is a BPA-free coating. The investigated samples are given in Table 1.
Migration Contact Experiments
Migration contact experiments were carried out according to the European Standard EN 13130-1 [6]. The cans were filled to the nominal volume with 20% ethanol in water (v/v), which is the food simulant allocated to clear drinks, such as energy drinks or coke, according to the European Plastics Regulation (EU) No 10/2011. The filled cans were stored for 10 d at 40 °C or 60 °C in a temperature-controlled cabinet. Aliquots of each migration solution were then spiked with a defined amount of the internal standard 13C12-bisphenol A prior to analysis. Each migration contact was performed in triplicate.
Preparation of Food Samples
One hundred milliliters of the energy drinks/coke were weighed into a beaker, spiked with a defined amount of the internal standard 13C12-bisphenol A, and degassed for 5 min in an ultrasonic bath in order to avoid experimental interference from the carbonation. Subsequently, the samples were purified and concentrated by solid phase extraction (Chromabond HR-P). The resulting eluates were evaporated to dryness in a stream of nitrogen, redissolved in a mixture of acetonitrile/water in a ratio of 1:1, and filtered prior to analysis. A triplicate determination was performed for each sample.
Extraction of Samples for the Determination of the Residual Content of BPA
From the empty cans, test specimens with a contact surface of 0.5 dm² were cut out, spiked with a defined amount of the internal standard 13C12-bisphenol A, and extracted with acetonitrile using a Büchi® speed extractor (temperature: 100 °C; pressure: 100 bar; number of cycles: 3; duration of static phase: 3 min). Aliquots of the extracts were diluted with ultrapure water (<0.055 µS/cm) in a ratio of 1:1 and filtered prior to analysis. A triplicate determination was performed for each sample.
Quantification of Bisphenol A
The quantitative determination of bisphenol A in the prepared samples was achieved by HPLC using mass spectrometric detection (Thermo TSQ Quantum Ultra AM). The chromatographic separation was carried out on a Thermo Accucore Polar Premium (2.6 µm, 100 × 2.1 mm, at 40 °C) column. Mass spectrometric detection was performed after negative heated electrospray ionization (HESI) on a Thermo TSQ Quantum Ultra AM triple quadrupole mass spectrometer using multiple reaction monitoring (MRM) mode. The samples were measured both against external standard solutions and against the internal sample preparation standard 13C12-bisphenol A.
Recovery Experiments
In order to determine whether bisphenol A might react with the energy drinks/coke during storage in the coated cans, recovery experiments were carried out. Degassed drink samples (20 mL) were spiked with different concentrations of bisphenol A (58.95, 234.2, and 1171 µg/L), and the recovery was determined according to the method described in Section 2.5.
Migration Modeling
Migration was modeled using AKTS SML software v4.54 (AKTS AG Siders, Siders, Switzerland) [7]. The diffusion coefficients were calculated based on the activation energy of diffusion of the corresponding migrants [8]. For the calculation, a thickness of the inner coating of 10 µm was used. As a worst case, good solubility of bisphenol A in the beverages was assumed for all predicted values, which corresponds to a partition coefficient between polymer and food of K = 1.
Results and Discussion
Within this study, the migration of bisphenol A into the food simulant 20% ethanol was determined at the test conditions 10 d at 40 °C and 60 °C, respectively, in comparison to the real migration into energy drinks and coke after storage until the corresponding end of shelf life. The results of the migration tests are summarized in Table 2. In addition, the concentration of free bisphenol A in the can coatings was determined. From the concentrations in the can coatings and from the dimensions of the cans, the maximum total transfer into the filling good was calculated (Table 3). The applied migration test conditions, 10 d at 60 °C, are the test conditions used to represent a storage time of 365 d at room temperature (end of shelf life) according to Regulation (EU) 10/2011. The test conditions 10 d at 40 °C are the corresponding test conditions according to previous regulations. Three different cans from commercial suppliers were tested. One supplier provided cans with bisphenol A-free coatings, as well as conventional bisphenol A coatings. The other two suppliers provided cans with bisphenol A coatings. As a result, migration values of bisphenol A below the analytical detection limits were observed for all test conditions from the coating labeled bisphenol A-free. For the bisphenol A containing coatings, however, the migration was considerably higher when tested at 60 °C in comparison to 40 °C. On the other hand, migration into the energy drinks and coke from the same coatings, at the end of shelf life when stored at room temperature, was below the detection limit in either case.
In order to exclude that bisphenol A might have reacted with the drinks during storage in the coated cans, the drinks were spiked with different concentrations of bisphenol A, and the recovery was determined. Recovery was in the range of 99-112% for all three drinks. This result indicates that bisphenol A is stable in energy drinks and coke, and that the non-detectable migration from epoxy-coated cans was correct and not due to a loss of bisphenol A during the migration test. Thus, it can be concluded that the accelerated migration tests with 20% ethanol at the test conditions 10 d at 40 °C and 10 d at 60 °C overestimate the real migration into food at the end of shelf life. The shelf life at room temperature for the energy drinks and coke was 365 d and 180 d, respectively. The test at 60 °C even led to concentrations that are close to a total transfer of bisphenol A from the coating (Figure 1).
It has also been observed for polyethylene terephthalate (PET) materials that ethanolic food simulants lead to an increased migration due to swelling effects [2,9,10]. Similar effects are expected to have occurred in the investigated epoxy resins. Furthermore, a strong dependency of the activation energy of diffusion on the molecular size of the migrating species has been observed for PET [11]. Thus, for PET materials, the test conditions 10 d at 60 °C have been found to be increasingly overestimative, compared to storage until the end of shelf life at room temperature, with increasing molecular size [8]. For the diffusion of bisphenol A in the investigated epoxy resins, no activation energies are available in the literature. However, as an approximation, the same model for the prediction of the activation energy and diffusion coefficient was applied as has been developed for PET materials [8]. Using this model, from the molecular volume of bisphenol A of 221.11 Å³, an activation energy E_A of 165.8 kJ/mol (pre-exponential factor D₀ = 1.22 × 10¹¹ cm²/s) and a diffusion coefficient D_P of 1.11 × 10⁻¹⁸ cm²/s at 25 °C were predicted. This diffusion coefficient was used to calculate the migration from the investigated cans at room temperature for a storage time of 365 d (end of shelf life), based on the concentrations of bisphenol A determined in the epoxy resins. The modeled migration was 0.068 µg/L for sample 1 and 0.035 µg/L for sample 2. In both cases, the predicted migration was below the detection limit of 0.14 µg/L in the energy drinks and coke, respectively. Even though the results cannot be compared directly, the modeled migration is in good agreement with the findings of the experimental migration test into energy drinks.
Conclusions
The present study has shown that migration tests using 20% ethanol as food simulant at the accelerated test conditions of 10 d at 60 °C highly overestimate the migration into real food (in this case, energy drinks) at the end of shelf life. The migration of bisphenol A into 20% ethanol at 60 °C is a factor of >67 (sample 1) or >35 (sample 2), respectively, higher than the migration into real beverages at the end of shelf life. This overestimation of the migration is due to swelling of the epoxy can coating by the ethanolic food simulant. Water-based food simulants, or the energy drink itself, swell the coating far less; therefore, the migration into real foods is much lower and, in the case of this study, below the experimental detection limits. Migration into 20% ethanol at 60 °C results in a migration value which is close to the total migration (Figure 1), which indicates that the applied contact conditions represent extraction conditions rather than migration conditions. The test conditions 10 d at 40 °C also turned out to be overestimative, though to a lesser degree; still, compared to real food, the migration is overestimated by a factor of at least 4. The overestimative effect due to swelling should be considered when performing compliance tests for coated cans with 20% ethanol as simulant. However, it should also be noted that in none of the performed overestimative migration tests was the specific migration limit of bisphenol A of 50 µg/L, according to Regulation (EU) No 10/2011, exceeded. The migration potential of bisphenol A in the can coating, which assumes a complete transfer of bisphenol A into the beverage, was only 10.2 µg/L (sample 1) and 5.3 µg/L (sample 2), respectively, which is a factor of approximately 5 and 10 below the specific migration limit of bisphenol A. Prediction of the migration of bisphenol A from its residual concentration in the epoxy can coating by migration modeling, with the published modeling parameters for PET [8], results in a much more realistic migration value compared to the migration into swelling ethanolic simulants like 20% ethanol. The determination of bisphenol A in non-filled cans is therefore a useful test for production control. Empty can coatings might be tested for their residual bisphenol A concentration. Assuming total transfer of bisphenol A into the can-packed beverages, the worst-case concentration can be compared to the legal specific migration limit of 50 µg/L. If the migration potential of bisphenol A exceeds the specific migration limit, migration modeling, as applied in this study, can be used to predict the migration at the end of shelf life. Such a procedure avoids the non-realistic, strongly overestimative migration values that accelerated tests with food simulants, like 20% ethanol for 10 d at 60 °C, produce for bisphenol A.
Figure 1. Migration of bisphenol A from the epoxy-coated cans, samples 1 and 2, into different test media and calculated migration at the end of shelf life.
Table 1. Sample material used for the study.
2b: cans filled with energy drink, BPA-based epoxy coating, filling volume 250 mL
3a: empty cans, BPA-free epoxy coating, filling volume 330 mL
3b: cans filled with coke, BPA-free epoxy coating, filling volume 330 mL
Table 2.
Results of the migration of bisphenol A into the food simulant 20% ethanol and into energy drinks.
Table 3. Migration potential of bisphenol A in the can coatings and calculated total transfer to the filling good.
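The worst-case check described in the Conclusions (compare the total-transfer concentration against the SML) reduces to a one-line calculation. In the sketch below the numerical inputs are illustrative placeholders, not the paper's measured values:

```python
SML_BPA = 50.0  # specific migration limit, ug/kg (~ug/L), Reg. (EU) 10/2011

def worst_case_concentration(residual_ug_per_dm2, area_dm2, volume_l):
    """Worst-case beverage concentration assuming total transfer of the
    residual BPA in the coating into the filling good (K = 1 limit)."""
    return residual_ug_per_dm2 * area_dm2 / volume_l

# illustrative placeholder inputs, not the paper's measured values:
c = worst_case_concentration(residual_ug_per_dm2=2.0, area_dm2=1.3,
                             volume_l=0.25)
print(f"{c:.1f} ug/L -> {'compliant' if c <= SML_BPA else 'model or test'}")
```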
Information Content of Annual Earnings Announcements: Evidence from the Moroccan Stock Market
The objective of our work is to study the information content of the accounting results of companies listed on the Casablanca Stock Exchange. Applying the methodology adopted by Beaver (1968), we analyze the market reaction, in terms of changes in prices and volumes, around the announcement of annual earnings. Our study covers 75 companies listed on the Casablanca Stock Exchange over the period 2010 to 2015. It provides evidence of informational content in accounting results around the announcement date, which confirms the hypothesis of financial market efficiency in its semi-strong form (Fama, 1970) and the results obtained in other financial markets (US, French, Chinese, ...). Keywords: Information Content, Moroccan Stock Market, Earnings announcements.
Three approaches have been used in this literature: the information content approach, originally developed by Beaver (1968); the approach based on valuation relevance, developed by Ball and Brown (1968) and FFJR (1969); and finally the value relevance, or association, approach, which uses simple long-term relationship tests between return, on the one hand, and an accounting variable, on the other (Ramesh and Thiagarajan, 1995; Collins et al., 1997; Lev and Zarowin, 1999). Applying the first two approaches to the same sample of companies over the same period, Lo & Lys (2000a) notice that while the information content remains constant, the valuation relevance criterion decreases. The authors argue that this difference in outcomes between the two approaches is probably due to the presence of information in publications beyond what is publicly revealed in earnings (for example, the accompanying notes to stock options). Unlike association studies, which use a long-term event window and have been criticized on methodological grounds (Kothari and Warner, 2007), information content studies use a short-term window (e.g., 18 weeks) and, in their majority, report significant results in different countries: for example, the study of Ball and Brown (1968) in the United States, the study of Brown (1970) in Australia, the study of Firth (1981) in the United Kingdom, and the studies of Haw, Qaqing, and Wu (2000) and Huang and Li (2014) in China. Moreover, several recent studies confirm an increased level of information content of accounting results over the last two to three decades relative to their value relevance, which is usually measured over a wide window and reflected by R². This calls into question the usefulness of the information content outside the reporting period (Kim and Verrecchia, 1991). Additionally, Landsman and Maydew (2002) confirm that the market reaction is particularly stronger for large companies. Recently, Huang and Li (2014), adopting the approach of Beaver (1968), carried out a study comparing the behavior of prices and volumes subsequent to the announcement of accounting results in China and the United States. The authors identify a positive reaction around the announcement in both markets; however, the Chinese financial market shows a less intense reaction than the US market, which, according to the authors, can be explained by the possibility of information leakage before the announcement. In addition, event studies do not take into account the degree of financial market development, company size, or the governance mechanisms in place. These external factors can be, in our view, essential in explaining the behavior of prices and volumes around the announcement dates.
Methodology and Data
Consistent with the methodology developed in Beaver (1968), we examine the information content of earnings announcements by analyzing the price change and the behavior of trading volume around the earnings announcement week.
Price Analysis
To measure the impact of earnings announcements on the volatility of stock returns, we analyze the abnormal price change in the report period, defined as the 17 weeks surrounding the annual earnings announcement week (i.e., 8 weeks before and 8 weeks after the earnings announcement), relative to the non-report period. If earnings announcements have information content, in the sense of leading to changes in the equilibrium value of the market price, the magnitude of the price change should be larger in week 0. To isolate the abnormal price change in the report period, we eliminate the effect of market-wide events on the individual stock's price change, using the following market model:
R_it = a_i + b_i R_mt + u_it, (1)
where R_it is the weekly return for firm i in week t and R_mt is the weekly market return in week t. The residual u_it represents the part of the stock return that cannot be explained by the market-wide price movement reflected in R_mt. To obtain estimates of the abnormal price change in the report period, we first estimate model (1) with weekly data from non-report weeks for each firm-year. After obtaining the coefficients a_i and b_i from the non-report period, these coefficients are applied to the variables in the report period to compute the abnormal price change u_it, which cannot be explained by market-wide information. Following Huang and Li (2014), we do not specify the direction or the magnitude of the price change in the report period. Therefore, our measure of abnormal return volatility, U_it, is the squared market-model-adjusted return u²_it, obtained from model (1), divided by the residual variance during the non-report period, σ²_i. For each week of the report period, we average U_it over firms and weeks and compare the values in each of the 17 report weeks with the average values from non-report weeks. If earnings reports possess information content, then U_it will be larger than 1.
Volume Analysis
The main objective of the volume analysis is to evaluate the impact of earnings announcements on trading volume in the report period. To do this, we must remove the effects of market-wide events on the individual security's volume, because the abnormally high volume may be caused in part by market-wide information or by other noise that increases volume. The following model is used to abstract from market-wide events:
V_it = a_i + b_i V_mt + ε_it, (2)
where V_it is the average of the daily percentage of shares traded in a week; the number of trading days in the week is used to adjust for the fact that not all weeks have the same number of trading days. V_mt represents the corresponding level of volume for all listed firms. Following Beaver (1968), we estimate model (2) with weekly data from non-report weeks for each firm-year to obtain estimates of abnormal trading volume in the report period. Thus, if earnings announcements have information content, the assumptions of the classical regression model are violated during the report period, since E(ε_it) would not be zero.
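A compact sketch of the procedure just described (estimate models (1) and (2) on non-report weeks, then apply the fitted coefficients to the report weeks) might look as follows; the column names and data layout are illustrative assumptions, not the paper's actual dataset:

```python
import numpy as np
import pandas as pd

def fit_ols(y, x):
    """OLS fit of y = a + b*x; returns intercept, slope, residual variance."""
    b, a = np.polyfit(x, y, 1)            # polyfit returns [slope, intercept]
    resid = y - (a + b * x)
    return a, b, resid.var(ddof=2)        # ddof=2: two estimated parameters

def abnormal_measures(df):
    """df holds one firm-year with columns R, Rm, V, Vm and a boolean
    column 'report' flagging the 17 report weeks (-8..+8)."""
    non, rep = df[~df["report"]], df[df["report"]]
    # model (1): market model for returns, fitted on non-report weeks
    a, b, sigma2 = fit_ols(non["R"].values, non["Rm"].values)
    u = rep["R"].values - (a + b * rep["Rm"].values)
    U = u**2 / sigma2                     # abnormal return volatility; >1 = abnormal
    # model (2): market model for trading volume
    av, bv, _ = fit_ols(non["V"].values, non["Vm"].values)
    eps = rep["V"].values - (av + bv * rep["Vm"].values)  # abnormal volume
    return U, eps
```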
To compute the abnormal trading volume ε_it, which represents the part of trading that cannot be explained by market-wide information, we first estimate the coefficients a_i and b_i from the non-report period; these coefficients are then applied to the variables for the report period.
Sample and Data
The study is based on a sample of annual earnings announcements released by Moroccan firms listed on the Casablanca Stock Exchange Official List during the period 2010-2015. All companies listed on the Casablanca Stock Exchange are required to make annual earnings announcements; usually, these are short versions of the annual report that are made public before the annual report itself. The data and the dates of the earnings announcements were collected directly from the communication service of the Casablanca Stock Exchange. We only include firms with available annual earnings announcements in the sample, resulting in an average of 71 firms per year and a total of 425 firm-years. Table 1 provides the distribution of our sample firms across years. It reports the number of firm-week observations and the number of firms by year, showing a quasi-constant percentage distribution across years. Table 2 shows that most earnings are released between 6 and 11 weeks after the fiscal year end. In addition, most earnings announcements (around 92%) are made within 3 months after the fiscal year end. Table 3 reports the descriptive statistics for the main variables used in the tests. V_it has a mean of 0.33×10⁻³, which is slightly greater than that of V_mt (0.26×10⁻³). The mean of R_it is 0.8×10⁻⁴, indicating an annual return of around 0.4%, which is very low compared to other emerging stock markets and confirms the weak contribution of the stock market as a source of financing in the Moroccan economic context. V_it is the average of the daily percentage of shares traded for firm i in week t; R_it is the weekly return for firm i in week t; R_mt is the weekly market return in week t.
Price Analysis
To confirm the presence of information content around earnings announcements, the magnitude of the price change should be larger in week 0 than during the non-report period. Thus, we compare return volatility in the report and non-report periods. Table 4 provides the Fama-MacBeth regression results for model (1). The estimates of a_i, b_i, and u_it are obtained from regressions on non-report period data. While the explanatory power of the market return is close to 7%, the estimated coefficient b_i of 1.0531 provides an operational measure of a stock's riskiness. R_it is the weekly return for firm i in week t; R_mt is the weekly market return in week t. T-statistics for the time-series averages are presented in parentheses. *, **, and *** denote t-statistics significant at the 10%, 5%, and 1% level, respectively. We use model (1) to compute the residual u_it for each week t of the report period and for each earnings announcement as follows: u_it = R_it − a_i − b_i R_mt. To obtain U_it, the residual u_it is squared and divided by the residual variance in the non-report period for each firm (σ²_i): U_it = u²_it/σ²_i. Figure 1 describes the behavior of the average of U_it over firms for each report week. It shows that the magnitude of the price change in week 0 is much larger than its average during the non-report period. This result confirms that earnings disclosures generate abnormal price activity beginning one week before the announcements.
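The averaging of U_it across firms for each report week, and the frequency of U_it exceeding 1 used in the figures below, can be sketched as follows (a minimal illustration; the input layout is an assumption):

```python
import numpy as np

def report_week_summary(U):
    """U: array of shape (n_firm_years, 17) holding U_it for report weeks
    -8..+8, one row per firm-year. Returns, for each report week, the
    cross-sectional mean of U_it and the frequency of U_it > 1."""
    weeks = np.arange(-8, 9)
    mean_U = U.mean(axis=0)               # compare against the benchmark of 1
    freq_above_1 = (U > 1).mean(axis=0)   # share of firm-years above 1
    return {int(w): (m, f) for w, m, f in zip(weeks, mean_U, freq_above_1)}
```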
This result also supports findings obtained recently in other countries, such as the US and China (Huang and Li, 2014).
Figure 1. Price residual analysis (U_it). Figure 1 depicts the return volatility changes in the 17-week report period. The change in return volatility is computed as U_it = u²_it/σ²_i, where u_it is estimated using model (1), R_it = a_i + b_i R_mt + u_it, with data from the non-report period, and σ²_i is the residual variance in the non-report period for firm i. The dotted line, which equals 1, indicates the average price residual in the non-report period.
To support our finding, we also calculate and present the frequency of U_it being above 1 in the report period (Figure 2). The plot shows that this frequency reaches a peak in week 0; further peaks of U_it occur mainly in post-announcement weeks. In sum, our results from the mean U_it analysis confirm that there is abnormal price activity at the time of the earnings announcement.
Volume Analysis
Model (2) is estimated on the non-report period to obtain the coefficients a_i and b_i, which are then applied to the report period in order to control for forces that are unrelated to earnings announcements. Model (2) thus allows us to assess the presence or absence of abnormal trading volume activity around the earnings announcements. Table 5 shows the Fama-MacBeth regression results for model (2). We find that V_mt has less explanatory power, with an adjusted R² of 2%, compared to that reported in other studies; for example, Huang and Li (2014) find an R² of 6% for the US market and 18% for the Chinese market. Table 5 (excerpt): a = 0.000101*** (4.453); b = 1.684 (11.619)***; adjusted R² = 0.019. The table reports the Fama-MacBeth regression results for trading volume. The dependent variable V_it is the average of the daily percentage of shares traded for firm i in week t; V_mt is the average of the daily percentage of shares traded on the market in week t. T-statistics for the time-series averages are presented in parentheses. *, **, and *** denote t-statistics significant at the 10%, 5%, and 1% level, respectively.
To examine the trading volume activity around the earnings announcement, we first calculate the raw trading volume and the abnormal trading volume for each week during the report period. Then, we plot the time-series average of the cross-sectional mean trading volume over the 17-week report period. Figure 3 illustrates that in the weeks immediately prior to the announcement, the trading volume is below its normal level (the mean trading volume in the non-report period); it starts to climb from the announcement week and reaches its peak one week after. Our evidence differs from that obtained in other studies (Beaver, 1968; Landsman & Maydew, 2002; Huang & Li, 2014), suggesting specificities of the financial market and of investor behavior in developing and emerging countries. This result indicates that not all investors trade instantly upon the announcement of annual earnings; their trades persist for one more week. Thereafter, we complete our analysis by computing the mean abnormal trading volumes ε_it (Figure 4), using data from the non-report period to remove market-wide effects. The main result is that the abnormal trading volumes ε_it show a similar pattern to that of the raw trading volumes.
Therefore, the abnormal trading volumes reach their peak level with a lag of one week and return to their normal level from week 2. Figure 4 depicts the time-series average of cross-sectional mean abnormal trading volumes over the 17-week report period. The abnormal trading volume ε_it is calculated using model (2), V_it = a_i + b_i V_mt + ε_it, with data from the non-report period. The dotted line indicates the mean abnormal trading volume in the non-report period. A similar pattern is also observed (Figure 5) when we compute the frequency of positive abnormal trading volume in each report-period week. The usefulness of the frequency analysis is to rule out the concern that our findings are driven by a few dominant observations. The frequency climbs sharply in week 0, reaches its peak in week 1, supporting the result obtained for the mean abnormal trading volumes in Figure 4, and returns to its normal level in week 2.
Conclusion
Our study fits into the field of information content studies, which have been widely applied in stock markets worldwide. To our knowledge, it is the first event study carried out on the Moroccan stock market. The main objective of this work is to determine the presence of abnormal trading volume and volatility around earnings announcements and thereby test the efficiency hypothesis, as defended by Fama (1970), in its semi-strong form. The results show the presence of information content of earnings around the reporting period, in terms of both abnormal trading volume and price change. However, the volume change shows a one-week lag compared to the price change, which exhibits high volatility in the earnings announcement week. This result can pave the way for future research to determine the specificities of the Moroccan stock market and to analyze investor behavior in emerging and developing countries, since similar results are observed in the Chinese case (Huang & Li, 2014). Further, our study sketches future work on the impact of financial and accounting announcements and on the relation between mandatory and voluntary reporting and firm returns, especially on the Casablanca Stock Exchange, which is characterized by its weak role in attracting investors and funding firms.
IFI16 Inhibits Porcine Reproductive and Respiratory Syndrome Virus 2 Replication in a MAVS-Dependent Manner in MARC-145 Cells
Porcine reproductive and respiratory syndrome virus (PRRSV) is a single-stranded positive-sense RNA virus, and the current strategies for controlling PRRSV are limited. Interferon gamma-inducible protein 16 (IFI16) has been reported to have a broad role in the regulation of the type I interferon (IFN) response to RNA and DNA viruses. However, the function of IFI16 in PRRSV infection is unclear. Here, we reveal that IFI16 acts as a novel antiviral protein against PRRSV-2. IFI16 could be induced by interferon-beta (IFN-β). Overexpression of IFI16 significantly suppressed PRRSV-2 replication, and silencing the expression of endogenous IFI16 by small interfering RNAs promoted PRRSV-2 replication in MARC-145 cells. Additionally, IFI16 could promote mitochondrial antiviral signaling protein (MAVS)-mediated production of type I interferon and interact with MAVS. More importantly, IFI16 exerted anti-PRRSV effects in a MAVS-dependent manner. In conclusion, our data demonstrate that IFI16 has an inhibitory effect on PRRSV-2; these findings contribute to understanding the role of cellular proteins in regulating PRRSV replication and may have implications for future antiviral strategies.
Introduction
Porcine reproductive and respiratory syndrome (PRRS), caused by the PRRS virus (PRRSV), is an economically important viral disease worldwide [1,2]. PRRSV is a member of the Arteriviridae family in the order Nidovirales and is an enveloped virus with a 15 kb positive-strand RNA genome containing at least ten open reading frames (ORFs) [3,4,5,6]. PRRSV is divided into two genotypes: the European genotype (type 1) and the North American genotype (type 2). There is considerable sequence variability within both groups, and only about 50-60% nucleotide sequence identity between the two subtypes [7,8]. Recently, based on a newly proposed classification scheme, type 1 and type 2 PRRSV have been designated PRRSV-1 and PRRSV-2, respectively.
Antibodies and Reagents
Mouse anti-Flag M2 monoclonal antibody, mouse anti-c-Myc monoclonal antibody, anti-Flag affinity gel beads, anti-c-Myc affinity gel beads, and mouse IgG agarose were purchased from Sigma-Aldrich. PRRSV N protein antibody was purchased from GeneTex. Mouse anti-β-actin monoclonal antibody was purchased from GenScript. Rabbit anti-MAVS monoclonal antibody was purchased from Cell Signaling Technology (CST). The secondary antibodies conjugated to HRP were purchased from Jackson ImmunoResearch. Polyinosinic-polycytidylic acid (polyI:C) was purchased from Sigma-Aldrich. Recombinant human IFN-β was purchased from PeproTech. Lipofectamine 2000 transfection reagent and Lipofectamine RNAiMAX transfection reagent were purchased from Invitrogen (Carlsbad, CA, USA).
Plasmids
To generate IFI16-Myc, the cDNA of IFI16 from MARC-145 cells was amplified and cloned into the pCMV-Myc vector (Beyotime, Shanghai, China). The genes of RIG-I, MDA5, MAVS, TBK1, and IRF3 from MARC-145 cells were cloned into pCMV-Flag (Beyotime), creating RIG-I-Flag, MDA5-Flag, MAVS-Flag, TBK1-Flag, and IRF3-Flag, respectively. The expression plasmids for 3×Flag-tagged MAVS and the MAVS truncations were constructed by standard molecular cloning methods from cDNA templates. All constructs were confirmed by DNA sequencing. The IFN-β promoter reporter plasmid (p-284) was constructed as previously described [31]. All primers used are listed in Table 1.
Table 1. Primers used for expression plasmid construction.
Quantitative Real-Time PCR Analysis
Total RNA was extracted from cells using the RNeasy Mini Kit (Qiagen), and reverse transcription was performed with PrimeScript™ RT Master Mix (Perfect Real Time) (Takara, Kyoto, Japan) according to the manufacturer's protocol. The samples were then subjected to real-time PCR analysis with specific primers using a FastStart Universal SYBR Green Master kit (Roche, Basel, Switzerland) on a 7500 Fast RT-PCR system (Applied Biosystems, Foster City, CA, USA). Relative gene expression was evaluated using the 2^−ΔΔCT method, in which target gene expression was normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. Primers are shown in Table 2.
Dual Luciferase Reporter Assay
MARC-145 cells in 24-well plates were transfected with the indicated expression plasmids and IFN-β luciferase reporter plasmids using Lipofectamine 2000 transfection reagent (Invitrogen). At 36 hpt, the cells were transfected with polyI:C (10 µg/mL) (Sigma-Aldrich, St. Louis, MO, USA) for 12 h; the cells were then harvested, and the luciferase activity was measured using the dual-luciferase reporter assay system (Promega, Madison, WI, USA).
Co-Immunoprecipitation
HEK-293T cells were transfected with IFI16-Myc and MAVS-3×Flag, the MAVS truncation plasmids, or the control vector. At 48 hpt, the cells were lysed in IP lysis buffer (Sigma-Aldrich) containing protease inhibitor cocktail (Roche) for 1 h at 4 °C. The lysates were centrifuged at 12,000× g for 15 min at 4 °C, and the supernatants were pre-cleared with mouse IgG-agarose (Sigma-Aldrich) at 4 °C for 2 h. The pre-cleared supernatants were then incubated with anti-c-Myc or anti-Flag affinity gel beads (Sigma-Aldrich) for 4 h or overnight at 4 °C. The precipitates were washed five times with TBS buffer and analyzed by western blotting.
Virus Titration
Virus titers were determined according to a previous report [32]. Briefly, MARC-145 cells grown in 96-well plates were infected with ten-fold serial dilutions of the samples. After 1 h of incubation at 37 °C, the supernatants were replaced with fresh DMEM containing 2% FBS. Five days post infection, the cytopathic effect (CPE), characterized by clumping and shrinkage of cells, was clearly visible in MARC-145 cells, and the viral titers, expressed as 50% tissue culture infective dose (TCID₅₀), were calculated according to the method of Reed and Muench [33].
Statistical Analysis
Statistical graphs were created with GraphPad Prism software, and all data were analyzed using Student's t-tests and are presented as mean values ± standard deviations (SD) of at least three independent experiments. The asterisks in the figures indicate significant differences (*, p < 0.05; **, p < 0.01).
IFI16 Inhibits PRRSV-2 Replication
Since type I interferon and interferon-induced genes can efficiently inhibit PRRSV replication in MARC-145 cells [21,23], and it has been reported that IFI16 can be induced by type I interferon [34,35,36], we first confirmed whether IFI16 could be induced by type I interferon in MARC-145 cells and then explored whether IFI16 could inhibit PRRSV replication. MARC-145 cells were treated with IFN-β, and the expression of IFI16 was detected. Consistent with previous reports, IFN-β efficiently induced the expression of IFI16 (Figure 1A), and the expression of IFI16 was enhanced in an IFN-β dose-dependent manner and peaked at 24 h in MARC-145 cells (Figure 1B).
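Before continuing with the results, the two quantitative methods described above, relative expression by 2^−ΔΔCT (GAPDH-normalized) and TCID₅₀ titration by Reed-Muench, reduce to short calculations. A minimal sketch, with illustrative names and a hypothetical plate layout:

```python
def fold_change_ddct(ct_target_t, ct_gapdh_t, ct_target_c, ct_gapdh_c):
    """Relative expression by the 2^-ddCt method, GAPDH-normalized."""
    ddct = (ct_target_t - ct_gapdh_t) - (ct_target_c - ct_gapdh_c)
    return 2.0 ** -ddct

def tcid50_reed_muench(log10_dilutions, infected, total):
    """log10 TCID50 per inoculum volume by the Reed-Muench method.
    log10_dilutions: e.g. [-1, -2, ...] from least to most dilute;
    infected/total: CPE-positive wells and wells inoculated per dilution."""
    n = len(infected)
    # Reed-Muench pooling: positives accumulate toward lower dilutions,
    # negatives toward higher dilutions.
    cum_pos = [sum(infected[i:]) for i in range(n)]
    cum_neg = [sum(t - p for t, p in zip(total[:i + 1], infected[:i + 1]))
               for i in range(n)]
    pct = [100.0 * cp / (cp + cn) for cp, cn in zip(cum_pos, cum_neg)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            prop_dist = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            endpoint = (log10_dilutions[i]
                        + prop_dist * (log10_dilutions[i + 1] - log10_dilutions[i]))
            return -endpoint  # e.g. 3.5 means 10^3.5 TCID50 per inoculum
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# hypothetical plate: 8 wells per ten-fold dilution
print(tcid50_reed_muench([-1, -2, -3, -4, -5, -6],
                         [8, 8, 6, 2, 0, 0], [8] * 6))  # -> 3.5
```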
In addition, the transcription level of IFI16 was increased in cells infected with PRRSV-2 (Figure 1C,D). Next, to explore whether IFI16 could inhibit PRRSV-2 replication, MARC-145 cells were transfected with IFI16-Myc or the control vector for 24 h, and the cells were then infected with PRRSV-2 at a MOI of 0.1. The results in Figure 2 show that overexpression of IFI16 reduced the RNA levels of PRRSV (Figure 2A), the PRRSV titers (Figure 2B), and the expression levels of the PRRSV N protein (Figure 2C,D). To further examine the antiviral effects of IFI16 on PRRSV-2, we used a specific siRNA to down-regulate the expression of endogenous IFI16, which led to lower expression of IFI16 than in cells transfected with the control siRNA (Figure 3A). Compared with the non-targeting control siRNA (siNC)-transfected cells, the RNA levels of PRRSV (Figure 3B), viral titers (Figure 3C), and the expression of N protein (Figure 3D,E) were significantly increased in MARC-145 cells transfected with siIFI16. Together, these results indicate that IFI16 acts as an antiviral protein against PRRSV-2.
IFI16 Enhances the MAVS-Mediated Type I IFN Signaling
IFI16 plays an important role in Sendai virus (SeV)-mediated production of type I IFN [30,37], and in light of the fact that PRRSV only slightly induces the production of type I IFN [31,38], we wondered whether IFI16 contributes to the transcription of type I IFN during PRRSV infection. MARC-145 cells were transfected with IFI16-Myc or the control vector, and 24 h later, the cells were infected with PRRSV-2 BJ-4 at a MOI of 1.
The results showed that IFI16 could enhance PRRSV-induced production of IFN-β (Figure 4). Given that IFI16 plays a key role in signaling through RIG-I, and that among the different RNA sensors the DExD/H-box RNA helicases of the RLR family have been identified as essential sensors of RNA viruses [15,30,39,40], we explored the role of IFI16 in the RIG-I-mediated signaling pathway. First, we found that IFI16 could enhance polyI:C-induced IFN-β promoter activity and the transcriptional levels of IFN-β (Figure 5A,B). Next, to further investigate the mechanism by which IFI16 enhances type I IFN signaling, MARC-145 cells were co-transfected with plasmids encoding IFI16, components of the RIG-I pathway, and the IFN-β reporter plasmid. The results showed that IFI16 greatly enhanced the IFN-β promoter activity and the mRNA transcription levels of IFN-β, ISG15, and ISG56 induced by MAVS (Figure 5C-F). Subsequently, cells were co-transfected with MAVS, the IFN-β reporter plasmid, and different concentrations of IFI16; the results showed that IFI16 strongly increased MAVS-mediated IFN-β promoter activity and the mRNA transcription levels of IFN-β, ISG15, and ISG56 in a dose-dependent manner (Figure 5G-J). Since IFI16 could enhance MAVS-mediated type I IFN signaling, to identify whether MAVS is indispensable for IFI16 to regulate the production of type I IFN, MARC-145 cells were transfected with siMAVS, IFI16-Myc, and the IFN-β reporter plasmid, and 36 h later, the cells were stimulated with polyI:C. The results showed that IFI16 could not promote polyI:C-induced type I IFN production upon silencing of MAVS (Figure 5K,L). Collectively, these data indicate that IFI16 positively regulates MAVS-mediated type I IFN signaling.
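The statistical treatment behind these comparisons, described in the Methods (means ± SD of triplicates, Student's t-test, asterisks for p < 0.05 and p < 0.01), can be sketched as follows; the numbers in the example are hypothetical:

```python
import numpy as np
from scipy import stats

def compare_groups(treated, control):
    """Mean +/- SD and two-sample Student's t-test, with significance
    marked as in the figures (*: p < 0.05, **: p < 0.01)."""
    t_stat, p = stats.ttest_ind(treated, control)
    stars = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    return np.mean(treated), np.std(treated, ddof=1), p, stars

# hypothetical triplicate reporter activities (fold over vector control)
mean, sd, p, stars = compare_groups([3.1, 2.8, 3.4], [1.0, 1.1, 0.9])
print(f"{mean:.2f} +/- {sd:.2f} (p = {p:.3g}, {stars})")
```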
IFI16 Interacts with MAVS
IFI16 is mainly localized in the nucleus and partially in the cytoplasm and mitochondria [29], while MAVS localizes to the cytoplasm and mitochondria [41]. Moreover, IFI16 facilitates MAVS-mediated type I IFN signaling. To determine whether IFI16 could interact with MAVS, co-immunoprecipitation (co-IP) assays were performed in 293T cells, and the results showed that IFI16 specifically interacted with MAVS (Figure 6A,B). As a central adaptor of IFN signaling, MAVS contains an N-terminal CARD domain, a proline-rich domain, and a C-terminal transmembrane (TM) domain [42,43]. To determine the role of the MAVS domains in the binding with IFI16, a series of MAVS truncations was generated (Figure 6C). In the co-IP assays, IFI16 only interacted with full-length MAVS, but not with the truncations of MAVS (Figure 6D). These results indicate that both the CARD domain and the TM domain of MAVS are required for the binding with IFI16.
Antiviral Activity of IFI16 is Dependent on MAVS
Since IFI16 possesses significant antiviral activity and enhances MAVS-mediated type I IFN signaling, to investigate whether the antiviral activity of IFI16 is dependent on MAVS, we silenced MAVS using siRNAs; as shown in Figure 7, a marked reduction of MAVS expression was observed in the MAVS-silenced cells (Figure 7A,B). In these cells, IFI16 could no longer decrease the PRRSV RNA levels (Figure 7C).
Discussion
In this study, we revealed a novel mechanism by which IFI16 inhibits PRRSV-2 replication. Initially, we showed that IFI16 could efficiently repress PRRSV-2 replication. Subsequently, we demonstrated that IFI16 markedly enhances MAVS-mediated type I IFN signaling and binds directly to MAVS. Finally, the ability of IFI16 to antagonize the replication of PRRSV-2 is dependent on MAVS. Taken together, these findings suggest that IFI16 plays an important role in the response to PRRSV-2 in MARC-145 cells.
IFI16 is an intracellular DNA sensor that mediates the induction of IFN-β upon stimulation with single-stranded or double-stranded DNA sequences or DNA virus infection [29]. IFI16 has also been described as an antiviral restriction factor against DNA viruses, since it can directly interact with DNA virus components and block viral replication [44,45]. Besides, IFI16 plays a broader role in the regulation of the type I IFN response to DNA viruses in antiviral immunity [30]. However, the functions of IFI16 in regulating the replication of RNA viruses, including PRRSV, are largely unknown. In this study, we provide the first evidence that IFI16 can inhibit PRRSV-2 replication in MARC-145 cells. Additionally, IFI16 is widely expressed in endothelial and epithelial cells [46], and although the roles of IFI16 in porcine alveolar macrophages (PAMs) could not currently be investigated, this study may have implications for other RNA viruses.
It has been reported that IFI16 is essential for SeV-mediated production of type I IFN [30,37], and SeV-mediated induction of type I IFN comes primarily from RIG-I activation [39,40,47], which indicates that IFI16 may be involved in regulating the RIG-I signaling pathway. Here, we have shown that IFI16 interacts with MAVS and promotes MAVS-mediated type I IFN signaling, and that MAVS is essential for IFI16 to regulate type I IFN signaling.
MAVS is a central adaptor protein of the RIG-I signaling pathway, which indicates that IFI16 may positively regulate the RIG-I signaling pathway. Given that IFI16 transcriptionally regulates ISGs to enhance IFN responses to DNA viruses [29,30], this may be a common mechanism in response to DNA or RNA viruses, which needs further demonstration in other PRRSV strains and even other RNA viruses. Since PRRSV can be recognized by RIG-I and induces the production of type I IFN [31,38], and IFI16 positively regulates the RIG-I signaling pathway, we explored the mechanism by which IFI16 inhibits PRRSV-2 replication. We found that IFI16 facilitates the PRRSV-mediated induction of IFN-β and interacts with MAVS, and that the anti-PRRSV activity of IFI16 is dependent on MAVS, which indicates that the ability of IFI16 to inhibit PRRSV-2 replication may be related to its role in type I IFN signaling. Interestingly, IFI16 expression was increased after PRRSV infection, while possibly inhibiting virus replication; however, the underlying mechanisms need to be explored further. In fact, in contrast to the finding that IFI16 plays an important role in SeV-mediated production of type I IFN [30], several other groups have shown that neither knockout nor knockdown of IFI16 influences polyI:C- and SeV-mediated production of type I interferon [26,29,48]. The reason for these differing results may be the different cell lines used. Here, we present another piece of evidence that IFI16 may take part in the RIG-I signaling pathway. Previous studies have shown that STING can interact with RIG-I and MAVS, and that there is a marked reduction in the induction of IFN-β by RIG-I and MAVS in the absence of STING [49,50], while Brunette et al. have shown that the IFN response to poly(dA:dT) was reduced by >99% in STING−/−, MAVS−/− macrophages and DCs, but not in phagocytes deficient in STING alone [51], which indicates that IFI16 may mediate a crosstalk between the RIG-I-MAVS-type I interferon and STING-DNA-sensing pathways [52]. It is well known that IFI16 recruits STING to induce IFN-β transcription, so the role of IFI16 in the crosstalk between the RIG-I-MAVS RNA-sensing pathway and the IFI16-STING DNA-sensing pathway needs further study. In conclusion, our data show for the first time that IFI16 inhibits PRRSV-2 replication in a MAVS-dependent manner, which may have implications for other RNA viruses and contributes to understanding the antiviral mechanisms as well as virus-host interactions.
Wide angle Compton scattering on the proton: study of power suppressed corrections
We study the wide angle Compton scattering process on a proton within the soft collinear effective theory (SCET) framework. The main purpose of this work is to estimate the effect of certain power suppressed corrections. We consider all possible kinematical power corrections and also include the subleading amplitudes describing the scattering with nucleon helicity flip. Under certain assumptions we present a leading-order factorization formula for these amplitudes which includes the hard- and soft-spectator contributions. We apply the formalism and perform a phenomenological analysis of the cross section and asymmetries in wide angle Compton scattering on a proton. We assume that in the relevant kinematical region, where −t, −u > 2.5 GeV², the dominant contribution is provided by the soft-spectator mechanism. The hard coefficient functions of the corresponding SCET operators are taken in the leading-order approximation. The analysis of existing cross section data shows that the contribution of the helicity-flip amplitudes to this observable is quite small and comparable with other expected theoretical uncertainties. We also show predictions for double polarization observables for which experimental information exists.
Introduction
Wide angle Compton scattering (WACS) on a proton is one of the most basic processes within the broad class of hard exclusive reactions aimed at studying the partonic structure of the nucleon. The first data for the differential cross section of this process were obtained long ago [1]. New and more precise measurements were carried out at JLab [2]. Double polarization observables, obtained with a polarized photon beam and by measuring the polarization of the recoiling proton, were also measured at Jefferson Lab (JLab) [3]. New measurements of various observables at higher energies are planned at the new JLab 12 GeV facility, see e.g. [4]. The asymptotic limit of the WACS cross section, as predicted by QCD factorization, has been studied in many theoretical works [5,6,7,8]. It was found that the leading-twist contribution, described by the hard two-gluon exchange between three collinear quarks, predicts much smaller cross sections than are observed in experiments. One of the most promising explanations of this problem is that the kinematical region of the existing data is still far from the asymptotic limit where the hard two-gluon exchange mechanism is predicted to dominate. Hence one needs to develop an alternative theoretical approach which is more suitable for the kinematic range of existing experiments. Several phenomenological considerations, including the large value of the asymmetry K_LL [3], indicate that the dominant contribution in the relevant kinematic range can be provided by the so-called soft-overlap mechanism. In this case the underlying quark-photon scattering is described by the handbag diagram with one active quark, while the other spectator quarks are assumed to be soft. Various models have been considered in order to implement such a scattering picture within a theoretical framework: diquarks [13], GPD models [9,10,11,12], and constituent quarks [14]. An attempt to develop a systematic approach within the soft collinear effective theory (SCET) framework was discussed in Refs. [15,16]. This description can be considered as a natural extension of collinear factorization to the case with soft spectators.
In our previous works the factorization of the three leading-power amplitudes was studied and a phenomenological analysis was made. The three amplitudes describing Compton scattering which involve a nucleon helicity flip are power suppressed, and they were neglected in our previous analysis. In the present work we want to include these amplitudes in our description, together with all kinematical power corrections. For that purpose we discuss the factorization of the helicity-flip amplitudes, assuming that it can be described as a sum of hard- and soft-spectator contributions. We show that the corresponding soft contributions are described by the appropriate subleading, so-called SCET-I operators. As a first step towards a proof of the factorization, we restrict our attention only to the relevant operators which appear in the leading-order approximation in α_s. Assuming that such soft contributions are dominant, we estimate their possible numerical impact on the cross section and asymmetries. Our work is organized as follows. In Sec. 2 we briefly describe the kinematics, amplitudes, cross sections, and asymmetries. In Sec. 3 we discuss the factorization scheme for the subleading amplitudes and describe the suitable SCET-I operators and their matrix elements; we also compute the corresponding leading-order coefficient functions and provide the resulting expressions for the amplitudes. Sec. 4 is devoted to a phenomenological analysis, and in Sec. 5 we summarize our conclusions.
2 Kinematics and observables
In this paper we follow the notations introduced in Ref. [16]; for convenience, we briefly summarize the most important details. In our theoretical considerations we use the Breit frame, where the incoming and outgoing nucleons (with momenta p and p′, respectively) move along the z-axis and p_z = −p′_z. Using the auxiliary light-like vectors n = (1, 0, 0, −1) and n̄ = (1, 0, 0, 1), with (n·n̄) = 2, the light-cone expansions of the momenta read p = W n̄/2 + (m²/W) n/2 and p′ = W n/2 + (m²/W) n̄/2, where m is the nucleon mass and the convenient variable W is related to the momentum transfer t by −t = (W² − m²)²/W². The photon momentum reads q = (q·n) n̄/2 + (q·n̄) n/2, with light-cone components fixed by the kinematics in terms of s, W, and κ = m²/W². In the limit s ∼ −t ∼ −u ≫ m², these expressions can be simplified by neglecting the power-suppressed contributions, and W ≃ √−t. For the amplitude we borrow the parametrization from Ref. [17], Eq. (8), built from orthogonal tensor structures, where e denotes the electromagnetic charge of the proton and N(p) is the nucleon spinor. The scalar amplitudes T_i ≡ T_i(s, t) are functions of the Mandelstam variables; the analytical expressions for various observables can also be found in Ref. [17]. In our considerations it will be convenient to redefine the two helicity-flip amplitudes; the reason for this redefinition will be clarified later. The cross section is given by the standard expression, cf. Eq. (3.15a) in Ref. [17]. We also describe the asymmetries which will be considered in this work. We are interested in beam-target asymmetries with circular photon polarization (R, L); in the case of a longitudinally polarized nucleon target, the corresponding asymmetry is A_LL (defined in the c.m.s.). Two further asymmetries describe the correlations of the recoil polarization with the polarization of the photons (for more details see [17]).
3 Factorization of the subleading helicity-flip amplitudes T_1,3,5
In Ref. [16], the factorization of the helicity-conserving amplitudes T_2,4,6 was considered in the SCET framework [18,19,20,21,22,23].
The helicity-flip amplitudes are power suppressed and were neglected there. In the current paper we would like to extend the SCET analysis and also consider the subleading amplitudes $T_{1,3,5}$. Below we use the same notation for the SCET fields and charge invariant combinations as in Ref. [16]. The factorization of the helicity conserving amplitudes $T_{2,4,6}$ is described by the sum of the soft- and hard-spectator contributions. It is natural to expect that the same general structure also holds for the subleading amplitudes $T_{1,3,5}$. Therefore we assume that the T-product of the electromagnetic currents can be presented as in Eq. (22), where $O_I$ denotes the different SCET-I operators associated with the soft-spectator contribution, and the collinear operators describe the hard-spectator part. These operators are constructed from the collinear quark and gluon fields. The leading-twist operator is given by the three-quark operator $O^{(6)}_n = \chi^c_n \chi^c_n \chi^c_n$ and is of order $\lambda^6$ (twist-3 operator). In order to describe the helicity-flip amplitudes one has to include the subleading operators of order $\lambda^8$ (twist-4). Therefore the helicity-flip amplitudes are suppressed by at least a factor $\lambda^{14}$, while the leading power amplitudes are described by the operator combination $O^{(6)}_n * T * O^{(6)}_{\bar n} \sim \lambda^{12}$. The explicit calculation of the hard-spectator part in Eq. (22) is ill defined because of end-point singularities in the collinear convolution integrals; see for instance the calculation of the form factor $F_2$ in Ref. [24]. Only the sum of the soft- and hard-spectator contributions in Eq. (22) provides a well defined result. The soft-spectator contribution is described by the first term on the rhs of Eq. (22), where the operators $O_I$ are constructed from the hard-collinear fields in SCET-I. In Ref. [16] it was shown that for the leading power contribution this operator has an explicit hard-collinear form whose matrix element, Eq. (25), gives only the helicity conserving amplitudes, written in terms of the large components of the nucleon spinors (defined in Eq. (26)). Hence, in order to describe the soft-spectator contribution of the helicity-flip amplitudes we need the subleading operators. A similar situation also holds for the proton form factors $F_1$ and $F_2$; see e.g. Ref. [25]. The matrix element of the required subleading operator must describe the chiral-odd Dirac structures appearing in the amplitudes, Eq. (27), where $A$ and $B$ are some scalar SCET-I amplitudes. From Eq. (27) it follows that the SCET operator $O_I$ can only have an even number of transverse Lorentz indices. The simplest operator with the required structure can be built from the gluon fields, Eq. (28). The SCET matrix element of this operator can be parametrized in terms of corresponding scalar amplitudes. In SCET-II, the contribution of each collinear sector yields a soft-collinear operator at least of order $\lambda^7$, Eq. (32), where $J_n$ is the hard-collinear kernel (jet function) and the asterisks denote the appropriate convolutions. The T-product in Eq. (32) can be illustrated with the help of the Feynman diagrams in Fig. 1. A similar T-product also describes the second collinear sector. Notice that the collinear operators in this case are the leading-order operators. Nevertheless, the helicity-flip structure of the amplitude is provided by the chiral-odd three-quark soft correlator. The total contribution associated with the operator (28) is of order $\lambda^{14}$, as required. However, the hard coefficient function of the gluon operator (28) is subleading in $\alpha_s$. In our further analysis we restrict our consideration to leading-order accuracy in the hard coupling $\alpha_s$. Therefore we neglect the contribution of the pure gluonic operator (28).
The other suitable operators $O_I$ are of order $\lambda^3$ and can be built from the quark-gluon combinations $\bar\chi_n(0)\gamma^\alpha_\perp A^{n}_{\perp\beta}(\lambda n)\chi_{\bar n}(0)$ and $\bar\chi_n(0)\gamma^\alpha_\perp A^{\bar n}_{\perp\beta}(\lambda\bar n)\chi_{\bar n}(0)$. We find two relevant scalar operators, Eqs. (33) and (34), where the index $q$ denotes the quark flavor. Higher order subleading operators of this type can be constructed by adding the gluon fields $A_\perp \sim \lambda$ or $(A_n\cdot n) \sim \lambda^2$. Such operators will be suppressed as $O(\lambda^5)$. We find that in SCET-II these operators provide power suppressed contributions $\sim O(\lambda^{16})$ and can therefore be neglected. We shall not provide a proof of this statement in the present work and accept it as a plausible working assumption. Then, at leading order in the hard coupling $\alpha_s$, the power suppressed helicity-flip contribution is described only by the two operators in Eqs. (33) and (34). In order to show the relevance of the SCET-I operators let us demonstrate the mixing of the soft-spectator contributions described by the operators (33) and (34) with the hard-spectator configuration. Such mixing is provided by the appropriate hard-collinear T-products which describe the matching onto the SCET-II soft-collinear operators. In order to simplify this discussion we consider the contractions of the hard-collinear fields in each hard-collinear sector separately (the collinear and soft fields are considered as external). The total soft-collinear operator is given by the suitable soft-collinear combinations from each hard-collinear sector. The T-product of the hard-collinear fields $\chi_{n,\bar n}$ can be interpreted as a transition of the hard-collinear quark and two soft spectator quarks into three collinear quarks, or vice versa, schematically as in Eq. (36). A combination of such T-products yields the soft-collinear operator describing the soft-spectator contribution for the leading amplitudes $T_{2,4,6}$; see details in Ref. [16]. The configurations with the subleading collinear operators can be generated from the hard-collinear sub-operator $\bar\chi_n \slashed{A}^{n}_{\perp}$ in Eq. (36). For instance, the matching onto a twist-4 collinear operator $O^{(8)}_n \sim \xi^c_n \xi^c_n \xi^c_n A^c_{n\perp}$ can be described by the T-products in Eqs. (38) and (39). The diagrams described by these T-products are shown in Fig. 1 b) and c), respectively. We also accept that the collinear fields which appear in the SCET-I operators in Eqs. (38) and (39) are generated by the substitution $\phi_{hc} \to \phi_{hc} + \phi_c$ when performing the matching onto SCET-II operators. Combining the results of the two hard-collinear T-products, one obtains a soft-collinear operator which consists of the same collinear operators as the appropriate hard-spectator contribution $O_n * T * O_{\bar n}$. Here we will not study the structure of all possible collinear contributions. We expect that the two presented examples clearly illustrate the presence of the soft-spectator contributions in Eq. (22). In the following discussion we assume that at leading order in $\alpha_s$ the soft-spectator contribution is described only by the matrix elements of the two operators (33) and (34). Let us consider the SCET matrix elements of these operators. They can be described as in Eqs. (41) and (42), where on the lhs we define the required flavor combinations. The dimensionless amplitudes $G$ and $\tilde G$ also depend on the factorization scale $\mu_F$, which is not shown for simplicity. This scale separates contributions from the hard and hard-collinear regions. The SCET-I amplitudes describe the dynamics associated with the hard-collinear scale $\sim\sqrt{\Lambda Q}$ and the soft scale $\sim\Lambda$. Therefore these amplitudes are functions of the momentum transfer.
The fraction $\tau$ can be interpreted as the fraction of the collinear momentum carried by the hard-collinear transverse gluon. In order to obtain a formal factorization formula for the amplitudes $T_{1,3,5}$ one has to take the matrix element of Eq. (22) and use for the soft-spectator contributions on the rhs the matrix elements defined in Eqs. (25), (41) and (42). On the other hand, the nucleon spinors in the parametrization (8) appearing on the lhs must be rewritten in terms of the large components defined in (26). For illustration let us consider the calculation of the amplitudes $T_{1,2}$. These amplitudes can easily be singled out using the contraction in Eq. (43). The lhs can be rewritten as in Eq. (44), using the large spinor components. The rhs of (43) can be written as in Eq. (48). Here $C_{1,2}$ and $H_{1,2}$ denote the momentum space hard coefficient functions in the soft- and hard-spectator contributions, respectively. The asterisks denote the convolution integrals with respect to the collinear fractions; the hard-spectator contributions are shown schematically, and $\Psi_{tw3}$, $\Psi_{tw4}$ denote the nucleon distribution amplitudes of twist-3 and twist-4, respectively. Comparing Eqs. (44) and (48) one obtains Eqs. (49) and (50). Using Eq. (11) one also finds Eq. (51). This clarifies the substitution introduced in Eq. (11): the redefinition removes the kinematical part associated with $T_2$ from the expression for $T_1$ in Eq. (50). The soft-spectator contribution of the amplitude $\bar T_1$ is defined only by the subleading SCET amplitude $G(\tau,t)$. We also keep the power suppressed factors $(1\pm\kappa)$ in Eqs. (49)-(51) as kinematical power corrections. Similar calculations give the expressions for the remaining amplitudes. The hard coefficient functions $C_{2,4,6}$ can be found in Ref. [16]. The subleading coefficient functions $C_{1,3,5}$ can be computed from the diagrams in Fig. 2, where $\tau$ is the gluon fraction, $0<\tau<1$, and the hat denotes the partonic (massless) Mandelstam variables related to the scattering angle $\theta$ in the c.m.s. as $\hat t = -\hat s\,(1-\cos\theta)/2$ and $\hat u = -\hat s\,(1+\cos\theta)/2$. To calculate the observables of Eqs. (17) and (18) the following combinations of amplitudes are needed, with a kinematical factor $\Delta(s,t)$ built from $t$, $\hat s$ and $\hat u$. The soft- and hard-spectator contributions in the expressions for the amplitudes $T_i$, Eqs. (49)-(55), have end-point singularities which cancel in their sum. Phenomenology The estimates based on the hard-spectator scattering mechanism predict an order of magnitude smaller value for the WACS cross section; see e.g. Refs. [6,7,8]. Therefore we assume that the soft-spectator contributions dominate over the hard-spectator ones in the relevant kinematical region. It is convenient to introduce the function $R(s,t)$ as in Eq. (63). In the kinematical region where the soft-spectator contribution dominates, the introduced ratio $R(s,t)$ must be almost s-independent, i.e. $R(s,t) \simeq R(t)$, because the s-dependent term in Eq. (63) is given only by the hard-spectator contribution. The expressions for the other helicity-conserving amplitudes can also be defined in terms of this ratio up to small next-to-next-to-leading order corrections [16], Eqs. (65) and (66). Similar expressions for the amplitudes $T_{2,4,6}$ have already been considered in Refs. [15,16], but without the power suppressed factor $-t/(4m^2-t)$ in Eq. (66). This factor is part of the full kinematical power correction which was neglected in the previous work. In deriving the formulae (65) and (66) we use the fact that all three amplitudes $T_{2,4,6}$ depend on the same t-dependent SCET amplitude $\mathcal{F}(t)$ and factorize multiplicatively.
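To make the partonic kinematics concrete, the following minimal sketch (our illustration, not part of the original analysis) evaluates the massless Mandelstam relations $\hat t = -\hat s(1-\cos\theta)/2$ and $\hat u = -\hat s(1+\cos\theta)/2$ quoted above; the numerical value of $\hat s$ is an illustrative assumption.

```python
import math

def partonic_mandelstam(s_hat, theta):
    # Massless 2 -> 2 kinematics: t_hat + u_hat = -s_hat.
    t_hat = -0.5 * s_hat * (1.0 - math.cos(theta))
    u_hat = -0.5 * s_hat * (1.0 + math.cos(theta))
    return t_hat, u_hat

# Illustrative check at s_hat = 8.9 GeV^2 and theta = 90 degrees:
t_hat, u_hat = partonic_mandelstam(8.9, math.pi / 2)
print(f"t_hat = {t_hat:.2f} GeV^2, u_hat = {u_hat:.2f} GeV^2")  # -4.45, -4.45
```

At 90 degrees the two variables coincide, which is why the mid-angle region is the most symmetric place to test the scaling assumptions discussed below.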
For the helicity-flip amplitudes the situation is more complicated, because in this case one deals with convolution integrals of the hard coefficient functions with two different SCET amplitudes. This leads to a more complicated structure of the power suppressed contributions. In order to proceed further, we introduce the three amplitudes $G_0(s,t)$, $G_1(s,t)$ and $\tilde G_1(s,t)$. Analogously to $R(s,t)$, these new functions are defined using the expressions for the amplitudes obtained in Eqs. (54), (60) and (61). Assuming the dominance of the soft-spectator part, we can again expect that the s-dependence of these functions is weak. Under this assumption we obtain Eqs. (71)-(73). Substituting the obtained expressions for the amplitudes $T_i$ into Eq. (13) for $W_{00}$, we obtain Eq. (74). The rhs of Eq. (74) depends on four unknown t-dependent functions: $R$, $G_{0,1}$ and $\tilde G_1$. Three of these functions are related to the helicity-flip amplitudes. One can expect that at large $-t$ these functions are smaller than $R$. For instance, in the case of the nucleon form factors, data at large momentum transfer show that $G_E/G_M \ll 1$. Let us therefore assume that the helicity-flip amplitudes $G_{0,1}$ and $\tilde G_1$ in WACS are also smaller than $R$. This assumption is also plausible because the amplitudes $G_{0,1}$ are defined by similar subleading operators as the form factor $G_E$ within the SCET formalism; see, e.g., [25]. Neglecting the helicity-flip contributions in Eq. (74) ($G_0 \approx G_1 \approx \tilde G_1 \approx 0$), one can use the cross section data in order to extract the ratio $R$ and to check the scaling behavior implied by Eq. (64). We recall that the leading-order coefficient functions $C_{2,4,6}$ read as in Eq. (75) [16]. For the scattering angle $\theta$ in Eq. (75) we use a substitution which also includes the power suppressed terms, considered as part of the kinematical corrections. The obtained results for $R$ are shown in Fig. 3. The left plot in Fig. 3 shows the value of $R$ as a function of the momentum transfer for $-t \ge 2.5$ GeV². As assumed above, see Eq. (64), the extracted values of $R$ are expected to show only a small sensitivity to $s$ when the soft-spectator mechanism dominates. From Fig. 3 we see that this approximate scaling behavior is observed in the region where $-u \ge 2.5$ GeV². Hence we can adopt this value as a phenomenological lower limit of applicability of the described approach. For smaller values of $-u$ the extracted values of $R$ (shown by the open squares) already demonstrate a clear sensitivity to $s$. Thus one can observe that for $-u = 1.3$ GeV² ($-t = 3.7$ GeV²) the obtained value of $R$ is about a factor 2 larger than the scaling curve. This observation clearly demonstrates that the given approach cannot describe the cross section data at small values of $-u$. The solid line in both plots in Fig. 3 corresponds to the fit of the points with $-t,-u \ge 2.5$ GeV² by a simple empirical ansatz with two free parameters $\Lambda$ and $\alpha$. For their values we obtain $\Lambda = 1.17 \pm 0.01$ GeV and $\alpha = 2.09 \pm 0.06$. The shaded area in Fig. 3 shows the confidence interval with CL = 99%. On the right plot in Fig. 3 we show the effect of the kinematical power suppressed contributions. The empty triangles show the values of $R$ obtained without kinematical power corrections, i.e. with $m = 0$. The difference between the values of $R$ extracted with and without the power suppressed contributions is about 30% at the lower value $-t \approx 2.5$ GeV². Let us note that the values of $R$ obtained in this work are somewhat larger than the ones obtained in Refs. [15,16].
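As an illustration of the fit just described, the sketch below performs such a two-parameter fit with scipy. Both the power-law form $R(t) = (\Lambda^2/(-t))^{\alpha}$ and the data points are assumptions for illustration only; the text above states only that the ansatz has the two free parameters $\Lambda$ and $\alpha$.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (-t, R) points; the actual values are extracted from the
# cross section data in the region -t, -u >= 2.5 GeV^2.
minus_t = np.array([2.5, 3.0, 3.7, 4.3, 5.0])
r_vals = np.array([0.285, 0.194, 0.125, 0.092, 0.067])

def ansatz(mt, lam, alpha):
    # Assumed power-law form R(-t) = (lam^2 / -t)^alpha.
    return (lam**2 / mt) ** alpha

(lam, alpha), cov = curve_fit(ansatz, minus_t, r_vals, p0=[1.0, 2.0])
print(f"Lambda = {lam:.2f} GeV, alpha = {alpha:.2f}")
```

With the toy points above, the fit returns values close to the quoted $\Lambda \approx 1.17$ GeV and $\alpha \approx 2.09$, which is only a consistency check of the assumed functional form.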
The difference with respect to Refs. [15,16] is explained by the incomplete description of the kinematical power corrections in the previous works. The consistent results for the ratio $R$, extracted in the present framework, indicate that the assumption about the relative smallness of the helicity-flip amplitudes is probably correct. We next investigate whether one can obtain an estimate of the helicity-flip amplitudes from the cross section data. For this purpose it is convenient to introduce the ratios $r_0$, $r_1$ and $\tilde r_1$ defined in Eq. (79). In the following discussion we assume that numerically these three quantities are of the same order and small: $|r_0| \sim |r_1| \sim |\tilde r_1| < 1$. In order to see the relevance of the different subleading contributions, let us consider the ratio of the cross sections at $s = 8.9$ GeV² and $-t = 2.5$ GeV², which can be expressed as in Eq. (80). One can see that the largest numerical impact is provided by the contribution proportional to $r_0$; the other two contributions in Eq. (80) have very small coefficients and therefore their numerical impact is negligible. This observation also remains valid for other values of $s$ in the region $-t,-u \ge 2.5$ GeV². In Fig. 4 we show the cross section ratio of Eq. (80) at $s = 8.9$ GeV² as a function of the momentum transfer. For simplicity we take the same values for all ratios, i.e. $r_0 = r_1 = \tilde r_1 = r$. We can see that the corrections from the helicity-flip contributions are largest at small $-t$ and smallest at the boundary where $-u \simeq 2.5$ GeV². For illustration we also show the backward region where $-u \le 2.5$ GeV² and our description is not applicable. One can see that in this region the contribution of the subleading amplitudes grows and becomes more and more important. This can also be understood from Eq. (74): the kinematical coefficient in front of $R^2$ disappears in the backward region because $(m^4 - su) \to 0$. Due to the relative smallness of the contribution proportional to $R^2$ in the cross section at small $-u$, the helicity-flip terms become more important. The relative smallness of the contributions with the unknown $r_1$ and $\tilde r_1$ allows one to exclude them from the consideration and perform an analysis of the cross section data in order to extract the values of $R(t)$ and to constrain $G_0(t)$. Each data point provides an inequality $d\sigma_{min} \le \alpha R^2 + \beta G_0^2 \le d\sigma_{max}$, where $d\sigma_{max,min} = d\sigma \pm \Delta$ are the maximal and minimal experimental values of the cross section and $\alpha$, $\beta$ are known coefficients. In order to find the restrictions on the two unknown quantities $R^2$ and $G_0^2$, one needs at least two data points at the same $t$ and different $s$. The largest effect from $G_0$ is expected at small momentum transfer, see Fig. 4. Therefore we consider three data points at $-t \simeq 2.5$ GeV² and $s = 6.8,\,8.9,\,10.9$ GeV², which provide three pairs of inequalities. Combining the constraints from each set of inequalities we obtain the following restrictions: $R = 0.273$-$0.279$ and $G_0 = 0.0$-$0.045$. The obtained value of $R$ is within the confidence interval shown in Fig. 3. This result allows us to estimate the upper bound for the ratio $r_0$: $|r_0(-t = 2.5\ \text{GeV}^2)| \le 0.16$ (Eq. (81)). From Fig. 4 it is also seen that in this case the contribution to the cross section provided by $r_0$ is below 10%. Such an uncertainty is comparable with other theoretical uncertainties, such as next-to-leading order corrections or the hard-spectator corrections. Hence the result (81) must be understood as a qualitative estimate. Let us now study the effect of the subleading amplitudes in the asymmetries described in Sec. 2.
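Before turning to the asymmetries, a numerical sketch of the constraint extraction just described: each data point bounds the combination $\alpha R^2 + \beta G_0^2$, and scanning a grid keeps only the $(R^2, G_0^2)$ pairs allowed by all points. The coefficients and error bands below are placeholders, not the actual experimental numbers.

```python
import numpy as np

def allowed_region(points, r2_max=0.2, g2_max=0.01, n=400):
    """points: (alpha, beta, ds_min, ds_max) per data point (same t,
    different s). Returns the (R^2, G0^2) grid pairs satisfying
    ds_min <= alpha*R^2 + beta*G0^2 <= ds_max for every point."""
    r2 = np.linspace(0.0, r2_max, n)
    g2 = np.linspace(0.0, g2_max, n)
    rr, gg = np.meshgrid(r2, g2)
    ok = np.ones_like(rr, dtype=bool)
    for a, b, lo, hi in points:
        pred = a * rr + b * gg
        ok &= (pred >= lo) & (pred <= hi)
    return rr[ok], gg[ok]

# Placeholder coefficients/bounds for three s values at the same -t:
pts = [(1.00, 0.60, 0.072, 0.082),
       (0.80, 0.55, 0.058, 0.066),
       (0.65, 0.50, 0.047, 0.054)]
r2_ok, g2_ok = allowed_region(pts)
print(np.sqrt(r2_ok.min()), np.sqrt(r2_ok.max()))  # allowed band for R
```

The square roots of the surviving grid values give the bands for $R$ and $G_0$, which is the logic behind the quoted ranges $R = 0.273$-$0.279$ and $G_0 = 0.0$-$0.045$.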
The asymmetries $K_{LL}$ and $K_{LS}$ have already been measured at JLab in two experiments: for large $-t$ but relatively small $-u = 1.1$ GeV² [3], and in a kinematical region more appropriate for the present work [29] (the latter analysis is not yet completed). One more experiment has recently been suggested in order to measure the initial state helicity correlation $A_{LL}$ in WACS [31]. As we concluded above, the presented approach is not applicable in the region of small $-u < 2.5$ GeV². Hence we cannot use it to describe the asymmetries presented in Ref. [3]. Therefore, despite the numerical results obtained in Ref. [16], an agreement with $K_{LL}$ should only be interpreted as qualitative. However, the results obtained here can be used for estimates of the asymmetries in the other experiments with more suitable kinematics [29,31]; see Table 1, which lists the planned settings (e.g. $K_{LL}$ at $s = 9$ GeV², Ref. [29], as a function of the angle $\theta$). The asymmetries of Eqs. (17) and (18) can be presented as in Eqs. (82)-(84). Using Eqs. (71), (72) and (73) we also obtain the corresponding explicit expressions. From these expressions one can easily observe that the contribution proportional to $r_0$ appears in the numerator of all asymmetries, and therefore one can expect these observables to be more sensitive to this subleading amplitude. Evaluating these asymmetries at $-t = 2.5$ GeV², we obtain the numerical expressions of Eqs. (86)-(88). We again observe that the contributions proportional to $r_1$ and $\tilde r_1$ are practically negligible. In this case all three asymmetries depend on the same unknown quantity $r_0$ at fixed momentum transfer. Assuming that $r_0$ is restricted as in (81), we find the estimate of Eq. (89), where the central number is computed at $r_0 = 0$. The uncertainty in (89) is smaller than the estimated statistical accuracy $\pm 0.06$ in this experiment [29]. It is natural to expect that $K_{LS}$ is more sensitive to the value of $r_0$, because in this observable the helicity-flip contributions are not power suppressed. Using (87) we find Eq. (90), yielding an uncertainty of around 16%, which is smaller than the expected statistical accuracy $\pm 0.05$ [29]; for a preliminary result, see Ref. [30]. If we assume that in the leading-order approximation the combination $(T_2+T_4)T_5^*$ is small, see Eq. (83), then the analytical expressions for the two asymmetries $K_{LL}$ and $A_{LL}$ only differ by the combination $(T_3+T_1)T_5^* \sim r_1\tilde r_1$ in Eq. (85). But, as one can observe from Eqs. (86) and (88), the corresponding contribution is numerically small, and therefore one obtains $K_{LL} \simeq A_{LL}$. The uncertainty provided by the ratio $r_0$ in $A_{LL}$ in Eq. (88) is around 11%, which is again smaller than the statistical accuracy discussed in Ref. [31]. In order to see the effect of the kinematical power corrections, we plot in Fig. 5 the asymmetry $K_{LL}$ as a function of the scattering angle for two different values of the energy, $s = 8$ and $9$ GeV², with and without the power suppressed contributions. All helicity-flip contributions are taken to be zero, $r_0 = r_1 = \tilde r_1 = 0$. The red line in Fig. 5 denotes the asymmetry without the power corrections, which reduces to the Klein-Nishina result for a pointlike massless target, $K_{LL}^{KN} = \left(4-(1+\cos\theta)^2\right)/\left(4+(1+\cos\theta)^2\right)$. We only consider the angles for which $-t,-u \ge 2.5$ GeV². In this region the power corrections do not change the angular dependence but reduce the value of the massless asymmetry by about 25%. One can also observe that the values of $K_{LL}$ at both values of $s$ are almost the same. This prediction can be checked by measuring the asymmetry $A_{LL}$ in the new experiment [31] at the same angles as $K_{LL}$ measured in [29].
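The Klein-Nishina limit quoted above is easy to evaluate; the minimal sketch below (ours, not from the original analysis) tabulates $K_{LL}^{KN}$ at a few c.m.s. angles, which serves as the reference curve that the power corrections reduce by roughly 25%.

```python
import math

def kn_kll(theta):
    # K_LL^KN = (4 - (1 + cos th)^2) / (4 + (1 + cos th)^2) for a
    # pointlike massless target (the red reference curve in Fig. 5).
    c = 1.0 + math.cos(theta)
    return (4.0 - c * c) / (4.0 + c * c)

for deg in (70, 90, 110, 130):
    print(f"theta = {deg:3d} deg -> K_LL^KN = {kn_kll(math.radians(deg)):.3f}")
```

For example, at 90 degrees the formula gives $K_{LL}^{KN} = 0.6$, so the power-corrected prediction in this region would lie around 0.45.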
Discussion In this work we presented a phenomenological analysis of the cross section and asymmetries of wide angle Compton scattering, in which we accounted for different power suppressed contributions. For the first time we include in the analysis the subleading helicity-flip amplitudes using the SCET framework. We assume that the dominant contribution to these amplitudes is provided by the soft-overlap configurations described by the matrix elements of SCET-I operators. We only consider the operators which appear in the leading-order approximation. The corresponding hard coefficient functions were also computed. Within this formalism we estimated the effect due to the power suppressed corrections in different WACS observables. An analysis of existing cross section data allows us to conclude that the developed description can work reasonably well in the region where $-t,-u > 2.5$ GeV². The contribution from the helicity-flip amplitudes to the cross section is smaller than 10%. We also found that the corresponding effects due to power corrections in the different asymmetries are relatively small and that, to good accuracy, $A_{LL} = K_{LL}$ in the relevant kinematical region. Acknowledgements This work is supported by the Helmholtz Institute Mainz. N.K. is also grateful to the German-US exchange program on Hadron Physics for financial support and to the staff of Old Dominion University for the warm hospitality during his visit.
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings, where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation. Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time. For these languages there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data, or possibly not even small amounts of monolingual text data. Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003). Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013). In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language. Many historically unwritten languages borrow from highly resourced languages, and it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were written, as many of them are beginning to be written in social media. We thus expect the general method to be applicable to multiple historically unwritten languages. In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words. Moroccan Darija is a historically unwritten dialect of Arabic spoken by millions, but lacking in standardization and linguistic resources (Tratz et al., 2013). Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world. By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of the Moroccan Darija vocabulary for which no resources currently exist. For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with the construction of Moroccan Darija-English Machine Translation (MT) systems. The rest of this paper is structured as follows. Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes. Related Work Translation lexicons are a core resource used for multilingual processing of languages. Manual creation of translation lexicons by lexicographers is time-consuming and expensive.
There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015). For a relatively small number of these languages there are extensive resources available that have been manually created. It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between sister languages can help rapidly create translation lexicons for lower-resourced languages. For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos. This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates. The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001). Inducing translations via loanwords has also been specifically targeted in prior work. While some of these methods do not require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled, and sometimes have specific requirements on that monolingual text data, such as having text coming from the same time period for each of the languages being modeled. For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data. Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages. Often the code-switching occurs within words, whereby the base is borrowed and the affixes are not, analogous to the multi-language categories "V" and "N" from (Mericli and Bloodgood, 2012). The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above. In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant. Method Our method is to take word pronunciations from the donor language and convert them to how they would be rendered in the borrowing language if they were borrowed. These are our candidate loanwords. There are three possible cases for a given generated candidate loanword string: true match, where the string occurs in the borrowing language and is a loanword from the donor language; false match, where the string occurs in the borrowing language by coincidence but is not a loanword from the donor language; and no match, where the string does not occur in the borrowing language. For the case of inducing a Moroccan Darija-English translation lexicon via French, we start with a French-English bilingual dictionary, take all the French pronunciations in IPA (International Phonetic Alphabet), and convert them to how they would be rendered in Arabic script.
For this we created a multiple-step transliteration process:
Step 1: Break the pronunciation into syllables.
Step 2: Convert each IPA syllable to a string in modified Buckwalter transliteration, which supports a one-to-one mapping to Arabic script. (The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language); for more information see (Habash, 2010).)
Step 3: Convert each syllable's string in modified Buckwalter transliteration to Arabic script.
Step 4: Merge the resulting Arabic script strings for each syllable to generate a candidate loanword string.
For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.' character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1. For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration. This is itself a multi-step process (see the next paragraph for details). In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable. In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string. The multi-step process that takes place in step 2 is:
Step 2.1: Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.
Step 2.2: Perform the bulk of the conversion by using a table of mappings from IPA characters to modified Buckwalter characters, such as 'a'→'a', 'k'→'k', 'y:'→'iy', etc., supplied by a language expert.
Step 2.3: Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.
The entire conversion process is illustrated in Figure 1 for the French word raconteur. At the top of the figure is the IPA from the French dictionary entry with syllables marked. At the next level, step 1 (syllabification) has been completed. Step 2.1 does not apply to any of the syllables in this word, since no minor vowel adjustments are applicable, so at the next level each syllable is shown after step 2.2 has been completed. The next level shows the syllables after step 2.3 has been completed, the level after that shows the result of step 3, and at the end the strings are merged to form the candidate loanword. (Figure 1: Example of the French-to-Arabic process for the French word raconteur. Note that in the final step the word is in order of Unicode codepoints; application software capable of processing Arabic will render it as a proper Arabic string in right-to-left order with proper character joining adjustments.) Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.org/enwiktionary. From this dump we extracted all the French words, their pronunciations in IPA (https://en.wikipedia.org/wiki/International_Phonetic_Alphabet), and their English definitions.
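A minimal sketch of the four-step candidate generation pipeline follows. The mapping tables below are toy stand-ins for the expert-supplied rules (the real tables map IPA symbols to modified Buckwalter and then one-to-one to Arabic script), and the sample IPA-like input is hypothetical; steps 2.1 and 2.3 (context-dependent adjustments) are omitted.

```python
# Toy stand-ins for the expert-supplied tables; not the actual rules.
IPA_TO_MBW = {"r": "r", "a": "a", "k": "k", "o": "u", "n": "n",
              "t": "t", "u": "w"}
MBW_TO_ARABIC = {"r": "\u0631", "a": "\u064e", "k": "\u0643",
                 "u": "\u064f", "n": "\u0646", "t": "\u062a",
                 "w": "\u0648"}

def syllable_to_mbw(syllable):
    # Step 2.2: symbol-by-symbol table lookup; unknown symbols are skipped.
    return "".join(IPA_TO_MBW.get(ch, "") for ch in syllable)

def candidate_loanword(ipa):
    syllables = ipa.split(".")                     # step 1: '.' marks syllables
    mbw = [syllable_to_mbw(s) for s in syllables]  # step 2
    arabic = ["".join(MBW_TO_ARABIC.get(c, "") for c in m)
              for m in mbw]                        # step 3: one-to-one mapping
    return "".join(arabic)                         # step 4: merge syllables

print(candidate_loanword("ra.kon.tur"))  # hypothetical IPA-like input
```

The output string is in Unicode codepoint order; Arabic-capable software would render it right-to-left with proper character joining, as noted in the figure caption above.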
Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings. The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.com. The crawled user comments contain Moroccan Darija in heavily code-switched environments. While this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time. The data we used is consistent with well-known code-switching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005). The total number of tokens in our Hespress corpus is 18,781,041. We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus; moreover, these candidates occur more than a million (1,169,087) times in the corpus. One example is the candidate derived from the French word bourgeoisie. We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources. To test this hypothesis, we ran an MT experiment as follows. First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English. In total, 273 sentences were translated. This served as our test set. We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013. We trained the system using Moses 3.0 with default parameters. This baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA. We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data. This time the BLEU score increased to 8.11, a gain of 0.63 BLEU points. Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time, often in heavily code-switched text with traditionally written languages. The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges. Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages. We presented a method for translation lexicon induction via loanwords for alleviating the OOV challenges in these settings, where the borrowing language has extremely limited amounts of resources available, in many cases not even the substantial amounts of monolingual data that are typically exploited by previous cognates and loanword detection methods to induce translation lexicons. This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method; in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by 0.63 BLEU points.
Integrated analysis of DNA methylation and microRNA regulation of the lung adenocarcinoma transcriptome Lung adenocarcinoma, as a common type of non-small cell lung cancer (40%), poses a significant threat to public health worldwide. The present study aimed to determine the transcriptional regulatory mechanisms in lung adenocarcinoma. Illumina sequence data GSE37764, including expression profiling, methylation profiling and non-coding RNA profiling of 6 never-smoker Korean female patients with non-small cell lung adenocarcinoma, were obtained from the Gene Expression Omnibus (GEO) database. Differentially methylated genes, differentially expressed genes (DEGs) and differentially expressed microRNAs (miRNAs) between normal and tumor tissues of the same patients were screened with tools in R. Functional enrichment analysis of a variety of differential genes was performed. DEG-specific methylation and transcription factors (TFs) were analyzed with ENCODE ChIP-seq. An integrated regulatory network of DEGs, TFs and miRNAs was constructed. Several overlapping DEGs, such as v-ets avian erythroblastosis virus E26 oncogene homolog (ERG), were screened. DEGs were centrally modified by histones through trimethylation of lysine 27 on histone H3 (H3K27me3) and di-acetylation of lysine 12 or 20 on histone H2B (H2BK12/20AC). Upstream TFs of DEGs were enriched in different ChIP-seq clusters, such as glucocorticoid receptors (GRs). Two miRNAs (miR-126-3p and miR-30c-2-3p) and three TFs, including homeobox A5 (HOXA5), Meis homeobox 1 (MEIS1) and T-box 5 (TBX5), played important roles in the integrated regulatory network conjointly. These DEGs, and DEG-related histone modifications, TFs and miRNAs, may be important in the pathogenesis of lung adenocarcinoma. The present results may indicate directions for the next step in the further elucidation and targeted prevention of lung adenocarcinoma. Introduction Lung cancer is presently the leading cause of global cancer-related death, with increasing prevalence and mortality. Smoking is the predominant risk factor for lung cancer; however, in East Asia ~30% of patients suffering from lung cancer are never-smokers (1,2), and non-smoking-related lung cancer can also occur in current and former smokers (3). Unfortunately, lung cancer remains unsolved in regards to prevention, diagnosis and treatment. Apart from small cell lung cancer (SCLC), which accounts for 10-15% of lung cancer cases, non-small cell lung cancer (NSCLC) represents ~85-90% of overall lung cancer cases (4). NSCLC is subdivided into three histologic types: adenocarcinoma, squamous cell carcinoma and large cell undifferentiated carcinoma. Lung adenocarcinoma accounts for almost 40% of NSCLC. Due to the relatively high incidence of lung adenocarcinoma, much research has been conducted to elucidate its nature and mechanisms. Previous studies have found various genes related to lung adenocarcinoma. Su et al (5) found that a higher level of cyclooxygenase-2 decreased the survival rate of patients through several mechanisms, such as a correspondingly higher level of vascular endothelial growth factor that stimulated the growth and migration of cancer cells (6), a higher lymphatic vessel density that reduced the restriction of cancer cell invasion (7), and enhanced lymph node metastasis that accelerated the metastasis of cancer cells (8). Mutations of the oncogene K-ras and the tumor-suppressor gene TP53 have a strong link with lung adenocarcinoma (9).
Other fusion genes have been further studied concerning their correlation with lung adenocarcinoma. Fusion of kinesin family member 5B and the RET proto-oncogene was found to occur in a subset of NSCLC (10). Fusion genes of echinoderm microtubule-associated protein-like 4 with anaplastic lymphoma receptor tyrosine kinase, and of kinesin light chain 1 with anaplastic lymphoma receptor tyrosine kinase, were also found in lung adenocarcinoma (11). To date, the pathogenesis of NSCLC and lung adenocarcinoma remains difficult to determine. To reduce the enormous morbidity and mortality of lung adenocarcinoma, it is critical to identify lung adenocarcinoma-associated genes and mechanisms. Integrated analysis of full DEG profiles and the expression of regulatory factors such as methylation, mRNA splicing, transcription factors (TFs) and microRNAs (miRNAs) is an effective method for studying disease pathogenesis. In the present study, DEGs, exons and isoforms, as well as DEG-related methylation, TFs and miRNAs, were integrated and analyzed. Materials and methods Datasets. The raw experimental data under accession no. GSE37764 (12) used in the present study are publicly available in the Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo). These data, which include expression profiling, methylation profiling and non-coding RNA profiling of 6 never-smoker Korean female patients, were produced by high-throughput sequencing. The histologic origins were cancer tissues and adjacent normal tissues of non-small cell lung adenocarcinoma. In the present study, using normal tissues as control, the molecular variations in tumor tissues were identified. The platform of these data is GPL10999 (Illumina Genome Analyzer IIx, Homo sapiens). Methylation profiling and differentially methylated gene screening. The Trimmomatic (13) software package, a flexible, pair-aware and efficient preprocessing tool for Illumina sequence data, is often used to remove low quality reads and trim adaptor sequences. In the present study, the methylated DNA immunoprecipitation-sequencing (MeDIP-seq) data were preprocessed with Trimmomatic (13). During the preprocessing of the Illumina reads, a minimum quality cutoff on the first and last bases was imposed using LEADING:3 (trim the leading nucleotides until quality >3) and TRAILING:3 (trim the trailing nucleotides until quality >3), and a minimum sliding-window quality was applied using SLIDINGWINDOW:4:15 (trim within a window of size four when the local quality falls below a score of 15). In addition, the resulting reads shorter than 25 bases were discarded. Then the Bowtie (14) alignment algorithm (with default parameters) was used to align the Illumina reads to the human reference genome (hg19), and SAMtools (15) was applied to remove PCR duplicates. The differentially methylated regions (DMRs) were identified by MEDIPS (16) in R, with false discovery rate (FDR) <0.1. Each DMR contains multiple methylated loci, and the overlaps between methylation loci and adjacent genes were computed using the BEDTools (17) software. Briefly, differentially methylated loci between -2,000 and +1,000 bp around the transcription start site (TSS) were selected, and the adjacent genes were defined as differentially methylated genes (DMGs). Gene expression profile analysis. RNA-seq reads were cleaned to remove low quality regions and sequencing adaptors utilizing the Trimmomatic (13) software package (LEADING:3, TRAILING:3, SLIDINGWINDOW:4:15, MINLEN:36).
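A sketch of the read-trimming step with the thresholds quoted above; the jar path/version and file names are assumptions, and the real pipeline would use MINLEN 25 for the MeDIP-seq reads and 36 for the RNA-seq reads.

```python
import subprocess

def trim_single_end(in_fq, out_fq, minlen=36, jar="trimmomatic-0.39.jar"):
    """Run Trimmomatic SE with LEADING:3, TRAILING:3, SLIDINGWINDOW:4:15
    and the given MINLEN, mirroring the settings described above.
    Requires Java and a local Trimmomatic jar (path is hypothetical)."""
    cmd = ["java", "-jar", jar, "SE", "-phred33", in_fq, out_fq,
           "LEADING:3", "TRAILING:3", "SLIDINGWINDOW:4:15",
           f"MINLEN:{minlen}"]
    subprocess.run(cmd, check=True)

trim_single_end("sample_rnaseq.fastq", "sample_rnaseq.trimmed.fastq")
```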
These massively parallel short reads were subsequently mapped to a reference genome with TopHat (18) (allowing no more than 5 mismatched bases). Since multi-exon genes can encode different transcripts and multiple transcript variants encode different isoforms, differentially expressed exons were analyzed by DEXSeq (19) in R, and differential expression analysis of genes and transcript isoforms was performed with the Cufflinks (20) algorithm. The parameters of DEXSeq and Cufflinks were default values. The thresholds were q-value <0.05 and fold-change (FC) >2. Comparing the cancer tissues and control, genes and exons with average expression levels >10 FPKM (fragments per kilobase of transcript per million mapped reads) were defined as differentially expressed. Functional analysis of a variety of differential genes. Gene Ontology (GO) functional enrichment and annotation of differential genes, including differentially methylated genes and DEGs, were computed using the Database for Annotation, Visualization and Integrated Discovery (DAVID) (24). The annotation of miRNA-target DEGs was performed with the TarBase 6.0 database (capturing the exponential growth of miRNA targets with experimental support) (25). Then DIANA miRPath v.2.0 (26) was used to determine molecular pathways potentially altered by differentially expressed miRNAs, based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The ChIP-X Enrichment Analysis (ChEA) and the ENCODE ChIP-seq (28) were utilized to search for enriched TFs located upstream of the DEGs. TF-target genes were predicted and combined with differentially expressed miRNAs and DEGs using mirConnX (29) with Pearson's correlation coefficient >0.96, and the integrated network of TFs, miRNAs and TF-target DEGs was then constructed and analyzed. Results Differentially methylated regions and genes. After comparison of the MeDIP-seq data between cancer tissues and paracarcinoma tissues of the 6 non-small cell lung carcinoma patients, DMRs and DMGs were obtained. Most of the DMGs (>90%) were detected in one patient only (Fig. 1A). Only 82 genes were found in 2 or more patients, and ~1/3 of these DMGs (34) were located in the mitochondrial genome. The numbers of DMGs were similar across 5 patients, the exception being patient P3 (Fig. 1B), in whom the hypermethylated genes were significantly more numerous than in the others. Functional GO analysis showed that the DMGs were mostly associated with metabolic pathways (Fig. 2A). The most commonly enriched GO terms were cell morphogenesis, mitochondrial ATP synthesis coupled electron transport and ATP synthesis coupled electron transport. Gene expression profile analysis. DEG screening found that a total of 1,498 genes were differentially expressed between the cancer tissues and adjacent normal tissues, and 1,207 isoforms of 1,103 genes were differentially expressed. Additionally, 1,286 exons of 916 genes were also differentially expressed. Functional GO analysis showed that most of these differential genes were related to cell migration and apoptosis (Fig. 2B-D). The most commonly enriched terms of the genes were response to wounding, vasculature development and blood vessel development. The most commonly enriched terms of the exons were regulation of Rho protein signal transduction, regulation of small GTPase-mediated signal transduction and regulation of Ras protein signal transduction. The most commonly enriched terms of the transcripts were cell adhesion, biological adhesion and vasculature development.
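A pandas sketch of the DEG screening thresholds described above (q-value <0.05, FC >2, average expression >10 FPKM); the column names and example numbers are assumptions for illustration, not the study's data.

```python
import pandas as pd

def screen_degs(df):
    """Keep genes with q < 0.05, fold-change > 2 and expression above
    10 FPKM in at least one group. Column names are hypothetical."""
    expressed = (df["fpkm_tumor"] > 10) | (df["fpkm_normal"] > 10)
    hi = df[["fpkm_tumor", "fpkm_normal"]].max(axis=1)
    lo = df[["fpkm_tumor", "fpkm_normal"]].min(axis=1).clip(lower=1e-9)
    return df[(df["qvalue"] < 0.05) & (hi / lo > 2) & expressed]

table = pd.DataFrame({"gene": ["ERG", "STARD8", "THBS2"],
                      "fpkm_tumor": [55.0, 4.0, 120.0],
                      "fpkm_normal": [12.0, 30.0, 18.0],
                      "qvalue": [0.01, 0.02, 0.004]})
print(screen_degs(table))  # toy numbers; all three pass the thresholds
```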
A total of 541 genes among these differential genes possessed only 1 differentially expressed exon. As shown in Figs. 3 and 4, the expression levels of different exons corresponding to the same gene varied by dozens of fold. Several exons with a lower expression level appeared almost identical between the cases and controls, while others displayed a significant difference between the two groups. There were 94 common genes differentially expressed at the levels of genes, isoforms and exons (Fig. 3D). Nine differentially expressed miRNA-target genes with methylation were differentially expressed at the level of isoforms (Fig. 3B) and 14 at the level of exons (Fig. 3C). Integrated analysis of the differential genes. Thirteen differentially expressed miRNA-target genes were also differentially methylated genes and differentially expressed exon-related genes, and they were regulated by differentially expressed miRNAs (Table I, Fig. 3A). Yet, differential methylation of these 13 genes was observed in only one patient. In addition, ribosomal protein S6 kinase, 90 kDa, polypeptide 2 (RPS6KA2), DOT1-like histone H3K79 methyltransferase (DOT1L) and thrombospondin 2 (THBS2), included in these 13 genes, were detected with both hypomethylation and hypermethylation. The overlapping genes among the differentially methylated genes, DEGs and isoforms, differentially expressed exon-related genes and differentially expressed miRNA-targeted genes were v-ets avian erythroblastosis virus E26 oncogene homolog (ERG), StAR-related lipid transfer domain containing 8 (STARD8) and THBS2. Isoforms and expression levels of these 3 genes are shown in Fig. 4. The possible regulatory networks of methylation, miRNA expression and gene expression are shown in Fig. 5. There were 6 overexpressed and 5 downregulated miRNAs, 5 upregulated and 8 downregulated genes, 2 genes with hypomethylation, 8 genes with hypermethylation and 3 genes with contradictory methylation states. Among the 11 miRNAs, miR-9 (degree = 8) and miR-182 (degree = 5) had more degrees than the others. Genes including DOT1L, apoptosis-associated tyrosine kinase (AATK), syndecan 1 (SDC1) and THBS2 had more degrees. Transcription analysis of DEGs. ChEA2 analysis results indicated that the screened DEGs were modified and regulated by histone modifications in multiple cancer cell lines, including trimethylation of lysine 27 on histone H3 (H3K27me3) and di-acetylation of lysine 12 or 20 on histone H2B (H2BK12/20AC), which pertained to the ENCODE database (Fig. 6A). The upstream TF binding patterns were not as clustered as the histone modifications; they were enriched in different ChIP-seq clusters of TFs in different cell lines (Fig. 6B), such as GATA2 and CJUN in human umbilical vein endothelial cells (HUVECs), glucocorticoid receptors (GRs) and estrogen receptor (ER)α in endometrial cells (ECC1), while P300, signal transducer and activator of transcription 1 (STAT1) and JUND were enriched in HeLaS3 cells. Discussion NSCLC accounts for ~85% of all lung cancer cases (30) and remains the leading cause of cancer-related death worldwide (31). (Figure 7: transcription factor-microRNA regulatory network of the differentially expressed genes; differentially expressed genes, transcription factors and differentially expressed miRNAs are shown in yellow, green and blue, respectively.) Lung adenocarcinoma, the major subtype of NSCLC, responsible for more than 500,000 mortalities per year worldwide (32), is associated with a poor prognosis.
In the present study, differentially methylated regions, differentially expressed miRNAs and the transcriptomes of different tissues from 6 non-small cell lung adenocarcinoma patients were analyzed. Several DEGs, miRNAs and TFs were screened which are expected to be associated with metabolism, cell apoptosis or various diseases; thus, they may be important in the progression of lung adenocarcinoma. Kim et al (12) identified various novel genetic aberrations, gene network modules and miRNA-target interactions within the same dataset, yet their analysis is distinct from ours. In addition, the pathogenesis of lung adenocarcinoma is far from clear. With the different bioinformatics tools used, the results of the same analysis were slightly different from those of Kim et al (12). The new information obtained from the present study may help to illuminate the molecular mechanisms of this disease. The methylation analysis results of the 6 patients were relatively diverse. Since a relatively large number of methylation sites are concentrated in the mitochondrial genome, it is difficult to research the connections between methylation and gene expression. Additionally, RPS6KA2, DOT1L and THBS2 were detected with both hypomethylation and hypermethylation. This may be related to the large individual differences in the methylated regions of the patients. However, previous research has confirmed that DNA methylation is critical in lung cancer (33). Combined with the subsequent analysis of differential expression of genes, isoforms and exons, and the screening of miRNA-related genes, three overlapping genes were obtained. ERG, a member of the ETS oncogene family (34), is intimately involved in the development of multiple cancers, including prostate cancer (35). The TMPRSS2-ERG gene fusion is now a specific biomarker of prostate cancer (36). There is little research on ERG and lung adenocarcinoma. Since lung adenocarcinoma, like prostate cancer, develops from glandular tissue, ERG may also be related to lung adenocarcinoma. STARD8 was found to be downregulated and highly methylated in the present study. Durkin et al (37) suggested that STARD8 is a tumor-suppressor gene encoding DLC-3, which suppresses tumor cell growth. It has a higher level of methylation in colorectal cancer than in other types of cancer (38). THBS2, an upregulated gene, encodes a protein belonging to the thrombospondin family. This protein has been shown to function as a potent inhibitor of tumor growth and angiogenesis, and it may be involved in cell adhesion and migration (39). THBS2 also shows CpG island methylation in malignant ovarian tumors (40). Therefore, STARD8 and THBS2 may also be involved in lung adenocarcinoma. Gene expression is under the elaborate control of interrelated factors including TFs and histone modification. In the present study, a comparative analysis of histone modifications in tumor and normal tissues was conducted. This revealed that DEGs were centrally regulated by H3K27me3 and H2BK12/20AC in several cancer cell lines. H3K27me3 is regarded as related to gene silencing (41). The H3K27me3 marker is associated with the promoters of all hypermethylated genes associated with tumor suppressors in cancer cells (42). Given the prevalent regulation executed by H3K27me3 on the DEGs screened in the present study, abnormal modification of H3K27me3 may play an important role in lung adenocarcinoma.
Studies have shown that high expression of histone H3K27me3 is related to a good prognosis in patients with NSCLC; namely, the higher the expression of histone H3K27me3, the better the prognosis (43). Functional analysis of DEGs and isoforms revealed that they were enriched in processes of hormonal response. Enrichment analysis of ChIP-seq showed that DEGs were enriched in different ChIP-seq clusters of TFs, including GR and ERα. This indicates that regulation by ER and GR may be associated with lung adenocarcinoma. Studies have shown that ERα and ERβ, especially ERβ, are expressed in NSCLC and induce tumor cell proliferation (44). It was found that midkine plays a pivotal role in epithelial-mesenchymal transition in lung adenocarcinoma (45), and enhanced ERβ-mediated estradiol signaling dysregulates midkine expression (46). In previous research, GR, a member of the nuclear hormone receptor family, mediated cancer cell apoptosis and thereby slowed tumor growth (47). GR is downregulated by increased promoter methylation, which is similar to mechanisms associated with common tumor-suppressor genes (48). The constructed TF and miRNA regulatory networks showed that hub nodes including miR-126-3p, miR-30c-2-3p, HOXA5, MEIS1 and TBX5 were markedly different in two separate TF and miRNA enrichment analyses, and their levels were significantly decreased in cancer tissues. As crucial upstream genes, significant changes in their expression levels may affect a plurality of downstream target genes. Expression of most HOX family members is significantly altered in NSCLC cells (49). Contradictory results were found by Abe et al (49), who detected the downregulation of HOXA5 in NSCLC. Whether HOXA5 regulates various lung cancer-related genes, or what changes it undergoes in lung adenocarcinoma, remains to be elucidated. The relationships of the two other TFs, MEIS1 and TBX5, with lung cancer are unclear. It is known that MEIS1 is one of the co-factors of the HOX family (especially for HOXA7 and HOXA9) (50); together they are involved in human leukemogenesis (51). TBX5 regulates cell proliferation during cardiogenesis (52), and it is related to cell migration as well as cell proliferation in cancers (53). miR-126 has a clearer relationship with NSCLC and can inhibit the proliferation and invasion of NSCLC cells (54). Downregulation of miR-30c promotes cell migration and invasion of NSCLC cells (55). Although altered expression of these two miRNAs has long been known, the exact regulatory mechanisms remain to be studied. The present integrated analysis found that the expression levels of possible targets of these two miRNAs may also undergo significant changes, which indicates one direction for further study. In summary, differentially methylated regions, differentially expressed miRNAs and the transcriptomes of normal and cancer tissues were analyzed. Three possible lung adenocarcinoma-related DEGs, including ERG, STARD8 and THBS2, were identified. Moreover, DEG-related histone modifications and TFs were screened and underwent integrated analysis. Lung adenocarcinoma-related DEGs may be under comparable regulation by histones. Moreover, several TFs and miRNAs may play critical roles in the tumorigenesis of lung adenocarcinoma. These results provide a foundation for further lung adenocarcinoma research and must be confirmed through additional experiments.
Positioning and determining pupils being open and closed and its role in accident reduction Despite widespread advances in the car manufacturing industries around the world, the death toll from car accidents is worrying. This concern is heightened by reports on a road accident website that driver fatigue and drowsiness are the main cause of 25% of road accidents in general, and of up to 60% of road accidents resulting in death or injury. Therefore, there has been a great deal of research on automatic, machine-based detection of drowsiness, some of which has reached the production stage. The present study proposes a method for positioning the pupil of the eye and determining whether it is open or closed, in real time and in unconstrained environments. In the first step, the face is detected using the color feature in the YCbCr color space, with the help of a Gaussian function and the Euclidean distance; the eye area is then positioned using the Viola-Jones algorithm. Finally, in order to locate the pupil and detect its openness, we use two parallel Kalman filters; if the pupils are closed, the Harris corner detection algorithm is applied to the eye region to identify driver drowsiness, and this system is used to minimize the deaths caused by fatigue (tiredness) while driving. Detection is performed precisely even in the case of face rotation, different lighting conditions, closed eyes or one missing eye, and the presence of glasses, beards, makeup, hijab, or obstruction of the eyes. Low computational complexity, maximum stability, real-time operation and unrestricted environments are further advantages of this method, largely because the filters operate in parallel and do not require high-resolution images. INTRODUCTION One of the most important contributors to traffic accidents, especially on intercity roads, is fatigue, drowsiness and lack of focus [1-3]. Fatigue and drowsiness reduce the driver's perception and ability to control the car. Research shows that a driver is usually tired after one hour of driving, but in the early afternoon after lunch, and also at midnight, drowsiness can set in after less than an hour. Of course, in addition to natural causes, the consumption of alcohol, drugs and medications that decrease alertness can also contribute to driver drowsiness [4]. Most of the accidents caused by fatigue or lack of concentration occur on intercity roads and involve heavy vehicles. Most of these accidents occur between 2:00 and 6:00 or between 15:00 and 16:00. Drowsiness is one of the factors affecting the severity of road accidents, causing a large number of casualties every year. Recognizing drowsiness in time is very important; a detection that comes too late is of no use [5,6]. Drowsiness detection methods can be classified into two types: surveillance-based and vehicle-based approaches. In the surveillance mode, the main means are sensors and cameras that record the physical signs of the driver. These signals are then sent to a computer and, after processing, the driver's consciousness is estimated. Physiological-index sensors, based on natural human physiology, are divided into two groups: those measuring changes in physiological signals such as brainwaves, heart palpitations and eye blinks, and those measuring physical changes such as head tilting and leaning, yawning, and the eyes being open or closed [7-9]. The camera of these systems is usually positioned at the top of the steering wheel to get a good picture of the driver's face.
The figure below shows an overview of a driver's face monitoring system and how the camera is positioned inside the vehicle. Because its mathematical and engineering analysis is simpler and more accurate, this method is more widely applicable and versatile than vehicle-based methods [10]. Physiological sensors, by contrast, are connected directly to the driver's body and cause driver annoyance; in addition, over long periods, the perspiration of the driver's body reduces the accuracy of the sensors. The eye is an important and sensitive part of the human body and reveals many states such as fatigue and drowsiness, sorrow, gloom, drunkenness, and so on. It can therefore be used in cars to detect the driver's drowsiness. RESEARCH METHODS The eye is the most important part of the face where symptoms of fatigue and lack of focus appear. For this reason, many driver face monitoring systems detect the driver's fatigue and lack of focus based only on features extracted from the eye. Steps of an eye tracking system: 1) imaging, 2) face detection, 3) eye detection, 4) eye tracking. Imaging The imaging section includes lighting, a camera with an optical filter, an image-capture card, and a controller if necessary. Images are captured with a PHILIPS camera with video input, omnidirectionally, at a resolution of 160 × 120, and all clips are recorded at 15 fps (15 frames per second). The clips include situations such as covered eyes, different head positions, and repeated yawning of the driver. Software Basics The software recognizes these situations by applying image processing techniques to the video frames. The hardware includes an electronic board that issues a warning as soon as the visual data indicate that the driver is unconscious, that the face cannot be recognized because the head has dropped, or that yawning is repeated. Face Detection Step The most common way to detect faces is to use color attributes [11][12][13], because color is invariant to face rotation and, on the other hand, facial color is processed faster than other facial features. The RGB color space is a standard and commonly used space for rendering color images, but because factors such as brightness and saturation enter into the color code of a pixel, one should look for a space that does not depend on these components for facial recognition. As we know, the intensity of light on the face varies from person to person and from environment to environment. The YCbCr color space is obtained from RGB using Equation (1):

Y = 0.299 R + 0.587 G + 0.114 B
Cb = 128 - 0.169 R - 0.331 G + 0.500 B
Cr = 128 + 0.500 R - 0.419 G - 0.081 B (1)

Figure 1. The same image of a person shown in the RGB, Cr, Cb, and YCbCr representations. Obtaining a facial skin color model: a collection of skin samples with different colors and textures is gathered and converted into the YCbCr color space. Figure 2. Examples of different facial skin colors. We obtain the skin color model from the mean and covariance of the (Cb, Cr) values of these samples, according to Equation (2):

m = (1/N) Σ_i x_i,  C = (1/N) Σ_i (x_i - m)(x_i - m)^T,  with x = (Cb, Cr)^T (2)

We then use the Gaussian function and the Euclidean distance to reveal pixels belonging to the face model, obtaining the likelihood of Equation (3):

P(Cb, Cr) = exp(-0.5 (x - m)^T C^{-1} (x - m)) (3)

The input image used here has a complex background and no restrictions: the head can rotate in any direction and at any angle, the eyes can be closed, and the person may wear glasses, a beard, hijab, or makeup. The membership of each pixel of the image in the obtained skin model is then determined.
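A minimal sketch of this skin-model step, assuming OpenCV and NumPy: a Gaussian (Cb, Cr) model is fitted to sample skin patches (Equation (2)) and every pixel of a frame is scored against it (Equation (3)). The patch collection, the 0.5 threshold, and the helper names are illustrative assumptions, not details taken from the paper.

import cv2
import numpy as np

def fit_skin_model(skin_patches):
    """Fit mean and inverse covariance of (Cb, Cr) over skin patches (Eq. 2)."""
    samples = []
    for patch in skin_patches:
        ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
        # OpenCV orders channels as Y, Cr, Cb; collect (Cb, Cr) pairs.
        samples.append(ycrcb[..., [2, 1]].reshape(-1, 2).astype(np.float64))
    samples = np.vstack(samples)
    mean = samples.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))
    return mean, inv_cov

def skin_likelihood(image_bgr, mean, inv_cov):
    """Per-pixel Gaussian skin likelihood (Eq. 3)."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    x = ycrcb[..., [2, 1]].astype(np.float64) - mean      # (H, W, 2) deviations
    d = np.einsum('hwi,ij,hwj->hw', x, inv_cov, x)        # squared model distance
    return np.exp(-0.5 * d)

# Pixels whose likelihood exceeds an assumed threshold of 0.5 are labeled skin:
# mask = (skin_likelihood(frame, mean, inv_cov) > 0.5).astype(np.uint8) * 255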
3) Eye detection method Eye detection methods [14,15] fall into four groups: 1) shape- and structure-based methods, 2) feature-based methods, 3) appearance-based methods, and 4) combined methods. Shape-based methods rely on a model of the particular shape and form of the eye; this model can be a simple ellipse or have a complex structure [16]. Feature-based methods use a number of eye features such as the pupil and its light reflection, the eyelashes, and the sclera [17]. Appearance-based methods, also known as pattern-matching or holistic methods, detect and trace the eyes directly from their appearance; these methods are object-independent and can also model any other object next to the eye [18]. Combined techniques aim to combine the different advantages of the eye models in one system so as to overcome the relative limitations of each [19]. Because the eye color differs from the other parts of the face, we can use horizontal and vertical projections of the facial image based on the color feature. In the horizontal projection we obtain the sum of the pixel intensities in each row, and in the vertical projection the sum of the pixel intensities in each column [20]:

H(y) = Σ_x I(x, y) (4)
V(x) = Σ_y I(x, y) (5)

We use the Viola-Jones method for detection. This algorithm was proposed by Paul Viola and Michael Jones; it is generally used to detect objects, but its use for finding the location of the eye is very widespread. The training process of this algorithm is very time consuming, but its detection process is very fast, and we use a cascade of classifiers to speed up detection. Extracting the eye range This range starts from the top of the eyebrows and reaches the bottom of the eyelid; it includes the two eyebrows and their distinctive features such as the eyelids and the corners of the eyes, and is found using the horizontal projection of the face obtained from Equation (4). The horizontal projection has two vertices or peaks: the first peak is the forehead (region 1), and the second peak spans the interval from the nasal bridge to just under the eyes (bottom of the eye area, region 5), because these areas are brighter than the eyes themselves. We use these peaks to extract the eye area; since the second peak is located above the nasal septum, it causes no problem even if the person has a beard. The vertical and horizontal projections can also be used to divide the eye area into left and right regions. In the projection profile there are also two valleys: the first valley (number 2) corresponds to the eyebrows and the second valley (number 4) to the pupil. This holds for both the left and right regions and is due to the common property that the eyebrows and the pupil are darker than their surroundings, which produces the valleys. Finding the pupil Pupil positioning is one of the most common applications of image processing, with many uses in medicine, behavioral science, biometrics, and human-computer interaction. Given a pixel belonging to the iris or pupil, the pupil center can be calculated using the Harris algorithm and the properties of the eye corners. We use the intersection method to find the center of the pupil; in this step, only one row and one column are processed instead of the entire eye area. Taking the darkest row and the darkest column of the eye region, the pupil center is obtained from Equation (7):

x_c = argmin_x V(x),  y_c = argmin_y H(y) (7)
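The chain just described, Viola-Jones localization of the eye region followed by row/column projections and the intersection method for the pupil center, could look roughly like this in OpenCV. The stock haarcascade_eye.xml classifier and the detection parameters are assumptions; the paper does not specify its trained classifier or settings.

import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

def locate_pupils(gray_face):
    """Return one (x, y) pupil estimate per detected eye region."""
    eyes = eye_cascade.detectMultiScale(gray_face, scaleFactor=1.1, minNeighbors=5)
    centers = []
    for (x, y, w, h) in eyes:
        roi = gray_face[y:y + h, x:x + w].astype(np.float64)
        h_proj = roi.sum(axis=1)            # Eq. (4): sum of each row
        v_proj = roi.sum(axis=0)            # Eq. (5): sum of each column
        yc = int(np.argmin(h_proj))         # darkest row -> pupil row
        xc = int(np.argmin(v_proj))         # darkest column -> pupil column
        centers.append((x + xc, y + yc))    # Eq. (7): intersection point
    return centers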
4) Eye tracking There are three categories of methods available for eye tracking [13]: knowledge-based methods, which rest on rules derived from studies of the components of the face; such rules must capture the characteristics of the face and the relationships between them (Y. Tian, 2000); training-based techniques, in three families: neural networks, AdaBoost classifiers, and support vector machines [14]; and estimation methods, which position the eyes in consecutive frames from head movements, eye movements, and independent eyelid movements. If the first stage is estimated correctly, the accuracy and efficiency of the subsequent steps increase [20]. The Kalman filter is the most suitable method for this step because of its low computational complexity and high speed. The Kalman filter is a recursive algorithm that estimates the location of the eye in subsequent frames: it tells us where to look for the pupil, and how large a region of the next frame should be searched around the estimated location to find the pupil with high confidence. This is accomplished through the state vector of the system (Equation (9)), which contains the position components and the velocity components in the x and y directions:

x_{t+1} = A x_t + w_t (8)

where w_t is Gaussian white noise with zero mean, x_{t+1} is the state vector at time t+1, and x_t is the state vector at time t. With Δt the time between frames,

x_t = (x, y, v_x, v_y)^T,  A = [[1, 0, Δt, 0], [0, 1, 0, Δt], [0, 0, 1, 0], [0, 0, 0, 1]] (9)

Since the method used in this article combines two Kalman filters in parallel with detection of the corners of each eye separately, the Kalman filter alone is not capable of accurate tracking if both eyes are closed, and its estimate degenerates. To fix this problem, the eye corners must be used in addition to the Kalman filter; for this purpose, the left and right corners of each eye must be recognized. We use the Harris algorithm (Equation (10)) to find the corners. In this algorithm, at each point of the image we shift a window; the average change of image brightness within the shifted window relative to the original window is calculated, and the minimum of this quantity over shift directions is taken as the corner response. Depending on the movement and displacement of the window, there are three cases: 1) If the windowed area is homogeneous in brightness, all displacements lead to a small change, so the corner response is small. 2) If the windowed area lies on an edge, the displacement perpendicular to the edge produces the largest change while the displacement along the edge produces the least, so the response is again small in this case. 3) If the windowed area lies on a corner, displacement in every direction causes a large change, so the corner response function attains its maximum at corners of the image.

E(x, y) = Σ_{u,v} w(u, v) |I(x + u, y + v) - I(u, v)|² (10)

where w is the window applied to the image, taken here as a circular window of unit weight, I is the image intensity, and E is the change produced by the shift (x, y).
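A hedged sketch of this tracking stage: one constant-velocity Kalman filter per eye (Equations (8)-(9)), with Harris corner detection (Equation (10)) as the recovery path when the predicted position jumps past a threshold. The noise covariances, the 20-pixel jump threshold, and the 15 fps frame interval are illustrative assumptions.

import cv2
import numpy as np

def make_eye_filter(dt=1.0 / 15):              # 15 fps -> time between frames
    kf = cv2.KalmanFilter(4, 2)                # state (x, y, vx, vy), measurement (x, y)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)   # A in Eq. (8)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # w_t covariance
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_or_recover(kf, measured_xy, gray_eye_region, jump_thresh=20.0):
    """Kalman predict/correct; fall back to the strongest Harris corner."""
    measured = np.float32(measured_xy).ravel()
    predicted = kf.predict()[:2].ravel()
    if np.linalg.norm(predicted - measured) > jump_thresh:
        response = cv2.cornerHarris(np.float32(gray_eye_region), 2, 3, 0.04)
        return np.unravel_index(np.argmax(response), response.shape)  # (row, col)
    kf.correct(measured.reshape(2, 1))
    return tuple(predicted)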
Proposed Algorithm Results When the Kalman filter estimates the position of the eye, the estimated position is compared with the previous position; if the coordinates differ by more than an empirically obtained threshold, the tracking is judged wrong and the positions of the eye corners need to be found. The hardware used in this method is increased compared to previous methods, because one filter is added per eye, but tracking under face rotation, different lighting conditions, closed eyes or a missing eye, the presence of glasses, beards, makeup, hijab, or eye obstruction is more accurate with this method. Other advantages of this method are its low computational complexity, maximum stability, and real-time operation in unrestricted environments, all made possible because the filters operate in parallel and do not even require high-resolution images.
3,316.6
2020-09-29T00:00:00.000
[ "Computer Science" ]
ANALYSIS OF INTEREST IN SAVING IN ISLAMIC BANKING WITH THE SOCIAL ENVIRONMENT AS A MEDIATOR IN THE RELATIONSHIP OF RELIGIOSITY AND STUDENT KNOWLEDGE Saving is a good habit when it comes to financial arrangements, including for college students. The rapid development of the sharia economy has produced a variety of Islamic banking products for this saving activity. This is of course beneficial for students, who should have started making financial arrangements. Although interest in saving is still lacking, the use of Islamic bank products is quite profitable for their future. The purpose of the study was to analyze the factors that influence students' interest in saving in Islamic banking. Linear regression with hypothesis testing, run in RStudio, is the analysis technique used to support the conclusions. The results show that religiosity and knowledge simultaneously have a significant direct effect on saving interest, although religiosity alone cannot be a partial predictor of saving interest. The path analysis shows that the social environment is able to mediate the relationship between religiosity and saving interest, but not the relationship between knowledge and students' interest in saving in Islamic banking. INTRODUCTION Saving is a positive activity for all walks of life, including students, who still have to prepare for their future. It is driven by several factors, one of which is interest or belief. Confidence is a considerable motivation in encouraging individuals to consume a product (Andespa, 2017a). Interest is a liking or inclination of the heart toward some object of attention or desire (Pusat Bahasa Kemdikbud, 2016); it is a mental disposition consisting of a mixture of feelings, expectations, prejudices, and other tendencies that lead the individual to a certain choice (Musruroh, 2015). Interest develops along with the basic needs of the community, including the need to save. As public interest in saving grows, banking products become more varied, both conventional and sharia. Banking covers everything related to the bank: institutions, business activities, and the ways of carrying out those business activities. The main function of Indonesian banking is to collect and distribute public funds, with the aim of supporting the implementation of national development, equitable development and its results, economic growth and national stability, toward improving the standard of living of many people (Fajar Mujaddid, 2019; Otoritas Jasa Keuangan, 2016). Indonesia, a developing country, has a very rapidly developing banking sector. This can be seen both in bank office facilities and in the full range of banking products offered to the public, conventional and sharia. Islamic banks are financial institutions that carry out intermediary functions, collecting public funds and distributing financing to the public in accordance with sharia principles (Fajar Mujaddid, 2019).
This makes the various Islamic banking products run on a system free of usury (riba), which is one of the things driving the increased interest in Islamic banking products in the community, including among students. The concept of Islamic sharia-based finance has grown into a trend in the world economy, including in Indonesia. Based on Islamic banking data, the growth of conventional banks is smaller than that of Islamic banks: Islamic banks have grown by around 40% per year over the last ten years, while conventional banks have grown by about 20% (Direktorat Perbankan Syariah, 2011). As time goes on, it is possible that the growth of Islamic banking will exceed that of the conventional system. Based on the Indonesian Sharia Finance Development Roadmap (Otoritas Jasa Keuangan, 2018), the development of Islamic finance has produced various achievements, from the increasing number of products and services to the development of infrastructure that supports Islamic finance. Even in the global market, Indonesia is among the top ten countries with the largest Islamic financial index in the world. Nevertheless, the growth of Islamic finance has not been able to keep up with the growth of conventional finance. This can be seen from the market share of Islamic finance, which as a whole is still below 5%. However, when viewed by type of sharia product, until the end of December 2016 there were several sharia products whose market share was above 5%, including Islamic banking assets at 5.33% of all banking assets, state sukuk at 14.82% of total outstanding government securities, sharia financing institutions at 7.24% of total financing, special Islamic financial service institutions at 9.93%, and Islamic microfinance institutions at 22.26%. Meanwhile, sharia products whose market share is still below 5% include corporate sukuk in circulation at 3.99% of the total value of sukuk and corporate bonds, the net asset value of Islamic mutual funds at 4.40% of the total net asset value of mutual funds, and sharia insurance at 3.44%. In addition to the financial products above, shares of issuers and public companies that meet the criteria for sharia stocks reach 55.13% of the stock market capitalization listed on the Indonesia Stock Exchange. These figures show that Indonesia's Islamic finance still needs to be developed so that it can keep pace with the growth of conventional finance and grow the financial industry as a whole. Based on the description above, this study aims to analyze the determinants of students' interest in saving in Islamic banking. The Widya Gama Lumajang Institute of Technology and Business is a university located in Lumajang Regency. Along with the development of the sharia concept in economics, this private university made several curriculum adjustments, adding several courses in line with the concept of sharia economics. The research assumes that the basis for the use of sharia-based products is religion, but for students, knowledge should also be a basis for decision making.
Therefore, this study was conducted to determine the influence of religiosity and knowledge on the interest in saving in Islamic banking, with the social environment as the mediator. These variables are considered the most basic factors in determining students' interest in saving in Islamic banking. METHODS This research uses a quantitative concept with a survey approach, in which the research makes no changes to and applies no special treatment of the research variables. The purpose of this study is to explain the causal relationship between one variable and another through hypothesis testing (Setyobakti, 2018). The research objects are active students of the Accounting Study Program at the Widya Gama Lumajang Institute of Technology and Business in the 2019/2020 academic year. Sampling was carried out using the purposive sampling method with the criteria (a) active students and (b) students who had already completed sharia economics courses, yielding 39 respondents as the sample for this study. The data were collected using a research instrument in the form of a questionnaire with a Likert-type measurement scale. After the data were collected, the questionnaire was tested for validity and reliability. Validity was tested in RStudio using the product-moment correlation. The reliability measure used is Cronbach's Alpha, which shows the extent of respondents' consistency in answering the instrument items (Tavakol & Dennick, 2011). Furthermore, to determine the influence of each variable in this study, hypothesis tests and path analysis were carried out in RStudio. RESULTS AND DISCUSSION The research data were collected by giving questionnaires to the research respondents. The collected data were then analyzed with the help of RStudio version 1.4.1103. Instrument Test The research instrument test consists of validity and reliability tests. Validity tests are carried out to find out to what extent an item in the questionnaire can elicit the necessary data or information; an item is considered valid if its correlation value is more than 0.5. Meanwhile, reliability tests measure the extent to which the proposed questionnaire gives consistent results, using the Cronbach's Alpha concept. The results of instrument testing are given in Table 1, which shows that all statement items on each variable have a correlation value of more than 0.5, so each item in the questionnaire is valid and can elicit the necessary data or information. The reliability test showed that all variables were very reliable, with Cronbach's Alpha > 0.7, so all the measuring scales of each variable in the questionnaire used in this study are reliable. Hypothesis Test The linear regression model in this study consists of three structural equations (with Y the saving interest, X1 religiosity, X2 knowledge, and Z the social environment):

Y = a1 X1 + b1 X2 + e1 (1)
Z = a2 X1 + b2 X2 + e2 (2)
Y = a3 X1 + b3 X2 + c3 Z + e3 (3)

These three structural equations met the classical assumptions: the normality test used the Kolmogorov-Smirnov concept with the ks.test function, the multicollinearity test was carried out with ols_vif_tol from the 'olsrr' package, and heteroskedasticity was checked with the Breusch-Pagan test from the 'lmtest' package.
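The study ran these steps in RStudio; as an equivalent sketch, the same pipeline in Python with statsmodels might look as follows. The column names for X1, X2, Z and Y and the data file are assumed labels, not names from the study's dataset.

import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kstest
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv('questionnaire.csv')          # hypothetical data file

# Structural equations (1)-(3) of the path model.
eq1 = smf.ols('saving_interest ~ religiosity + knowledge', df).fit()
eq2 = smf.ols('social_env ~ religiosity + knowledge', df).fit()
eq3 = smf.ols('saving_interest ~ religiosity + knowledge + social_env', df).fit()

for name, model in [('eq1', eq1), ('eq2', eq2), ('eq3', eq3)]:
    # Normality of residuals (Kolmogorov-Smirnov, as with R's ks.test).
    ks = kstest(model.resid, 'norm', args=(model.resid.mean(), model.resid.std()))
    # Heteroskedasticity (Breusch-Pagan, as with R's lmtest::bptest).
    bp = het_breuschpagan(model.resid, model.model.exog)
    print(name, 'R2=%.4f' % model.rsquared,
          'KS p=%.3f' % ks.pvalue, 'BP p=%.3f' % bp[1])

# Multicollinearity (VIF, as with R's olsrr::ols_vif_tol); skip the intercept.
X = eq3.model.exog
print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])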
The coefficients of determination (R Square) are 0.5859 for equation (1), 0.7415 for equation (2), and 0.6027 for equation (3), which means that the independent variables in each equation of the regression structure are quite strong in affecting their dependent variables. The results of the hypothesis tests show the significance of the research regression models. For equation (1), the partial test p-values are 0.276 and 7.79e-07 for the two independent variables. From these results it is concluded that variable X1 has no effect on variable Y, while variable X2 has a significant effect. Furthermore, the F test gives a p-value of 1.28e-07, which is less than 0.05, so the model is significant simultaneously. Equation (2) shows the direct relationship between the independent variables and the mediator. In hypothesis testing, p-values of 4.69e-10 and 0.105 were obtained for the two independent variables of equation (2). From these results it is concluded that variable X2 has no effect on variable Z, while variable X1 has a significant effect. For the F test, a p-value of 2.663e-11 was obtained, less than 0.05, so the model is significant simultaneously. Equation (3) shows the relationship among all the variables in this study. In the hypothesis t test, p-values of 0.731, 3.97e-06 and 0.232 were obtained for the three independent variables of equation (3). From these results it is concluded that variables X1 and Z have no effect on variable Y, while variable X2 has a significant effect. For the F test, a p-value of 3.669e-07 was obtained, less than 0.05, so the model is significant simultaneously. Path Analysis This analysis is used to test a model of causal relationships between variables. Through this path analysis, the most appropriate and shortest path from an independent variable to a dependent variable is determined. The analysis uses hypothesis tests and the Sobel Test concept from the 'bda' package. Equation (1) shows the direct influence of religiosity and knowledge on the interest in saving. Based on equation (1), religiosity has no significant effect on the interest in saving, with a p-value of 0.276 (>5%). Meanwhile, knowledge has a p-value of 7.79e-07 (<5%), so knowledge has a significant partial effect on the interest in saving. However, the simultaneous hypothesis test showed that religiosity and knowledge together have a significant effect on saving interest, with a p-value of 1.28e-07. Equation (2) shows the influence of religiosity and knowledge on the social environment, the mediator in this study. For mediation, the independent variable should have a significant effect on the mediator; this must hold if the mediation process is to take place in accordance with the initial assumptions of this study. The results for equation (2) showed that religiosity has a significant effect on the research mediator, the social environment, with a p-value of 4.69e-10 (<5%), while knowledge has no significant effect on the social environment, with a p-value of 0.105 (>5%).
The simultaneous hypothesis test shows that religiosity and knowledge together have a significant effect on the social environment, the mediator in this study. Equation (3) shows the influence of the mediator on the dependent variable of this study. The simulation results showed that the social environment has no significant partial effect on saving interest, with a p-value of 0.232 (>5%). The same holds for religiosity, with a p-value of 0.731 (>5%), while knowledge has a significant partial effect, with a p-value of 3.97e-06 (<5%). The simultaneous hypothesis test for religiosity, knowledge and social environment showed a significant influence, with a p-value of 3.669e-07 (<5%). Based on the simulation results, there is a mediation process in the research model, in which the social environment mediates the relationship between the independent and dependent variables; this is inferred from the p-value of equation (1), which increases in equation (3) (Baron & Kenny, 1986). Furthermore, the results of the Sobel Test show that for variable X1, with Z as the mediator variable, the p-value is 0.04307284, so it can be concluded that variable Z can mediate the relationship between X1 and Y. For variable X2, with a p-value of 0.1429638, it is concluded that Z cannot mediate between X2 and Y, because the p-value is greater than 0.05.
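For reference, the Sobel statistic behind these conclusions can be computed directly. A minimal sketch with placeholder coefficients rather than the study's estimates (the paper used the 'bda' package in R): a and se_a come from equation (2) (the X-to-Z path), b and se_b from equation (3) (the Z-to-Y path).

from math import sqrt
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Two-sided Sobel z-test for the indirect effect a*b."""
    z = (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 2 * (1 - norm.cdf(abs(z)))

z, p = sobel_test(a=0.62, se_a=0.08, b=0.21, se_b=0.10)   # placeholder values
print('z = %.3f, p = %.4f' % (z, p))   # mediation supported when p < 0.05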
Discussion The simulation results in the previous subsection were used as the basis for discussing the hypotheses of this study. The first hypothesis states that religiosity and knowledge have a significant effect on students' interest in saving in Islamic banking. This hypothesis refers to equation (1), which shows the direct influence of religiosity and knowledge on students' saving interest. Based on the hypothesis test, it was concluded that religiosity has no partial effect on students' interest in saving. This differs from knowledge, which, with a p-value of 7.79e-07 (<5%), has a significant positive effect on students' interest in saving. Although the partial tests give different conclusions for the two independent variables, the simultaneous test states that religiosity and knowledge together have a significant influence on the interest in saving. This explains that the religiosity of the research subjects is not by itself enough to increase their interest in saving in Islamic banking; it has a significant effect only when accompanied by the knowledge acquired during their studies. Together, religiosity and knowledge positively encourage the research subjects to increase their interest in saving in Islamic banking. This is in line with previous research (Wahyuning, 2021), in which religiosity was not capable of being a predictor of students' saving interest. The second hypothesis states that the social environment can mediate the relationship of religiosity and knowledge to the interest in saving. This hypothesis is addressed by the path analysis, which shows that a mediation process exists between the independent and dependent variables. Based on the Sobel test, it was concluded that the social environment can be a mediator of the relationship between religiosity and students' interest in saving. A different conclusion holds for the relationship between knowledge and students' saving interest: the social environment is not able to mediate the relationship between knowledge and students' interest in saving. This is also supported by equation (2), where knowledge is not a partial predictor of the social environment, the mediator in this study. The results of this study are in line with the research of (Rahmawati, 2019) and (Lestari, 2015), in which a higher religiosity value encouraged students to seek knowledge related to Islamic banking, both about what is prohibited and what is allowed by religion; in those studies, the knowledge variable was shown to mediate the effect of religiosity on the interest in saving in Islamic banking. The same applies to the social environment variable, where (Andespa, 2017b) states that the cultural and family environment has a significant effect on the interest in saving. In general, social environments such as family, community and school will always contribute to every individual decision. Likewise, students are social creatures who will always interact with their surroundings.
4,011
2023-06-11T00:00:00.000
[ "Economics" ]
Bifidobacterium adolescentis (DSM 20083) and Lactobacillus casei (Lafti L26-DSL): Probiotics Able to Block the In Vitro Adherence of Rotavirus in MA104 Cells Rotavirus is the leading worldwide cause of gastroenteritis in children under five years of age. Even though some vaccines are available to prevent the disease, there are limited strategies for managing the diarrhea induced by rotavirus infection. For this reason, researchers are constantly searching for other approaches to control diarrhea by means of probiotics. In order to demonstrate the ability of some probiotic bacteria to interfere with in vitro rotavirus infection of MA104 cells, strains of Lactobacillus sp. and Bifidobacterium sp. were applied to MA104 cells before viral infection. As a preliminary assay, a blocking treatment was performed with viable bacteria. In this screening assay, four of the initial ten bacteria showed a slight reduction of the viral infection (measured as percentage of infection): L. casei (Lafti L26-DSL), L. fermentum (ATCC 9338), B. adolescentis (DSM 20083), and B. bifidum (ATCC 11863) were used in further experiments. Three different treatments were tested in order to evaluate protein-based metabolites obtained from these bacteria: (i) cell exposure to the protein-based metabolites before viral infection, (ii) exposure to protein-based metabolites after viral infection, and (iii) co-incubation of the virus and protein-based metabolites before infecting the cell culture. The best effect of the protein-based metabolites was observed in the co-incubation assay of the virus and protein-based metabolites before adding them to the cell culture: the results showed 25 and 37% infection in the presence of L. casei and B. adolescentis, respectively. These results suggest that the antiviral effect may act directly on the viral particle rather than by blocking the cellular receptors needed for viral entry. Background Worldwide, acute diarrheal disease (ADD) remains one of the most common diseases affecting people of all ages, but its frequency and severity are higher in children under the age of 5 [1]. About 600,000 children die every year as a consequence of rotavirus infection, with more than 80% of all rotavirus-related deaths occurring in low-income countries in south Asia and sub-Saharan Africa [2]. Globally, rotavirus-related deaths represent approximately 5% of all deaths in children of this age. In other regions of developing countries, mortality is not so high, but important rates of morbidity remain in spite of the worldwide availability of a polyvalent vaccine [3]. Probiotics are defined as "live microorganisms that, when administered in adequate amounts, confer a health benefit on the host" [4,5]. Some of their benefits for human health have been established in the infectious disease field: (i) interaction with and enhancement of the immune system, (ii) production of antimicrobial substances, (iii) enhancement of the mucosal barrier function, and (iv) competition with enteropathogens [6][7][8]. Some studies have demonstrated the beneficial effect of probiotics against rotavirus infection [9,10].
Most of them are clinical assays proving that the use of probiotics can lessen the severity and duration of rotavirus diarrhea [11], whereas other studies are performed in vitro, directed at understanding the molecular and biochemical pathways associated with the mechanisms probiotics employ to accomplish their antiviral activity [12][13][14]. Even though most of these studies show the effectiveness of probiotics or their metabolic products against viral multiplication (both clinical and in vitro assays), and some strategies associated with the antiviral effect have been proposed, such as a blocking effect, intracellular regulation, and immune response modulation [15][16][17], the evidence is not yet sufficient to clarify the mechanisms by which these processes occur, motivating further studies to better understand the antiviral effect mediated by probiotic bacteria. In fact, to advance the knowledge of the strategies used by probiotics against viruses, we proposed the hypothesis that probiotic bacteria are able to block in vitro rotavirus infection by altering the adhesion of the virus onto the cells. The aim of this study was to determine whether the antiviral effect of probiotics arises from competition for receptors on cultured cells, and whether this effect is caused by the whole, viable bacteria and/or by their protein-based metabolites. Cell Lines and Virus The MA-104 cell line (embryonic Rhesus monkey kidney cells) was grown in advanced DMEM supplemented with 4% fetal calf serum (Gibco, Invitrogen), L-glutamine (2 mmol/L), antibiotic, and antimycotic, at 37°C in a 5% CO2 atmosphere in tissue culture flasks until confluency. The cell culture medium was changed regularly. For the assays, 150,000 cells per well were placed in 24-well plates and incubated under the same conditions; after 24 h and 90% confluence, each well contained about 500,000 cells. The rotavirus RRV strain (Rhesus monkey) was kindly donated by Dr. Carlos Guerrero of the Universidad Nacional de Colombia. The infection was performed at a MOI of 5, and the virus was previously activated with trypsin (10 μg/mL). The MOI of 5 was used in order to saturate the MA104 cell culture, simulating what could occur during a natural viral infection of mature enterocytes, where the amount of virus is greater than the number of cells. Bacteria were maintained under anaerobic conditions by the streaking method on MRS solid culture media with AnaeroGen sachets (OXOID) in anaerobic jars. For experiments, bacteria were grown in MRS broth under anaerobic conditions as previously described by Hungate in 1969 [19]. Briefly, MRS broth was heated until boiling, and gas exchange was then performed with a constant flux of nitrogen (N2) gas; a final gas exchange with an 80:20 mixture of nitrogen (N2) and carbon dioxide (CO2) removed the remaining oxygen from the media. Recovery of Bacterial Protein-Based Metabolites Bacterial culture supernatants were obtained by growing bacterial cultures in 250 mL MRS broth under anaerobic conditions at 37°C until the exponential growth phase of each bacterium was reached (collection times differ between 8 and 10 h of bacterial growth). Bacteria were removed by centrifugation at 3000×g for 10 min. Supernatants were recovered and filtered through a 0.22 μm pore. Protein-based metabolites were precipitated with 10% PEG (w/v) overnight.
After that, successive centrifugations at 16,000×g and 4°C for 30 min were performed with the aim of concentrating the protein-based metabolites present in the supernatants. Proteins were resuspended in 2.5 mL of sterile PBS and stored at −20°C until use. The proteins were quantified using the BCA kit (Thermo Scientific). For both the cytotoxicity and antiviral assays, a bacteria-free broth control was included, which was also precipitated with PEG and compared with the metabolites in the viability assays and with the positive control in the antiviral assays. Cytotoxicity Assays For the biological assays, MA-104 cells were separately seeded in 96-well plates until confluence. The cytotoxic effect of each probiotic bacterium and of the protein-based metabolites was tested by adding tenfold serial dilutions to the confluent cells. Cells exposed to bacteria or protein-based metabolites were incubated for 90 min at 37°C; after that, the cells were washed twice with PBS and incubated for 24 h at 37°C with 5% CO2. For viable bacteria, the cytotoxic effect was determined with 0.4% trypan blue visualized under a light microscope. For protein-based metabolites, cytotoxicity was tested with MTT salts (SIGMA-Aldrich, Saint Louis), determining the formation of formazan products in a Multiskan MCC/340 spectrophotometer (Thermo Fisher Scientific, Waltham) at a wavelength of 540 nm. Cytotoxicity of whole, viable bacteria was tested at concentrations between 10^6 and 10^8 CFU/mL, as reported by Botic et al. [18]. Protein-based metabolites were tested in the range between 10 and 1000 μg/mL diluted in DMEM, as well as undiluted. Antiviral Assays Inhibition of viral infection by whole, viable bacteria: ten probiotic bacteria [20] were tested against RV infection on the principle of blocking viral entry into the cells. For these experiments, cells were first incubated with viable probiotic bacteria (500 μL, 10^8 CFU/mL) for 90 min at 37°C and 5% CO2. After incubation, cell cultures were washed with DMEM without supplements to remove unattached bacteria, and the monolayers were challenged with RV infection for 10 h at 37°C and 5% CO2. The percentage of viral infection was measured by flow cytometry. These experiments were the preliminary tests to select the bacteria with the greatest activity for use in further assays. Inhibition of viral infection by bacterial protein-based metabolites (pre-treatment): cells were first incubated with 100 μg/mL of each metabolite for 90 min. After this time, the unbound protein-based metabolites were washed off the cells with DMEM without supplements, and the monolayers were challenged with RV infection for 10 h at 37°C and 5% CO2. Inhibition of viral infection in a co-incubation assay: bacterial protein-based metabolites were first co-incubated with RV (previously activated with 10 μg/mL trypsin) in DMEM for 60 min at 37°C and 5% CO2. After this time, the mixture was placed in contact with the MA104 cells for 1 h at 37°C and 5% CO2; the excess inoculum was then washed away and the assays were incubated up to 10 h post-infection. Intracellular effect of the protein-based metabolites of probiotic bacteria against viral infection (post-infection assay): MA104 cells were first infected with the virus as previously described; after removing the viral inoculum, the cells were washed with PBS and exposed to bacterial protein-based metabolites (100 μg/mL) for 1 h.
The unbound protein-based metabolites were then washed off, and the cells were incubated up to 10 h as described above. For all the antiviral assays, positive and negative controls were included. Positive controls were MA104 cells infected with RRV at a MOI of 5 without any treatment; negative controls were MA104 cells grown simultaneously in the experiments in DMEM without supplements. The antiviral assays were performed as three independent experiments, each in duplicate. Flow Cytometry: Detection of Viral Infection Viral growth was determined by flow cytometry in all the antiviral assays. Cells were dissociated with trypsin, placed in 1.5-mL conical tubes, centrifuged at 3000×g, and resuspended in 500 μL sterile PBS-EDTA [21]. Cells were fixed for 15 min with 2% paraformaldehyde. After removing the paraformaldehyde, the cells were washed with PBS and permeabilized with 0.3% Triton X-100. For intracellular viral detection, an anti-TLP polyclonal antibody produced in rabbit, with a titer of 1/3000 (kindly donated by Dr. Carlos Guerrero, Universidad Nacional) and directed against the VP6 proteins of the virus, was used. As a secondary antibody, an anti-rabbit IgG Alexa Fluor 488 (Invitrogen) diluted 1/2500 was used; staining was performed at room temperature in the dark. Cells were washed twice with PBS and resuspended in FACSflow until analysis. A FACS Aria II cytometer (Becton Dickinson) was used for the analysis, recording the percentages of positive cells; further analyses were performed with FlowJo software. Statistical Analysis ANOVA and Dunnett's tests were used as parametric statistical analyses in order to determine whether the percentage of infected cells and the presence of the viral antigen inside cells treated with probiotics or their protein-based metabolites were significantly different from the positive control (p < 0.05). Cytotoxicity Tests Cytotoxicity tests were performed to determine the maximum concentration at which bacteria or protein-based metabolites can be used on the MA104 cell line. For whole, viable bacteria, a concentration of 1 × 10^8 CFU/mL showed no toxicity toward the MA104 cell line by trypan blue staining, with cell viability higher than 90% in all cases. The viable-bacteria results show that the ten probiotic strains were not toxic for the MA104 cell line and could be used in further experiments. In the case of the bacterial protein-based metabolites, serial dilutions in DMEM were analyzed with the MTT technique; in general, a concentration of 100 μg/mL of each metabolite was non-toxic for the MA104 cell culture, with viability between 90 and 100% in the presence of the four metabolites. In contrast, when testing the undiluted metabolites, high toxic effects were observed (Fig. 1). Preliminary Screening of Bacteria with Potential Antiviral Activity The growth conditions of the ten probiotic strains are shown in Table 1. In the preliminary assay against RV infection, non-significant results were obtained; however, of the ten tested bacteria, L. casei, L. fermentum, B. adolescentis, and B. bifidum were the ones with the highest antiviral activity through a tentative blocking of viral entry, with reductions of the viral infection of 31, 37, 42, and 24%, respectively, measured by flow cytometry. In contrast, viral infection in the positive control (MA104 cells infected with the virus) was around 90% of positive cells.
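As a sketch of the statistical analysis described above, a one-way ANOVA followed by Dunnett's comparison of each treatment against the virus-only control could be run as follows. The infection percentages are placeholders, not the study's data, and scipy.stats.dunnett requires SciPy 1.11 or newer.

import numpy as np
from scipy.stats import f_oneway, dunnett

positive_control = np.array([88.0, 91.0, 90.5])    # % infected cells, virus only
l_casei = np.array([36.0, 39.0, 38.5])             # % infected cells, treated
b_adolescentis = np.array([25.0, 27.5, 26.0])

print(f_oneway(positive_control, l_casei, b_adolescentis))
res = dunnett(l_casei, b_adolescentis, control=positive_control)
print(res.pvalue)    # p < 0.05 -> significant reduction versus the control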
These four bacterial strains were selected for further inhibition experiments using the primary protein-based metabolites derived from their growth (Fig. 2, viable bacteria). After selection of the four strains with the best antiviral effect, the metabolites of each bacterium were recovered. After precipitation with PEG, quantification was performed with the BCA technique. Each metabolite was obtained in three independent culture batches for the antiviral assays; the protein amounts are given in Table 2. Inhibition of Viral Infection by Bacterial Protein-Based Metabolites: Pre-Treatment Assay A pre-treatment period of 90 min on MA104 cells with probiotic protein-based metabolites followed by viral infection did not show any difference between the treatments and the positive control. The percentage of infection in the treatments was between 70 and 75%, the same as in the positive control (71%). These results suggest that the protein-based metabolites did not exert a significant blocking effect on the cellular receptors involved in the viral infection (Fig. 2, pre-treatment). Reduction of Viral Infectivity: Co-Incubation Assay between Virus and Protein-Based Metabolites To determine whether the probiotic protein-based metabolites were able to interact directly with the virus, and thus affect viral attachment to the cells, the co-incubation assay was performed. The results showed a significant decrease in the viral infection. The percentage of infected cells in the presence of the B. adolescentis metabolites (26%) decreased markedly in comparison with the positive control (80%). A similar behavior was found in the presence of L. casei, where the percentage of infected cells was significantly reduced to 38%. Likewise, the protein-based metabolites of the other two bacteria also showed a significant result (P < 0.05), decreasing the percentage of infected cells to 54 and 50% (Fig. 2, co-incubation). Fig. 1 Cytotoxicity effect of bacterial metabolites tested in MA104 cells: (a) L. casei, (b) L. fermentum, (c) B. adolescentis, (d) B. bifidum. Note the 100% cell viability at 100 μg/mL of the metabolites of the four bacteria. Effect of Protein-Based Metabolites after Viral Infection In the last strategy analyzed, the only metabolite that achieved a significant reduction in the viral infection was the one obtained from B. adolescentis, with a P value of 0.001. The other three protein-based metabolites did not show any activity in this assay (Fig. 2, post-treatment). Discussion Taking into account that rotavirus infection is still one of the most important diseases affecting children under the age of five in developing countries [22,23], all possible alternatives directed at improving children's quality of life should be considered. Thus, the use of probiotics to counteract the effect of rotavirus in the human population arises as an important strategy to manage the disease. Although probiotics have been studied against rotavirus infection in several works [13,14,17,18], the specific mechanism by which the antiviral effect is mediated remains unclear. Even though probiotics are also used in the prevention and therapy of diarrhea, clinical trials have not established standardized conditions, nor a definition of the probiotic strains with the best results [11].
In particular, the objective of this study was to determine whether probiotics or their protein-based metabolites have the ability to interfere with the first steps of the viral cycle, viral adhesion and penetration, which are fundamental steps for the viral infection [24]. The first approach of this work was evaluated with whole, viable bacteria at an early stage of the viral cycle. It was expected that the viable bacteria would attach to the cell surface, colonize the MA104 cell monolayers, and thereby occupy the cellular receptors involved in in vitro adhesion [18]. In the preliminary screening assays, a blockage of viral entry was observed, which agrees with the proposals of other studies regarding the antiviral effect of probiotics [25][26][27][28][29]. Even though specific receptors were not tested, these preliminary results showed a reduction in the viral infection, measured by flow cytometry, for four out of the ten probiotic strains tested. One of these four bacteria was B. adolescentis, which has previously been reported as a potential microorganism with antiviral activity against different viruses [14,[30][31][32]. Now, in spite of the wide use of probiotics in many fields, there are a few cases where the use of viable bacteria could lead to opportunistic infections, allergic reactions, or autoimmune responses [33]. This is why the use of probiotic-derived products has arisen as a new field called metabiotics [34]. Experimental approaches tested with these probiotic-derived products have also shown interesting results [35]. In this study, the probiotic strains chosen from the screening assay were L. casei, L. fermentum, B. adolescentis, and B. bifidum, whose protein-based metabolic products were tested in further experiments. Of the three strategies evaluated (pre-treatment, co-incubation, and post-infection), the co-incubation assay of viral particles and protein-based metabolites showed the best antiviral activity in comparison with the other treatments. It prevented viral adhesion and/or penetration into the MA104 cells, possibly because of a direct interaction of the protein-based metabolites with external viral proteins such as VP7 or VP4. It is important to take into account that all experiments were performed with trypsin-activated viral particles, trypsin activation being a fundamental cleavage step of the viral proteins necessary for viral entry into the host cell [36]. On the other hand, in the pre-treatment and post-infection assays, the antiviral activity was expected to be mediated by a mechanism directed at the cells rather than at the viral particle. The hypothesis is that direct interactions with the cellular receptors, or intracellular regulatory processes, could be taking place. In the first case, according to the suggested mechanism of probiotics [18,37], the interaction of the protein-based metabolites with cellular receptors was expected to block the attachment of the virus to the cell surface, preventing viral entry. In the second case, the antiviral activity was proposed to be associated with intracellular regulation, as previously reported for another strain of B. adolescentis [38]. With this study, it can be said that the protein-based metabolites obtained from L. casei and B. adolescentis were able to block rotavirus entry through a direct effect on the viral particle, in contrast to the proposed hypothesis.
It is possible that the adhesion process to the MA104 cell receptors could not proceed efficiently because of an alteration of the external viral proteins, as observed in comparison with the positive control. Hence, the results obtained in this study are a preliminary approach toward analyzing the possible mechanism exerted by probiotic bacteria. An important point for further studies is to evaluate whether the activity of probiotic metabolites is dose-dependent, given that high concentrations of the metabolites were used in this study; the process could be optimized if lower concentrations are also able to exert antiviral activity. These results contribute to strengthening the knowledge that supports the activity of probiotic bacteria against gastrointestinal viral infections. Several mechanisms have been proposed, but the co-incubation strategy could be considered a novel approach against rotavirus; it could also be a possible alternative for crops that may be contaminated with RV in the field. Further studies are needed to completely understand the specific mechanisms involved in the antiviral activity of probiotics; in vivo and clinical approaches are necessary to verify this antiviral activity. In the future, a simple and inexpensive biological product with potential antiviral activity could be proposed, dispensed as a dietary supplement or perhaps in a drug formulation. Data are shown as the mean of three independent measures for each bacterium/batch. Ethical Statement This article does not contain any studies with human participants or animals performed by any of the authors. Funding This study was funded by Colciencias through the research project entitled "Búsqueda y caracterización preliminar de moléculas obtenidas a partir de bacterias probióticas para usarlas como posibles inhibidores de la infección in vitro por rotavirus y astrovirus" in national call 629 of 2009, and by Pontificia Universidad Javeriana. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
4,971.4
2017-04-21T00:00:00.000
[ "Biology" ]
An experimental investigation on the effect of nano-graphene oxide on the maximum peeling force of Al/GFRP laminates The effects of adding different percentages of nano-graphene oxide (NGO) to the epoxy resin on the mechanical properties of NGO/epoxy and on the maximum peeling force of fiber-metal laminates (FMLs) with composite cores are investigated in this paper. Three different percentages of NGO, 0.1, 0.35 and 0.50 weight percent (wt. %), are used to fabricate the NGO/epoxy tensile test samples. The hand lay-up method is used to fabricate FML specimens with a glass fiber reinforced plastic (GFRP) core and aluminum faces. Tensile tests are performed and the Young's modulus and ultimate stress of the NGO/epoxy samples are determined. It is found that after adding 0.1 wt. % of NGO to the epoxy resin, the ultimate tensile strength decreases by 13.4%, from 25.3 to 21.9 MPa. Moreover, using the peeling test, the separation force between the aluminum faces of the Al/GFRP laminates is obtained. Based on the results, when 0.1 wt. % of NGO is mixed with the epoxy, the maximum peeling force of the FML increases by 16% and reaches 4 N/mm. The results indicate that when NGO is used alongside micro-scale fibers in the construction of the composite core, it plays an effective role in the interlayer adhesion of fibrous composites. Introduction Fiber-metal laminate (FML) materials are a group of the composite family that have sandwich structures and are usually made of a fiber-reinforced plastic layer as a core and two metal layers as upper and lower faces. Because two different materials are employed in FML structures, they are also known as hybrid composites. 1 FMLs are significant parts in the structure of aircraft, automobiles, rail transport and many other means of human life. They have excellent mechanical, thermal and electrical properties, along with high corrosion resistance and an outstanding strength-to-weight ratio compared to conventional composite laminates. 2 Shanmugam et al. 3 combined a titanium alloy (Ti6Al4V) with ultrahigh molecular weight polyethylene fiber and made a thermoplastic FML. Myung et al. 4 investigated the mechanical properties and low-cycle fatigue behavior of woven-type glass-fiber reinforced plastic (GFRP) coated on one side of an Al 6061 aluminum alloy plate, as a function of the GFRP layer thickness. Hua et al. 5 investigated the interlaminar fracture toughness of glass-reinforced aluminum laminates with different fiber orientations using experimental and finite element methods. Khalid et al. 6 studied the interlaminar shear strength (ILSS) of carbon fiber reinforced aluminum laminates (CARALL), glass-reinforced aluminum laminates (GLARE) and aramid-reinforced aluminum laminates (ARALL) under different displacement rates. Their results indicated that the ILSS of CARALL was higher than that of GLARE and ARALL at all displacement rates. Jakubczak et al. 7 experimentally investigated the effect of thermal cycles on the ILSS and microstructure of CARALL modified by an additional glass interlayer; they stated that the strength does not depend on thermal cycles. Bieniaś et al. 8 presented the influence of hygrothermal conditioning of hybrid fiber metal laminates, consisting of alternating layers of a 2024-T3 aluminum alloy and carbon fiber reinforced polymer, on the delamination and ILSS of FMLs. Banat and Mania 9 studied the nonlinear buckling and failure of thin-walled FMLs under axial compression. Li et al.
10 investigated multilayers with combinations of different fiber directions and fiber types to face the complicated stress distributions in applications. Lopes et al. 11 developed a numerical code to predict the effects of manufacturing-induced porosity on the interlaminar shear strength of FMLs; they modeled porosity in the geometry of the interface by setting some of the interface elements to a pre-delaminated state. Yang et al. 12 investigated the effect of adding polydopamine nanoparticles on the mechanical properties of epoxy resin. Megahed et al. 13 experimentally investigated the effect of adding different nanoparticles on the mechanical properties of epoxy resin. Khalil et al. 14 examined the effect of adding aluminum nanoparticles on the interlayer adhesion between aluminum and epoxy resin. Jojibabu et al. 15 investigated the effect of carbon nanotube addition to epoxy resin and concluded that by adding the nanoparticles, the shear strength increased by 26%. Aghamohammadi et al. 16 investigated the effect of several surface treatments of aluminum alloy on the flexural behavior of an FML. Hosseini Abbandanak et al. 17 studied the effect of graphene nanoplatelets (GNPs) on the flexural and Charpy impact properties of FML plates. They observed that adding 0.1 wt. % of GNPs improved the flexural strength, modulus and impact strength, while adding 0.25 wt. % and 0.5 wt. % of GNPs reduced these attributes compared to FMLs containing 0.0 wt. % GNPs. Shifa et al. 18 showed that adding multi-wall carbon nanotubes to CARALL worsens the interlayer properties of CARALL, owing to the weakening of the interface between the face and core. Süsler et al. 19 examined the effect of surface treatments on hot-pressed GLARE laminates. They applied sandpapering, degreasing, and both degreasing and sandpapering to AA6061-T6 sheets in order to improve the ILSS; their work reported an increase between 29% and 37% for the surface-treated laminates compared with the untreated ones. The effect of nano-graphene oxide (NGO) dispersion in the epoxy resin on the mechanical characteristics and interlayer adhesion of FMLs is investigated in the current research. Different weight percentages of NGO (0.1, 0.35, and 0.5) are added to the epoxy resin and tensile test specimens are fabricated. The Young's modulus and ultimate tensile strength of the NGO/epoxy specimens are measured. Moreover, FML specimens including 0.1%NGO/glass/epoxy, 0.35%NGO/glass/epoxy and 0.50%NGO/glass/epoxy composite cores are manufactured. A universal tensile testing machine is utilized to determine the maximum peeling force of the FMLs with aluminum faces. Materials LR 630 epoxy resin and LH 630 hardener are used. T100 glass fibers are acquired from JSC Refractory (Belarus) Co., Ltd.; the fiber properties are listed in Table 1. Aluminum alloy 2024-T3 with 0.7 mm thickness is utilized. Multilayer NGO is used in this research. SEM (scanning electron microscopy) and TEM (transmission electron microscopy) images of the NGO layers are shown in Figure 1(a) and (b), respectively; as the figure shows, the thickness of the nanolayers is less than 100 nm. Fabrication process Dispersion of NGO in the epoxy resin. NGO is added to the epoxy resin as a nanofiller reinforcement. To achieve a homogeneous dispersion of NGO, mechanical mixing together with an ultrasonic bath is utilized. The epoxy resin is mixed with graphene oxide for 10 min by a magnetic stirrer (Figure 2(a)); the mixture is then placed in a 70 W ultrasonic bath for 22 min (Figure 2(b)).
These two steps are repeated twice to achieve a homogeneous mixture. Manufacturing of NGO/epoxy tensile test specimens. Resin tensile test samples were made according to the ASTM D638 standard. 21 The samples were fabricated with different weight percentages of NGO (0.1, 0.35, and 0.50 wt. %) and without NGO. To make the tensile test samples, a two-piece aluminum mold was employed, as shown in Figure 3(a). Epoxy tensile test specimens and NGO/epoxy samples are illustrated in Figures 3(b) and 4, respectively. Manufacturing of FML plates. In order to prevent separation and to achieve proper bonding between the metal faces and the composite core, the surfaces of the aluminum sheets were cleaned of any contamination, including oil and grease. This was done in four steps using solutions containing acetone, NaOH, and sulfo-chromic acid. After each step the aluminum sheets were rinsed with hot water. Figure 5(a) to (d) shows these steps. The FML specimens for the peeling tests were made according to ASTM D1876. 22 The interlayer composite cores of the peeling test samples consist of four layers of woven glass fibers and epoxy resin. Four FML plates, with glass/epoxy, 0.1%NGO/glass/epoxy, 0.35%NGO/glass/epoxy and 0.50%NGO/glass/epoxy cores, were fabricated by the hand lay-up method. Further, a vacuum pump was employed in order to press the aluminum faces and the composite core layers of the FML together. The FML specimens were cured under vacuum for 24 h at room temperature. Figure 6 shows the steps of the manufacturing process of the FML plates. Tensile test of neat epoxy and NGO/epoxy specimens An Instron 8802 universal testing machine is used to perform the tensile tests (Figure 7(a)). A specimen under tensile load is shown in Figure 7(b). An extensometer is employed to measure the change in the gauge length of the sample. In order to determine average values of the results, three identical samples of each material type were tested. Peeling test Generally, the goal of a peeling test is to determine the adhesive strength of a material or the strength of the adhesive bond between two materials. An Instron 8802 universal testing machine is used to execute the peeling tests. Three T-peel specimens are extracted from each FML plate. The bent and unbonded ends of the fabricated test specimen (Figure 8(a)) are fixed in the grips of the tension testing machine. A schematic of the T-peel test is shown in Figure 8(b). Tensile test Neat epoxy and NGO/epoxy specimens with different wt. % of NGO are subjected to tensile testing and their Young's modulus and ultimate stress values are extracted. Three samples of each type of material are tested. The average values along with the standard deviation for Young's modulus and ultimate stress are shown in Figures 9 and 10, respectively. For a better comparison, these values, together with the change of each parameter with respect to the sample without graphene, are summarized in Table 2. As the results show, the addition of nano-graphene to epoxy does not have much effect on the value of Young's modulus. However, adding 0.1 wt. % of NGO to the epoxy resin reduces the ultimate stress by 13.6%. It seems that in these samples the nanoparticles act as impurities and cause stress concentrations. By adding a higher percentage of nanoparticles to the resin, the ultimate stress shows increases of 6.6% and 11.7% compared to the sample without nanoparticles, reaching values of 27 and 28.3 MPa for 0.35%NGO/epoxy and 0.50%NGO/epoxy, respectively.
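A quick way to sanity-check the reported relative changes is to recompute them from the mean values quoted above; the short Python sketch below does this. The numeric values are the ones stated in the text (Table 2 of the paper holds the authoritative means), and the small differences from the quoted percentages are attributable to rounding of the tabulated values.

# Recompute relative changes in ultimate tensile strength of NGO/epoxy
# samples from the mean values quoted in the text (MPa). The dictionary
# labels are illustrative; Table 2 of the paper is the authoritative source.
ultimate_stress = {
    "neat epoxy": 25.3,
    "0.1% NGO/epoxy": 21.9,
    "0.35% NGO/epoxy": 27.0,
    "0.50% NGO/epoxy": 28.3,
}
reference = ultimate_stress["neat epoxy"]
for sample, sigma_u in ultimate_stress.items():
    change = 100.0 * (sigma_u - reference) / reference
    print(f"{sample}: {sigma_u:.1f} MPa ({change:+.1f}% vs neat epoxy)")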
Peeling test Having performed the peeling tests on the FML beams with different core materials, the separation force per unit width (T) versus the displacement between the two aluminum faces of the FML samples is obtained. Force-displacement graphs for FML specimens with glass/epoxy, 0.1%NGO/glass/epoxy, 0.35%NGO/glass/epoxy and 0.5%NGO/glass/epoxy cores are illustrated in Figures 11 to 14, respectively. The maximum peeling force per unit width (T max) is extracted from each graph, and the average values, along with standard deviation error bars, are illustrated in Figure 15. According to the results, the addition of 0.1 wt. % of NGO to the epoxy resin of the core material increases the value of T max of the FML, while adding a larger weight percentage of NGO to the resin decreases the interlayer adhesion. The maximum peeling forces of all FML specimens, along with the percentage change of this parameter, are summarized in Table 3. According to the results, by adding 0.1 wt. % of NGO to the epoxy resin of the glass/epoxy core, T max is increased by 16% and amounts to 4 N/mm. As the table shows, adding larger percentages of NGO to the epoxy resin of the glass/epoxy core decreases the maximum peeling force. T max equals 2.64 N/mm for FML samples with a 0.5%NGO/glass/epoxy core, which indicates a 24.5% decrease with respect to that of the FML with a glass/epoxy core. Although, according to Table 2, adding 0.1 wt. % of NGO to the epoxy resin decreases the ultimate stress of the NGO/epoxy material, the improved adhesion between the 0.1%NGO/glass/epoxy layers of the FML drives the increase of the maximum peeling force of these specimens with respect to that of the FML with a glass/epoxy core. After separation of the aluminum faces of the FMLs, photographs of the separated surfaces were taken. The upper and lower aluminum faces of FML specimens with glass/epoxy and 0.1%NGO/glass/epoxy cores are shown in Figure 16. As the figure shows, in both cases (Figure 16(a) and (b)), adequate adhesion is observed between the core and the aluminum faces, and the separation occurred in the region between the layers of the composite core. As can be seen in Figure 16(a), the separated surface has a uniform appearance in the FML with a glass/epoxy core, while in Figure 16(b), the FML with a 0.1%NGO/glass/epoxy core, the separated surface is non-uniform. This can be related to improved adhesion between the layers of the 0.1%NGO/glass/epoxy composite core in some regions. Conclusion In this study, the tensile properties of epoxy reinforced with different percentages of nano-graphene oxide (NGO) are presented. Neat epoxy, 0.1%NGO/epoxy, 0.35%NGO/epoxy and 0.50%NGO/epoxy tensile specimens are examined. Based on the results, a 13.6% reduction of the ultimate stress is observed for 0.1%NGO/epoxy with respect to that of neat epoxy. Further, employing the peeling test, the adhesion between the upper and lower aluminum faces of FMLs with composite cores is investigated. FML specimens with glass/epoxy, 0.1%NGO/glass/epoxy, 0.35%NGO/glass/epoxy and 0.5%NGO/glass/epoxy cores are tested. As the results indicate, the maximum peeling force of the FML with a 0.1%NGO/glass/epoxy core is increased by 16% with respect to that of the FML with a glass/epoxy core.
It is concluded that although the addition of NGO to the epoxy resin reduces the ultimate stress of the NGO/epoxy material, when this nanofiller is used in the NGO/glass/epoxy composite structure, it can improve the interlayer adhesion.
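For readers reproducing the peel analysis: in a T-peel test of this kind, the peel strength per unit width is normally obtained by averaging the peel force over the steady-state portion of the force-displacement trace and dividing by the specimen width. A minimal Python sketch of that data-reduction step is given below; the steady-state window bounds, specimen width and synthetic trace are illustrative assumptions, not values taken from the paper.

import numpy as np

def peel_strength(force_N, displacement_mm, width_mm, window=(0.25, 0.90)):
    # Average peel force per unit width (N/mm) over a steady-state window,
    # given as fractions of the displacement range; this windowing
    # convention is an assumption made for illustration.
    d = np.asarray(displacement_mm)
    f = np.asarray(force_N)
    lo = d.min() + window[0] * (d.max() - d.min())
    hi = d.min() + window[1] * (d.max() - d.min())
    mask = (d >= lo) & (d <= hi)
    return f[mask].mean() / width_mm

# Hypothetical trace: a 25 mm wide specimen whose steady-state load hovers
# around 100 N gives T ~ 4 N/mm, the order of magnitude reported above.
disp = np.linspace(0.0, 60.0, 600)
force = 100.0 + 5.0 * np.sin(disp)   # synthetic data, for illustration only
print(f"T ~ {peel_strength(force, disp, width_mm=25.0):.2f} N/mm")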
3,060.4
2023-06-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Evidence-based software portfolio management: a tool description and evaluation Context: In this paper we describe and evaluate a tool for Evidence-Based Software Portfolio Management (EBSPM) that we developed over time in close cooperation with software practitioners from The Netherlands and Belgium. Objectives: The goal of the EBSPM-tool is to measure, analyze, and benchmark the performance of interconnected sets of software projects in terms of size, cost, duration, and number of defects, in order to support innovation of a company's software delivery capability. The tool supports building and maintaining a research repository of finalized software projects from different companies, business domains, and delivery approaches. Method: The tool consists of two parts: first, a Research Repository, currently holding data of 490 finalized software projects from four different companies; second, a Performance Dashboard, built around a so-called Cost Duration Matrix. Results: We evaluated the tool by describing its use in two practical applications in industrial case studies. Conclusions: We show that the EBSPM-tool can be used successfully in an industrial context, especially for its benchmarking and visualization purposes. INTRODUCTION Benchmarking is an important part of learning how and where to innovate for software companies. In this paper we describe a software project benchmark tool that compares the performance of software projects in terms of cost, duration, number of defects, and size with a measurement repository of finalized software projects from different companies. For a period of seven years and ongoing, we collected performance data of finalized software projects in industry, in close cooperation with a number of large banking and telecom companies in The Netherlands and Belgium. Based on this we built a research repository of core metrics data of, by now, approximately 500 software projects. We noticed four shortcomings During the collection and analysis of project data in industrial practice we experienced four major shortcomings. First, many software companies and software researchers look upon success and failure of software projects in a way that is strongly tied to realizing the estimated plan, as for example described in the Standish CHAOS research [2]. In recent research we argue that this focus might be wrong, because finalizing a project according to its plan does not imply that the achieved performance is good too (maybe the plan was just bad?) [3]. Second, although many software benchmark repositories are available (Jones [4] identified in 2011 no fewer than twenty-five sources of software benchmarks; Menzies and Zimmermann [5] mention thirteen repositories of software engineering data), we experience in practice that many practitioners and researchers struggle with the question of how to convince decision makers based on facts and evidence from benchmark repositories. Although some more or less open solutions exist (e.g. ISBSG, Promise), the majority of the benchmark sources are data collections that are only available from commercial companies. Most benchmark sources give no insight into the raw data; commercial benchmarks in particular tend to offer trends and aggregated data only. Third, almost none of the benchmark sources include cost data of software projects as a source for productivity. All benchmarks use effort as a core metric for productivity indicators. In itself, the choice of effort as a core metric for productivity is correct.
In the practice of collecting data for our research repository, however, we noticed that it is challenging to collect reliable effort data of software projects. Fourth, in practice this vast variety of benchmark sources makes it difficult for practitioners to decide how to implement a mature measurement and analysis capability that suits the needs of a company's decision makers. As an effect of, on the one hand, the great variety of benchmark approaches and, on the other, the low degree of standardization, we observe that a change of a measurement and analysis approach can often be linked with changes in decision makers and with changes in the primary development approach. As an example: a misunderstanding that apparently lives in many companies is that "going agile means opting for a new measurement approach too". A conceptually new solution In order to deal with these shortcomings, and driven by the somewhat lagging evidence-based software engineering capability in both research and industry [6], and by our conviction that software companies should pay more attention to evidence-based software engineering from a software portfolio point of view, we developed a conceptually new instrument, the Cost Duration Matrix. The primary goal of this matrix is to identify good practice software deliveries and bad practice software projects within the scope of a company's software delivery portfolio as a whole, or within the scope of a broader, industry-wide portfolio. Subsequently, from this subdivision we performed further analysis of factors that could be strongly related to good practice and factors that could be linked to bad practice [1]. In our analysis approach we compare all sorts of software projects, whether these are plan-driven projects, repeating iterations in a release, or deliveries after one or more sprints in Scrum, as part of a software portfolio as a whole. In earlier research we showed how analyzing software delivery portfolios in a binary way (e.g. better than average or worse than average), yet from the angle of different metrics (e.g. cost, duration, defects), can help software companies to define success or failure and to understand where realistic improvements are achievable [1]. We named our approach Evidence-Based Software Portfolio Management (EBSPM). Hence, we refer to our tool as the EBSPM-tool. In particular, the main contributions of the tool are twofold. First, it positions a Cost Duration Matrix as the core instrument within a Performance Dashboard for the analysis of good practice and bad practice in company-wide portfolios of software projects. Second, it provides a research repository, holding data of industrial software projects from different companies, on a standardized set of metrics: size, cost, duration, and defects. The remainder of this paper is organized in the following way: In Section 2 we outline relevant prior work. In Section 3 we describe the EBSPM-tool and its functional components.
In order to evaluate the EBSPM-tool, we describe in Section 4 how the tool was used in two scenarios in industrial practice. In Section 5 we discuss the applicability of the tool for research purposes, in a practical setting in industry, and in education. Finally, Section 6 includes a summary of the current status of the EBSPM-tool. RELATED WORK Although many sources for benchmarking of software engineering are available [4] [5], the need for good economic models will grow rather than diminish as software becomes increasingly ubiquitous [7]. There is a lack of successful and agreed-upon standard metrics sets and selection processes [8]. We assume that especially follow-up studies with regard to software estimation techniques and algorithms are needed. THE EBSPM-TOOL The EBSPM-tool offers two basic (initial) features: a Research Repository, and a Performance Dashboard including a Cost Duration Matrix. These initial features are described in the following paragraphs. Research Repository An important feature of the EBSPM-tool is the availability of a research repository, including data of approximately 500 finalized software projects from different software companies in The Netherlands and Belgium, which is the source for the benchmark functionality. Table 1 gives an overview of the research repository, including the different aspects that are analyzed in our research. The data in the research repository is stored in an MS Excel file. All software deliveries within the repository were measured over a period of nine years in four different companies (banking, telecom, and a supplier of a business-to-business software solution). Deliveries were measured by experienced, often certified, specialists. Delivery data was based on formal project administrations and reviewed by stakeholders (e.g. project managers, product owners, finance departments, project support). All deliveries were reviewed thoroughly by the lead author before they were included in the research repository. An important difference between the data in the EBSPM research repository and other software engineering repositories is that, where possible, we collected a company's software project portfolio as a whole. Where other repositories focus on projects, we focus on portfolios instead. This enables us to analyze good practice versus bad practice. A second major difference is that, instead of focusing on collecting effort data of software projects, we focus on collecting cost data besides effort data. Performance Dashboard In order to visualize the outcomes of the analysis, we built a Performance Dashboard in the Business Intelligence (BI) solution Tableau. (Figure 1 shows the performance dashboard in the EBSPM-tool with a sample of 172 projects from the ISBSG [27] repository plotted against the EBSPM research repository; the size of a data point indicates the size of the software project in FPs, and the color indicates the quality, with redder circles indicating more defects per FP.) The core component within the Performance Dashboard (see Figure 1) is a Cost Duration Matrix. Each software project is depicted as a circle: the larger the circle, the larger the project (in function points), and the redder the circle, the more defects per function point were found. The position of each project in the matrix represents the cost and duration deviation of the delivery relative to the benchmark, expressed as percentages. The horizontal and vertical 0%-lines represent zero deviation, i.e.
projects that are exactly consistent with the benchmark. A delivery at (0%, 0%) would be one that behaves exactly in accordance with the benchmark; a delivery at (-100%, -100%) would cost nothing and be ready immediately; and a delivery at (+100%, +100%) would be twice as expensive and take twice as long as expected from the benchmark. Based on these percentages, all deliveries from the repository are plotted in a Cost Duration Matrix, resulting in four quadrants: 1. Good Practice (upper right): projects that score better than the average of the total repository for both cost and duration. 2. Cost over Time (bottom right): projects that score better than the average of the total repository for cost, yet worse than average for duration. 3. Bad Practice (bottom left): projects that score worse than the average of the total repository for both cost and duration. 4. Time over Cost (upper left): projects that score better than the average of the total repository for duration, yet worse than average for delivery cost. The overall performance of the portfolio is furthermore summarized through the two red 'median' lines. For each project in the research repository three Key Performance Indicators are calculated as a measure of performance: Cost per FP, Duration per FP, and Defects per FP. EVALUATION In order to evaluate the usefulness of the EBSPM-tool, we describe its application in two recent case studies [3] [26] that we performed in industry. Within these studies we used the EBSPM-tool for analysis, benchmarking and visualization purposes. In both case studies we used the tool to analyze and visualize a specific subset of software projects against the tool's repository. In the first study [3] we analyzed a sample of 22 software projects that were performed within a Belgian telecom company. All sampled projects were in scope of an electronic survey among stakeholders from IT and business departments that were involved in a project. In the survey we assessed stakeholder satisfaction and perceived value. In the subsequent analysis we compared the outcomes of the survey with quantitative project metrics such as project size, project cost, project duration, number of defects, and the Estimation Quality Factor (EQF) for both duration and cost. We calculated a Cost Duration Index, based on the relative position of a project in the Cost Duration Matrix of the EBSPM-tool [3]. In the second study [26] we performed causal analysis on a sample of nine software releases and eight single once-only projects, all performed on the same CRM system in a Belgian telecom company. In order to analyze the performance of each project, we interviewed eleven stakeholders from both IT and business on the backgrounds of the projects. The case study resulted in a number of observations from both quantitative and qualitative analysis [26]. In both studies we used the EBSPM-tool to visualize the applicable research sample against the EBSPM research repository as a whole. We used the Performance Dashboard in two ways. First, we included screenshots of the dashboard with a sample of projects in the applicable research papers [3] [26]. Second, we made intensive use of the Performance Dashboard as an interactive visualization during presentations about our case studies to stakeholders within the Belgian telecom company.
The fact that the dashboard enables fast interactions and fast selections on company-in-scope, business domain, project name, and development method helped us to gain management commitment and to advise on actionable follow-up steps. Evaluation of Validity We observe large differences between the two repositories. The difference in the average size of projects in the two repositories might partly explain these. However, within the somewhat limited scope of this tool description we cannot argue about any causes that might explain these dissimilarities. Further research is needed to find the backgrounds of these remarkable differences. Impact / Implications What can we do with the EBSPM-tool and the results from this short study? We try to answer this question from three angles: research, practice, and education. Research. We make our research repository available to other researchers for research and education purposes. We are currently discussing how we can make the repository publicly available via Promise, with regard to compliance issues with the cooperating companies. Based on the example given in Figure 1, we argue that further research is needed on the differences found between the ISBSG data and our research repository. Practice. Although a vast number of benchmarking sources can already be found, we observe that collecting historic data of software deliveries is in itself a task that teaches data analysts and decision makers in software companies a lot about their software delivery capability. The outcome of the evaluation of the EBSPM-tool in an industrial context is that companies should be careful in adopting one single source of benchmarking as the truth, especially when it was collected outside their own environment. As a spin-off from our research on EBSPM, we are currently cooperating with a number of software companies in order to further improve our analysis approach and the characteristics of the metrics to be collected (e.g. we now focus on value and satisfaction too), and to enlarge and further evaluate the content of our research repository. Education. Software engineering economics, especially from an evidence-based point of view, is usually included in university curricula only as part of other disciplines. We think that the EBSPM-tool can help to clarify the importance of economic aspects of software development to students and teachers, as it visualizes the somewhat diffuse and large content of a company's software portfolio repository in a conveniently arranged way. CONCLUSIONS In order to evaluate the EBSPM-tool we looked at the applicability of both the research repository and the Performance Dashboard in an industrial context. We found that the tool supports analysis of software project and portfolio performance. Besides that, the tool enables both internal and external benchmarking. A major difference with other tools is that the EBSPM-tool values project cost data above project effort.
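As a concrete illustration of the Cost Duration Matrix logic described above, the Python sketch below computes the three size-normalized KPIs, derives deviations against assumed benchmark values, and classifies a delivery into one of the four quadrants. All field names and benchmark numbers are illustrative assumptions, not the actual schema of the EBSPM research repository; the text defines the quadrants by better/worse than the repository average, so the screen positions ('upper right', etc.) are merely the dashboard's axis convention.

# Hypothetical project record; field names are illustrative only.
project = {"cost": 800_000, "duration_days": 240, "defects": 90, "size_fp": 600}

# Size-normalized KPIs as named in the text: Cost/FP, Duration/FP, Defects/FP.
kpis = {
    "cost_per_fp": project["cost"] / project["size_fp"],
    "duration_per_fp": project["duration_days"] / project["size_fp"],
    "defects_per_fp": project["defects"] / project["size_fp"],
}

def deviation_pct(actual, benchmark):
    # 0% = exactly consistent with the benchmark; -100% = zero cost/duration.
    return 100.0 * (actual - benchmark) / benchmark

def quadrant(cost_dev_pct, duration_dev_pct):
    # Negative deviation = better than the benchmark.
    better_cost, better_duration = cost_dev_pct < 0, duration_dev_pct < 0
    if better_cost and better_duration:
        return "Good Practice"
    if better_cost:
        return "Cost over Time"   # cheaper than benchmark, but slower
    if better_duration:
        return "Time over Cost"   # faster than benchmark, but more expensive
    return "Bad Practice"

cost_dev = deviation_pct(kpis["cost_per_fp"], 1_500.0)       # assumed medians
duration_dev = deviation_pct(kpis["duration_per_fp"], 0.35)
print(kpis, quadrant(cost_dev, duration_dev))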
3,652.6
2016-06-01T00:00:00.000
[ "Computer Science" ]
Magnetogravitational instability in strongly coupled rotating clumpy molecular clouds including heating and cooling functions Molecular cloud (MC) formation is caused by the gravitational collapse mechanism and is significantly affected by radiative heating and cooling processes. This paper analyzes the gravitational instability in strongly coupled clumpy molecular clouds (MCs) under the effects of uniform rotation, magnetic field, and heat-loss functions. The generalized hydrodynamic equations, coupled with a modified energy equation (which incorporates the heating and cooling effects due to cloud-cloud collisions), are used to describe the mathematical model. Following a Jeans stability analysis, it is found that the value of the critical Jeans wavenumber decreases due to the strong coupling between the plasma particles (coupling parameter) and the clump stirring processes (heating rate), so both have a stabilizing influence on the onset of gravitational collapse in clumpy MCs. The influence of various parameters on the growth rate of the instability is discussed numerically, and it is found that the cooling rate parameter that describes cloud-cloud collisions has a destabilizing effect. The region of instability is observed to be smaller in the strongly coupled clumps (kinetic limit) than in the weakly coupled (hydrodynamic limit) clumps. The results are helpful for understanding the role of heating and cooling mechanisms in MC formation. Introduction The radiative transfer mechanism in astrophysical systems plays a decisive role in the pre-stage of the star formation process (Juvela 2011). In the interstellar medium (ISM), the thermal processes allow the gas to become denser and hotter, providing the necessary conditions for gravitational collapse. ISM observations report that the radiative heat-loss mechanism, via thermal instability, plays an important role in the star formation process in molecular clouds (MCs) (Fukue and Kamaya 2007). Wurster et al. (2018) studied the radiation and non-ideal magnetohydrodynamic (MHD) effects, e.g., ambipolar diffusion, Ohmic resistivity, and the Hall effect, on the collapse of rotating and magnetized molecular cloud cores. Hegmann and Kegel (1996) presented a radiative transfer model in the form of a Fokker-Planck equation, considering the density fluctuations in the system. Hennebelle and Inutsuka (2019) explored the influence of magnetic fields on the formation of pre-stellar dense cores and stellar clusters. Molaro et al. (2016) suggested a new method for probing global properties of the clump and core population in giant molecular clouds (GMCs), based on the study of their overall effect on the reflected X-ray signal. Bethell et al. (2007) calculated the photoionization rates in clumpy MCs. Micic et al. (2013) studied the influence of the cooling function on the formation of MCs, using high-resolution three-dimensional simulations of converging flows.
MCs are the main sites of star formation, in which the equilibrium temperature is maintained by the balance of heating and cooling mechanisms (Hollenbach 1988). Among the radiative processes at work in the star formation mechanism, the Jeans instability plays a fundamental role in the gravitational collapse of a gas cloud. Jeans (1902) described the conditions under which a region of gas or plasma can collapse gravitationally and form a new structure, such as a star or a planet. The minimum perturbation wavenumber for which the Jeans instability is excited in a system is expressed by the Jeans instability criterion as k < k_J = (4πGρ_0)^{1/2}/c_s, where G is the gravitational constant, ρ_0 is the gas density, and c_s is the sound speed. If the wavenumber of perturbations in the system is smaller than the critical Jeans wavenumber, then the system will be unstable and collapse under its own gravity, leading to the formation of structures such as stars and galaxies. In other words, if the size of the perturbation is larger than the critical wavelength of order λ_J = 2π/k_J, then the enhanced self-gravity can overpower the excess pressure, so the perturbation grows. Motivated by the work of Jeans (1902), many authors have shown keen interest in exploring the mechanism of star formation and have studied the Jeans instability in various astrophysical systems, such as gaseous clouds (Chandrasekhar 1961), clumpy MCs (Elmegreen 1989), strongly coupled media (Janaki et al. 2011; Prajapati and Chhajlani 2013; Dhiman and Sharma 2014; Dasgupta and Karmakar 2019; Dolai and Prajapati 2020), radiative MCs (Prajapati 2022), and a self-gravitating optically thick gas-and-dust medium (Kolesnichenko 2023). A strongly coupled medium refers to a state of matter where the interactions between charged particles, such as ions and electrons, are so strong that the particles interact with each other through long-range electric and magnetic forces, which can result in the formation of complex structures and patterns. The present work aims to examine the role of heating and cooling effects on the growth rate of the Jeans instability in a strongly coupled fluid (for which the coupling parameter Γ_j ≫ 1, i.e., when the average potential energy dominates the kinetic energy of the particles). Previous works include the effects of viscoelastic coefficients (Janaki et al. 2011), electrical resistivity (Prajapati and Chhajlani 2013), rotation (Dhiman and Sharma 2014) and quantum corrections (Sharma and Chhajlani 2014) on the Jeans instability in strongly coupled viscoelastic fluids. Sharma et al. (2015) investigated the Jeans instability of rotating viscoelastic fluids including magnetic fields. Dhiman and Mahajan (2023) studied the gravitational instability in strongly coupled viscoelastic clumpy MCs considering dissipative effects; however, the effects of the magnetic field and rotation were ignored in the considered configuration. Recently, Yang et al. (2023) studied the Jeans instability in viscoelastic astrofluids using Eddington-Inspired-Born-Infeld (EiBI) gravity and found that a positive EiBI gravity parameter and the effective generalized fluid viscosity act as stabilizing agents, whereas a negative EiBI parameter and the viscoelastic relaxation time act as destabilizing factors. From the above studies it is evident that strong coupling effects, in terms of viscoelastic coefficients, play a dominant role in investigations of the gravitational instability in viscoelastic fluids.
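For completeness, the standard argument behind the quoted criterion fits in two lines. Linearizing the ideal self-gravitating fluid equations about a uniform background gives the textbook dispersion relation below (e.g. Chandrasekhar 1961); this is the baseline that the strongly coupled, rotating, magnetized model of the present paper modifies:

\omega^{2} = c_{s}^{2}k^{2} - 4\pi G\rho_{0},
\qquad
\omega^{2} < 0 \iff k < k_{J} = \frac{(4\pi G\rho_{0})^{1/2}}{c_{s}},
\qquad
\lambda_{J} = \frac{2\pi}{k_{J}} = c_{s}\sqrt{\frac{\pi}{G\rho_{0}}}.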
The presence of magnetic fields in MCs is essential for understanding their role in the evolution of dense clouds and the star formation process (Braine et al. 2020; Crutcher 2012). Körtgen and Banerjee (2015) showed that dense cores of MCs can build up under all conditions; however, the star formation process in these cores is either delayed or completely suppressed if the initial field strength is B > 3 μG. Ibáñez-Mejía et al. (2022) discussed the competition between gravity and magnetic fields in the star-forming regions of MCs. Recently, Patidar et al. (2023) investigated the influence of the magnetic field on the Jeans criteria for a quantum plasma and found that highly magnetized white dwarfs have a different mass-radius relation than their non-magnetic counterparts, which results in a modified super-Chandrasekhar mass limit. In addition, rotation is a characteristic feature of astrophysical systems such as accretion disks, circumstellar disks, and MCs. The spin angular momenta of the clouds may be due to the orbital rotation of the gas. Observational data show that GMCs have spin rotation with angular momenta of the order of 100 km s^−1 pc per unit mass (Chernin and Efremov 1995). The role of the magnetic field and rotation in gravitational collapse and the subsequent star formation processes has been studied previously by many authors. However, to the best of our knowledge, none of them has studied the combined effects of heating and cooling functions, magnetic field, and rotation on the gravitational instability of strongly coupled clumpy MCs. In this paper, we analyze the gravitational instability of strongly coupled MCs and investigate the effects of rotation, magnetic field, and heat-loss functions on the onset of gravitational collapse. Here, we consider the modified energy equation for an ideal fluid with heating and cooling functions (Elmegreen 1989) along with the standard fluid equations. The paper is organized as follows: Sec. 1 is introductory in nature and also includes a review of the literature. In Sec. 2, the mathematical model is constructed for the rotating and magnetized clumpy MCs, using the generalized hydrodynamic equations, which include the effect of the Coulomb coupling parameter Γ_j. In Sec. 3, the linearized perturbed equations are obtained, the general dispersion relation is derived using plane wave solutions, and the results are discussed for the parallel wave propagation mode in both the strongly (kinetic) and weakly (hydrodynamic) coupled limits. In Sec. 4, the outcomes of the present work are summarized. Mathematical model Consider infinite, homogeneous, self-gravitating clumpy MCs in which a uniform rotation Ω(0, 0, Ω_z) and a uniform magnetic field H(0, 0, H_0) act simultaneously. The considered medium is strongly coupled in nature and hence exhibits both viscous and elastic behaviour in terms of the viscoelastic coefficients. Following Kaw and Sen (1998), the basic equations for the considered system using the generalized hydrodynamic (GH) model are given by (Elmegreen 1989; Janaki et al.
2011; Dhiman and Sharma 2014): the equation of continuity; the equation of motion; the magnetic induction equation; and the Poisson equation for the gravitational field. Here, g = −∇φ, φ is the gravitational potential, and u, P and ρ represent the fluid velocity, fluid pressure, and fluid density, respectively. τ_m is the relaxation time or memory parameter, which depends on the viscoelastic coefficients (the bulk viscosity ζ and the shear viscosity μ). There has been a great deal of interest among researchers in analyzing the role of the radiative heat-loss mechanism in MCs, due to its significance in structure formation. Field (1965) studied the effects of temperature- and density-dependent radiative heat-loss functions on the thermal stability of a dilute, non-gravitating gas. Later, this problem was extensively studied by many authors, considering the effects of finite Larmor frequency and electrical resistivity (Bora and Talwar 1993), uniform rotation and magnetic field (Aggarwal and Talwar 1969), ion-neutral collisions (Fukue and Kamaya 2007), the Hall effect and electron inertia (Prajapati et al. 2010), dust temperature and viscoelastic effects (Sharma et al. 2020) and the porosity of the medium (Kaothekar et al. 2022). In these works, temperature- and density-dependent radiative heat-loss functions are considered, and it is found that the fundamental thermal instability criterion remains unchanged under the considered effects. However, when we consider cloud-cloud collisions, the heating and cooling functions are taken to be independent of time (Elmegreen 1989), and these functions differ from those considered by Field (1965) in his work. In the present work, we focus on the physical conditions of molecular cloud clumps to analyze the Jeans instability of strongly coupled fluids in the presence of rotation and magnetic fields, considering the heating and cooling effects. The energy equation is therefore taken as in Elmegreen (1989), where γ is the ratio of specific heats. The term Λ represents the cooling function, which is attributed to cloud-cloud collisions with an isotropic, Maxwellian distribution of cloud velocities; here M_c, R_c and σ_c respectively denote the average masses, radii, and column densities of the colliding clouds. Further, Γ = Γ_e ρ (Λ_0/ρ_0) describes the heating rate, where Γ_e and Λ_0 are the values of the heating and cooling rates in the equilibrium state. Dispersion relation The dispersion relation for the considered configuration is obtained using a first-order perturbation method. The basic equations are linearized, considering infinitesimally small perturbations, where ρ_1, g_1, P_1, u_1(v_x, v_y, v_z), and H_1(h_x, h_y, h_z) denote the perturbations in fluid density, gravitational field, fluid pressure, fluid velocity and magnetic field, respectively. All the above quantities with subscript "0" represent the value of that quantity at equilibrium. In the stability analysis of a gravitating fluid the streaming effects are neglected; hence we put u_0 = 0.
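For orientation, the generalized hydrodynamic set underlying equations (1)-(4), in the form used by closely related viscoelastic Jeans-instability studies (e.g. Janaki et al. 2011; Dhiman and Sharma 2014), reads as follows; this is a standard-form sketch in the present notation, not a verbatim reproduction of the paper's equations:

\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0,
\qquad
\frac{\partial \mathbf{H}}{\partial t} = \nabla\times(\mathbf{u}\times\mathbf{H}),
\qquad
\nabla^{2}\phi = 4\pi G\rho,

\left(1+\tau_{m}\frac{\partial}{\partial t}\right)
\left[\rho\left(\frac{\partial}{\partial t}+\mathbf{u}\cdot\nabla\right)\mathbf{u}
+\nabla P+\rho\nabla\phi
-\frac{(\nabla\times\mathbf{H})\times\mathbf{H}}{4\pi}
-2\rho\,(\mathbf{u}\times\boldsymbol{\Omega})\right]
=\mu\nabla^{2}\mathbf{u}+\left(\zeta+\frac{\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}).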
To check the stability of the perturbed system, the arbitrary perturbations can be decomposed into a complete set of normal modes and analyzed individually. We therefore assume plane wave solutions for each of the perturbed quantities, where ω is the perturbation frequency of the harmonic disturbance and k_z is the wavenumber. This allows us to substitute ∇ = ik_z ẑ and ∂/∂t = ω in the perturbed equations obtained after substituting the perturbed quantities (7) into equations (1)-(5), yielding the corresponding linearized equations, in which ω_c = (γ − 1)Λ_0/(2ρ_0 c²) represents the cooling rate (Elmegreen 1989) and is approximated as the inverse of the cloud-cloud collision time. Equation (13) can be simplified to give (14); using the values of ρ_1, h_x, h_y, g_1 and P_1 from equations (10)-(12) and (14) in equation (9), we obtain, after some simplification and setting k_z = k, the dispersion relation (15). Equation (15) is the modified form of the dispersion relation for the clumpy cloudy medium in the presence of uniform rotation, magnetic fields, and viscoelastic effects. If the effects of rotation and magnetic field are neglected, we obtain the same dispersion relation as Dhiman and Mahajan (2023), with dissipative effects neglected in that case. Further, if we neglect the viscoelastic coefficients (ζ = μ = 0), the rotational frequency (Ω_z = 0), and the magnetic field intensity (H_0 = 0), equation (15) reduces to the dispersion relation obtained by Elmegreen (1989). Also, if the clump stirring processes and cloud-cloud collisions are neglected (i.e. Γ_e = ω_c = 0), then we obtain the dispersion relation obtained by Prajapati and Chhajlani (2013) for an infinitely conducting medium without rotation. The dispersion relation (15) is useful for discussing the Jeans instability in a molecular cloud clump undergoing heating and cooling mechanisms. To obtain the instability criterion and the effects of the various parameters on the growth rate, let us analyze its two factors separately. Kinetic limit The first factor of the dispersion relation (15) under the kinetic limit (known as the strongly coupled limit), i.e. τ_m ω ≫ 1, reduces to equation (16), where v_c² = (ζ + 4μ/3)/(ρ_0 τ_m) is defined as the square of the velocity of the compressional viscoelastic mode. Equation (16) clearly represents a non-gravitating, rotating, damped viscoelastic Alfvén wave mode that is dynamically stable. In the absence of rotation (i.e. Ω_z = 0), equation (16) reduces to equation (17), which represents the magnetosonic mode in the compressional viscoelastic fluid. It is clear that in the absence of rotation, the usual magnetosonic mode is modified by the presence of the viscoelastic coefficients; thus, in the presence of uniform rotation, the phase speed of the MHD mode is significantly affected. The second factor of the dispersion relation (15) under the kinetic limit reduces to equation (18), the modified form of the dispersion relation obtained by Elmegreen (1989) in the presence of viscoelastic effects. If the parameters Γ_e, γ, Λ, and ω_c representing the clumpy cloudy medium are neglected, we obtain the same dispersion relation as obtained by Janaki et al.
(2011). It is very difficult to measure the real values of the viscoelastic coefficients for astrophysical systems. Thus, to study the effects of the various parameters on the growth rate of the Jeans instability, we express v_c in terms of the Coulomb coupling parameter Γ_j, the ion mass and the ion temperature. Following Dolai and Prajapati (2017), the simplified expression (19) for the compressional wave velocity v_c in terms of the coupling parameter Γ_j is obtained, where we have used the relation (20) between the memory parameter, the coupling parameter and the compressibility of the medium. Substituting the value of the compressional velocity in terms of the coupling parameter from (19) into (18) and simplifying the resulting equation, we obtain the dispersion relation (21), which shows the influence of the Coulomb coupling parameter and the heating and cooling functions on the Jeans instability of clumpy MCs. Using the Guillemin (1949) criterion for the sign of the roots, we obtain from the constant term of equation (21) the condition (22) for the onset of gravitational instability. Condition (22) is termed the Jeans instability criterion, and from it the expression (23) for the critical Jeans wavenumber follows. This condition of Jeans instability and the expression for the Jeans wavenumber are the modified form of the Chandrasekhar (1961) instability criterion c_s²k² − 4πGρ_0 < 0 (i.e. k < k_J = ω_j/c_s), modified here by the presence of the Coulomb coupling parameter Γ_j and the heating rate parameter Γ_e. It should be noted that the cooling rate parameter does not affect the Jeans instability criterion or the expression for the Jeans wavenumber. Therefore, it is clear from expression (23) that the Jeans wavenumber decreases with increasing values of Γ_j and Γ_e in the clumpy cloudy medium, which means that the coupling effects and the heating rate postpone the gravitational collapse. The above theoretical expressions are used to calculate the length scale and Jeans frequency for the considered systems. We now consider the real physical conditions of MCs to estimate the fundamental plasma parameters. For the region of the molecular cloud clumps, we take the following parameters: number density of H2 molecules n = 10³ cm^−3 (Bergin and Tafalla 2007), mass of H2 molecules m_i = 3.32 × 10^−24 g, ion temperature T_i = 10 K and Coulomb coupling parameter Γ_j = 2.0 (Prajapati 2022). With these values of the parameters, the calculated Jeans frequency is ω_j ≃ 0.52 × 10^−13 s^−1, and the corresponding Jeans wavenumber (using expression (23)) is k_j ≃ 0.46 × 10^−17 cm^−1. Hence, the Jeans length of the molecular clouds is λ_j ≃ 0.44 pc, which is comparable to the length scale of cores within the cloud clumps (Bergin and Tafalla 2007). It is also observed that the calculated value of the Jeans wavelength is larger than the value calculated by Dhiman and Mahajan (2023) for clumpy MCs with dissipative effects.
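The quoted numbers are easy to verify for the gravitational part. A short Python check with the stated clump parameters reproduces the Jeans frequency directly; the modified wavenumber comes from expression (23), which is not reproduced here, so below it is simply taken at its quoted value to recover the 0.44 pc length scale.

import math

G   = 6.674e-8    # gravitational constant, cgs units
n   = 1.0e3       # H2 number density, cm^-3 (Bergin and Tafalla 2007)
m_i = 3.32e-24    # H2 molecular mass, g

rho0 = n * m_i                                 # mass density, g cm^-3
omega_j = math.sqrt(4.0 * math.pi * G * rho0)
print(f"Jeans frequency ~ {omega_j:.2e} s^-1")     # ~0.52e-13 s^-1, as quoted

k_j = 0.46e-17    # cm^-1, the modified critical wavenumber quoted from (23)
lambda_j_pc = (2.0 * math.pi / k_j) / 3.086e18     # 1 pc = 3.086e18 cm
print(f"Jeans length    ~ {lambda_j_pc:.2f} pc")   # ~0.44 pc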
The growth rate of the Jeans instability describes how quickly small perturbations or fluctuations within the cloud grow and amplify over time, leading to the eventual collapse of the cloud. The exact behavior of the growth rate depends on the balance between gravitational forces and pressure forces within the cloud. The growth rate of the Jeans instability depends on several factors, including the density of the gas, the heating and cooling functions, its temperature, and the wavelength of the perturbations. To study the impact of the Coulomb coupling parameter and the heating and cooling rate parameters on the growth rate of the instability, we use non-dimensional parameters defined in terms of the Jeans frequency, with which the dispersion relation (21) assumes the non-dimensional form (25). The normalized growth rate of the Jeans instability is calculated from (25) for different values of the wavenumber and is depicted in the following figures. Figure 1 shows the normalized growth rate of the Jeans instability (Re(ω̂)) versus the normalized wavenumber k̂ for different values of the Coulomb coupling parameter: Γ_j = 0.5 for a weakly coupled plasma (WCP), 1.0 for a viscoelastic fluid, and 2.0 for a strongly coupled plasma (SCP). It is clear from the curve corresponding to Γ_j = 0.5 that, due to the weak coupling between the plasma particles, the growth rate for the WCP is larger than those of the SCP (Γ_j = 2.0) and the viscoelastic fluid (Γ_j = 1.0). Further, the growth rate and the instability region are reduced by an increase in the coupling parameter, and the region shrinks further in the case of the SCP with Γ_j = 2.0. This is because this case represents high-frequency oscillations with strong coupling between the plasma particles. The Coulomb coupling parameter thus stabilizes the growth rate of the Jeans instability. Figure 2 depicts the change of the normalized growth rate with the normalized wavenumber for different values of the cooling rate parameter, ω̂_c = 0.3, 0.6, 1.0, in the presence (Γ_e = 1) of cloud stirring of the molecular cloud. The variation reveals that the growth rate increases with increasing values of the cooling rate ω̂_c, which means that the onset of gravitational instability sets in earlier in a diffuse molecular cloud than in dense clouds, which can be validated from the analysis of Elmegreen (1989). Hence, the cooling rate parameter ω̂_c has a destabilizing character with respect to the self-gravitating collapse dynamics, increasing the rate at which structure formation takes place. It is also clear that the cutoff wavenumber at which the growth rate becomes zero remains the same for each value of the cooling parameter ω̂_c. In Fig.
3, the growth rate of the Jeans instability is depicted for different values of the heating rate parameter, Γ_e = 0.0, 1.0, 2.0, for fixed values of Γ_j = 2.0, ω̂_c = 0.2 and γ = 5/3. From the figure, it is observed that the values of the normalized growth rate of the gravitational instability are higher in the absence of the heating rate parameter (Γ_e = 0.0) than in its presence (Γ_e = 1.0, 2.0). Thus, it is concluded that the heating rate, which arises due to dense cloud stirring, has a stabilizing influence on the gravitational instability. In this case, the cutoff wavenumbers at which the growth rate becomes zero differ for the different values of Γ_e. In the absence of the heating rate parameter (Γ_e = 0.0) the cutoff wavenumber is much larger than in the remaining cases. The suppression of the growth rate for Γ_e ≠ 0 lies in the range of finite perturbation wavenumbers. Thus, we conclude that both the coupling and heating rate parameters delay the onset of collapse of MCs and structure formation. Hydrodynamic limit Let us now discuss the dispersion properties in the hydrodynamic limit, or weakly coupled limit, corresponding to τ_m ω ≪ 1. The weakly coupled limit refers to a regime where the interactions between individual particles in the plasma are relatively weak compared to their kinetic energies. In this limit, the first factor of the dispersion relation (15) reduces to equation (26), which represents a non-gravitating stable mode that includes the effects of the rotational frequency, the Alfvén wave velocity, and the viscoelastic coefficients. It is independent of the heating and cooling parameters of the clumpy MCs. The stability of the system represented by this equation can be discussed using the Routh-Hurwitz criterion. Accordingly, all the coefficients of the polynomial must be positive, which satisfies the necessary condition of stability. To satisfy the sufficient condition, all the principal diagonal minors must be positive. We have calculated the principal diagonal minors and found that they are all positive; hence the system represented by equation (26) is stable. The second factor of the dispersion relation (15) under the weakly coupled limit reduces to equation (27), the dispersion relation for a weakly coupled plasma in a rotating and magnetized clumpy molecular cloud, which is modified by the presence of the viscoelastic coefficients and the heating and cooling functions. The onset of gravitational instability occurs when condition (28) is satisfied, and the corresponding value of the critical Jeans wavenumber is given by expression (29). We find that this Jeans instability criterion is independent of the cooling and Coulomb coupling parameters and differs from the instability criterion obtained in the kinetic limit. In order to observe the Jeans instability and obtain real values of the critical wavenumber, one needs 2Γ_e − 1 > 0, or Γ_e > 0.5. Elmegreen (1989) suggested the significance of the heating rate parameter in cloud clump formation. He mentioned that regions without much star formation activity (i.e.
Γ_e = 0) will spontaneously clump into cloud complexes on a variety of scales. This scale ranges from the collisional mean free path up to the scale of the conventional Jeans instability in the interstellar gas disk. The small-scale clumping occurs at approximately the same rate as the large-scale clumping if the component clouds are mildly self-gravitating, with ω_c/ω_j ∼ 1. Once star formation begins and energy is available to stir the cloud population, thermal equilibrium becomes possible (Γ_e = 1). Under these circumstances, the Jeans instability is observed, which grows continuously for sufficiently large molecular cloud clumps. Let us now illustrate the growth rate of the Jeans instability in the hydrodynamic limit. The dimensionless form (30) of the dispersion relation (27) is derived using the non-dimensional parameters ω̂ = ω/ω_j, k̂ = kc/ω_j, τ̂_m = τ_m ω_j and ω̂_c = ω_c/ω_j; its leading terms read ω̂³ + (3ω̂_c + 0.16 Γ_j τ̂_m k̂²) ω̂² + k̂²(γ + 0.48 Γ_j τ̂_m ω̂_c) ω̂ + … = 0. Figure 4 shows the variation of the normalized growth rate (Re(ω̂)) with the normalized wavenumber k̂ for various values of the cooling rate parameter, ω̂_c = 0.3, 0.6, 1.0, in the presence of the heating function Γ_e = 1.0 and for relaxation time τ̂_m = 1.0. The nature of the curves indicates that the growth rate increases with increasing values of the cooling rate ω̂_c, the same trend as obtained in Fig. 2. Notably, in the kinetic limit (Fig. 2) the cutoff wavenumber at which the growth rates become zero is k̂_c1 = 1.25, while in the hydrodynamic limit it increases to k̂_c2 = 1.75. Thus, the cooling function ω̂_c has a destabilizing effect on the growth rate of the self-gravitational instability and encourages the gravitational collapse of clouds, initiating the structure formation process. Figure 5 shows the variation of the normalized growth rate (Re(ω̂)) with the normalized wavenumber k̂ for different values of the heating rate, Γ_e = 0.0, 1.0, 2.0, in the presence of the memory parameter (τ̂_m = 1.0) and the cooling rate parameter (ω̂_c = 0.2). The curves show that the growth rate of the gravitational instability decreases with increasing values of the heating rate and wavenumber. Hence, the heating rate stabilizes the collapse dynamics of the clumpy MCs. Additionally, both in the presence and in the absence of cloud stirring, the cutoff wavenumber is much larger in the hydrodynamic regime than in the kinetic regime. The normalized growth rate of the Jeans instability in the hydrodynamic regime is calculated from (30) for different values of the wavenumber and depicted in the figures. In Fig. 6, the normalized growth rate of the Jeans instability (Re(ω̂)) is plotted against the normalized wavenumber k̂ using (25) and (30) for the kinetic limit (with Γ_j = 2.0) and the hydrodynamic limit (with Γ_j = 0.5), respectively, for fixed values of ω̂_c = 0.2, Γ_e = 1.0, τ̂_m = 1.0 and γ = 5/3. From the curves, it is clear that the growth rate is larger for the WCP with Coulomb coupling parameter Γ_j = 0.5 than for the SCP with Γ_j = 2.0 in the presence of the cooling and heating functions. This might be because the weakly coupled limit exhibits a slower decrease in the growth rate of the unstable Jeans modes than the strongly coupled limit. The kinetic limit corresponds to the high-frequency limit (ωτ_m ≫ 1); thus the growth rate decreases faster than in the low-frequency hydrodynamic limit.
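Numerically, curves such as those in Figs. 4-6 are obtained by solving the dimensionless cubic for each wavenumber and taking the root with the largest real part, while the stability of a cubic ω̂³ + a₂ω̂² + a₁ω̂ + a₀ = 0 can equally be read off from the Routh-Hurwitz conditions a₂ > 0, a₀ > 0 and a₂a₁ > a₀. The Python sketch below illustrates the procedure; the coefficient values are deliberately toy placeholders, since the actual k̂-dependent coefficients are those of equation (30).

import numpy as np

def growth_rate(a2, a1, a0):
    # Largest real part among the roots of w^3 + a2 w^2 + a1 w + a0 = 0;
    # a positive value signals a growing (unstable) Jeans mode.
    return max(np.roots([1.0, a2, a1, a0]).real)

def routh_hurwitz_stable(a2, a1, a0):
    # Routh-Hurwitz conditions for a monic cubic.
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

# Toy coefficients only; in the paper, a2(k), a1(k), a0(k) are the
# k-dependent coefficients of the nondimensional dispersion relation (30).
for k in np.linspace(0.2, 2.0, 5):
    a2, a1, a0 = 1.0, k**2, k**2 - 1.0
    print(f"k={k:.2f}  growth={growth_rate(a2, a1, a0):+.3f}  "
          f"stable={routh_hurwitz_stable(a2, a1, a0)}")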
Conclusions Radiative heating and cooling mechanisms play a crucial role in the energy transfer and subsequent gravitational collapse in dense clumps of MCs. This paper has studied the influence of the heating and cooling parameters on the gravitational instability of a rotating and magnetized strongly coupled plasma in the clumps of MCs. A dispersion relation for the considered configuration has been derived analytically using normal mode analysis and discussed in the hydrodynamic and kinetic limits. In both cases, the Jeans criteria are modified by the presence of the Coulomb coupling parameter and the heating rate, but uniform rotation and the magnetic field do not affect the instability criterion. However, the presence of the magnetic field and uniform rotation significantly modifies the compressional Alfvén viscoelastic mode. Furthermore, we calculated the growth rate of the instability against the wavenumber for both cases and found that the normalized growth rate decreases with increasing wavenumber. The graphical illustrations show that the Coulomb coupling parameter (Γ_j) and the heating rate parameter (Γ_e) stabilize the growth rate of the Jeans instability, while the cooling rate parameter (ω_c) has a destabilizing character with respect to the self-gravitating collapse dynamics. The dynamical stability of the system has been discussed using the Routh-Hurwitz criterion. The variation of the growth rate has also been plotted in the hydrodynamic limit in order to study the effects of the heating and cooling parameters and to compare the growth rates with the kinetic limit. We found that the growth rate decreases faster in the kinetic limit than in the hydrodynamic limit; thus, the collapse proceeds more slowly in the kinetic, or strongly coupled, limit. Strong coupling effects are ubiquitous in many astrophysical systems, such as white dwarfs, neutron stars and some regions of MCs, which exist in strongly coupled states. The numerical parameters are calculated for clumpy MCs: the Jeans frequency is ω_j ≃ 0.52 × 10^−13 s^−1 and the Jeans wavenumber is k_j ≃ 0.46 × 10^−17 cm^−1. The Jeans length for the MCs is calculated to be λ_j ≃ 0.44 pc, which is comparable to the length scale of cores within cloud clumps. The present results are helpful for discussing gravitational collapse in dense clumps of MCs consisting of strongly coupled magnetized plasma with rotation. The inclusion of finite electrical resistivity and the Hall effect is a possible future extension of the present work.
6,769
2023-10-01T00:00:00.000
[ "Physics" ]
Schrödinger functional boundary conditions and improvement for N > 3 The standard method to calculate non-perturbatively the evolution of the running coupling of an SU(N) gauge theory is based on the Schrödinger functional (SF). In this paper we construct a family of boundary fields for general values of N which enter the standard definition of the SF coupling. We provide spatial boundary conditions for fermions in several representations which reduce the condition number of the squared Dirac operator. In addition, we calculate the O(a) improvement coefficients for N > 3 needed to remove boundary cutoff effects from the gauge action. After this, the residual cutoff effects on the step scaling function are shown to be very small even when considering non-fundamental representations. We also calculate the ratio of Λ parameters between the \overline{MS} and SF schemes. Introduction Asymptotically free theories, such as gauge theories coupled to fermionic matter fields, 1 are characterized by having a coupling which becomes small at short distances. This property enables reliable perturbative calculations of physical quantities at large energies. A dimensionful scale is dynamically generated through the process of dimensional transmutation. Typically, this scale is associated in perturbation theory with the Λ parameter, i.e. a multiplicative constant of the integrated beta function. The non-perturbative evolution of the running coupling in different gauge field theories from the low energy sector to the high energy regime has been the central goal of many studies. The standard approach is the use of a finite size scaling technique based on the Schrödinger functional (SF), in which the size of the system is associated with the renormalization scale [1]. This method was successfully used to calculate the scale evolution of the coupling in the SU(2) [2] and SU(3) [3] Yang-Mills theories and in QCD [4]. Motivated by ideas of physics beyond the standard model (BSM), in the last decade this method has also been applied to study the SU(4) pure gauge theory [5] and several theories containing matter transforming under higher dimensional representations of the gauge group or a large number of fermions in the fundamental representation [6][7][8][9][10][11][12][13][14]. However, for lattices accessible in typical numerical simulations, SF schemes are affected by lattice artifacts arising from the bulk and from the boundaries of the lattice. These can be removed, following Symanzik's improvement program, by adding the corresponding counterterms to the action in the bulk and at the boundaries. Symanzik's program was successfully carried out in [1, 15-18], where the improvement coefficients necessary to remove O(a) effects from the coupling were calculated in perturbation theory. For theories beyond QCD the situation is still inconclusive.
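Since step scaling is central to what follows, it may help to see the recursion it implements: one measures how the coupling changes when the box size L is scaled by a factor s (usually s = 2), and chains these steps to cover large scale ranges. The Python sketch below iterates the one-loop perturbative step as a stand-in for the nonperturbatively measured step scaling function; the starting coupling and parameter values are placeholders for illustration.

import math

def step_one_loop(u, s=2.0, N=3, nf=0):
    # One step u(L) -> u(sL) of u = gbar^2 from the one-loop beta function;
    # a stand-in for the nonperturbative step scaling function sigma(u).
    b0 = (11.0 * N / 3.0 - 2.0 * nf / 3.0) / (4.0 * math.pi) ** 2
    return 1.0 / (1.0 / u - 2.0 * b0 * math.log(s))

u = 1.0                      # coupling at some reference volume L0 (assumed)
for k in range(6):           # six doublings of the box size
    u = step_one_loop(u)
    print(f"L = {2 ** (k + 1):2d} L0 : gbar^2 = {u:.3f}")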
A program for the nonperturbative study of SU(N) gauge theories in the large N limit [19] started in the last decade, driven by interest from string theory. As part of that program, the ratio Λ_MS/√σ between the lambda parameter in the MS scheme and the string tension σ was calculated in [5] for the SU(4) theory, aiming to obtain extrapolations of the N dependence of Λ_MS/√σ in the large N limit. There, the dominant systematic errors are due to the lattice artifacts present when using an unimproved action. For the case of theories with non-fundamental fermions, although the O(a) improvement coefficients are known, the remaining higher order cutoff effects have been reported to be very large if the standard setups, which work fine for QCD, are naively exported. In the last few years a new renormalized coupling based on the gradient flow (GF) has been proposed for step scaling studies [20][21][22][23][24]. Compared to the original SF coupling based on a background field (see below), the gradient flow coupling has the advantage that considerably smaller statistics are required to obtain a similar accuracy. However, there are some situations where the original SF coupling is superior to the gradient flow. First of all, it has been observed that while the GF coupling works better at large physical volumes, at small volumes the SF coupling fares better than the gradient flow [25]. Also, in the pure gauge theory, relevant for the large N limit, the generation of configurations is so fast compared to the measurement of the gradient flow that the reduced accuracy can be overcome with increased statistics. In addition, in BSM lattice studies one is often interested in the existence of a nontrivial infrared fixed point. The value of the coupling constant g at the fixed point is a renormalization scheme dependent quantity, and it differs between Schrödinger functional and gradient flow schemes. Therefore, it is possible that in a specific scheme the coupling is too strong at the fixed point, or is on the wrong side of a bulk phase transition, even if the fixed point is visible in other schemes. The only study we are aware of that compares these two methods with the same action found the gradient flow coupling to be about twice the Schrödinger functional coupling [26]. Moreover, due to the property of continuum reduction [27], at large N it is possible to do simulations at small lattice volumes, where the SF coupling is known to perform well. This work completes the Schrödinger functional framework to study the phase diagram of strongly interacting gauge theories [28] with any N or representation. In the paper we generalize the boundary conditions for the gauge fields in the SF to obtain a family of schemes useful for arbitrary N with a good signal to noise ratio in lattice simulations. Moreover, the O(a) improvement coefficients are obtained to one loop order in perturbation theory. For this, we calculate the one loop running coupling in our family of SF schemes, following closely the discussions in [1,15] and adapting them to arbitrary N. The values obtained for the boundary improvement coefficients are valid for any choice of Dirichlet boundary conditions at the temporal boundaries. With this knowledge we relate the Λ parameters between our SF schemes and the more widely used MS scheme.
Another appealing property of the present family of schemes is that, together with an appropriate choice of spatial boundary conditions for the fermions, they lead to a setup for which higher order cutoff effects due to fermions are very small even for non-fundamental representations. Preliminary results of this work have been published in [29]. The paper is organized as follows: in section 2 we recall some concepts concerning the Schrödinger functional and collect a set of formulas useful for the remaining discussion. In section 3 the generalized boundary conditions are provided. The calculation of the improvement coefficients is presented in section 4, where we also discuss the effect that the fermionic spatial boundary conditions have on the residual higher order cutoff effects. The matching of the Λ parameters to the MS scheme is done in section 5. We conclude in section 6. Schrödinger functional In this section we briefly recall the ideas introduced in [1,15,30] and collect the expressions necessary for the subsequent discussion. We refer the interested reader to the original articles for further detail. The Schrödinger functional is the Euclidean propagation amplitude between a field configuration C at time 0 and another field configuration C′ at time T. It has a path integral representation, with Dirichlet boundary conditions specified for the gauge fields U and the fermion fields ψ and ψ̄. In the present work, we are interested in the O(a) improved Wilson action. The pure gauge part is the standard SU(N) Wilson gauge action. The spatial components of the gauge fields at the temporal boundaries (t = 0 and t = T) satisfy inhomogeneous Dirichlet boundary conditions. The boundary gauge fields W_k and W′_k can be parametrized as W_k(x) = exp(aC_k(η)), W′_k(x) = exp(aC′_k(η)), (2.5) where C_k(η) and C′_k(η) are taken to be homogeneous, abelian and spatially constant [1], and they depend on a dimensionless parameter η. A specific form for these boundary matrices is derived in section 3 for gauge group SU(N) with arbitrary N. In the spatial directions the gauge fields are taken to be periodic, U_µ(t, x) = U_µ(t, x + Lk̂). The weight w(P) = 1 except for the spatial plaquettes at the boundaries, for which w(P) = 1/2. Due to the particular choice of boundary conditions for the gauge fields, the spatial boundary plaquettes give only a constant contribution to the action and can be ignored. It is well known that within SF schemes the mere presence of temporal boundaries constitutes an extra source of lattice artifacts. Removal of these effects was first studied in [1,16,17], where it was shown that the O(a) lattice artifacts coming from the boundaries can be canceled by tuning the weight w(P) = c_t(g₀) for the temporal plaquettes attached to the boundaries, where c_t is the coefficient of a dimension 4 counterterm localized at the boundaries [1]. The perturbative expansion of c_t reads c_t(g₀) = 1 + c_t⁽¹⁾g₀² + O(g₀⁴), with c_t⁽¹⁾ = c_t⁽¹,⁰⁾ + c_t⁽¹,¹⁾N_f. The fermionic part of the action is built from the improved Wilson-Dirac operator D_WD. The operator F_µν(x) appearing there is the symmetrized lattice field strength tensor, σ_µν = (i/2)[γ_µ, γ_ν], and the operators D_µ and D*_µ are the covariant forward and backward derivatives defined in eq. (A.8). The improvement coefficient c_sw can be determined perturbatively [16,31] and non-perturbatively [32,33]. To the lowest order in perturbation theory, c_sw = 1 [18].
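To make the parametrization in eq. (2.5) concrete, the following sketch builds a boundary link from a traceless phase vector. It assumes the common SF convention C_k = (i/L) diag(φ₁, …, φ_N) with Σᵢφᵢ = 0; the specific phase vector used is illustrative and is not the scheme chosen in this paper.

```python
import numpy as np

def boundary_link(phi, L, a=1.0):
    """Boundary link W_k = exp(a C_k) for an abelian, spatially constant
    boundary field C_k = (i/L) diag(phi_1, ..., phi_N) (assumed convention)."""
    phi = np.asarray(phi, dtype=float)
    assert abs(phi.sum()) < 1e-12, "phases must sum to zero for det W = 1"
    # the exponential of a diagonal matrix acts elementwise on the diagonal
    return np.diag(np.exp(1j * a * phi / L))

# Illustrative traceless phase vector for N = 3 (not the paper's choice)
phi = np.pi / 3.0 * np.array([-1.0, 0.25, 0.75])
W = boundary_link(phi, L=8.0)
print(np.allclose(W.conj().T @ W, np.eye(3)))  # unitary: True
print(np.isclose(np.linalg.det(W), 1.0))       # det = 1, as required for SU(3)
```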
The removal of O(a) effects arising from the interplay between fermions and the SF boundaries requires the addition of another dimension 4 counterterm at the boundaries. Since this does not contribute to the observables studied further in this work at the present order in perturbation theory, we ignore it from now on and refer the reader to the original literature [17] for further details. The fermionic fields satisfy boundary conditions built from the projectors P± = (1/2)(1 ± γ₀). The boundary conditions in the spatial directions are periodic up to a phase [30] (2.11). The phase is usually chosen so that the smallest eigenvalue of the squared Dirac operator is large [30]. In this situation, the condition number (i.e. the ratio between the highest and lowest eigenvalues) is small, which improves the speed of the known inversion algorithms. However, the value of θ also has an effect on the convergence of the 1-loop perturbative coupling to its continuum limit. This is discussed in subsection 4.3. The boundary conditions for the gauge fields in eqs. (2.4) and (2.5) induce a constant chromo-electric background field V_µ(x) in space-time. The variable η in the boundary fields eq. (2.5) parametrizes a curve of background fields. A renormalized coupling can be defined [1] as the response of the system to a deformation of the background field, with the effective action Γ = −ln Z. The normalization constant is defined so that g² = g₀² to the lowest order of perturbation theory. One of the central quantities in numerical simulations is the step scaling function σ(u) = g²(2L)|_{u=g²(L)} (2.14). This is required for reconstructing non-perturbatively the scale evolution of the running coupling. In the presence of a lattice regulator, the deviations of the lattice counterpart Σ(u, L/a) of the step scaling function from the continuum σ(u) can be used to monitor the size of cutoff effects (see subsection 4.3). 1-loop expansion The renormalized coupling eq. (2.12) is suitable for both perturbative and non-perturbative evaluation. The 1-loop calculation of eq. (2.12) was done for the pure gauge theory in [1,3], and extended to accommodate fermions in [30]. 2 Non-fundamental fermions have been considered in [34][35][36]. In the present work we extend the previous calculations to arbitrary N. Although the main strategy of the calculation follows closely previous works, some care has to be taken to generalize those ideas without complicating the calculation. In the present subsection we collect some formulas necessary for the subsequent discussion and leave all technical details of the calculation to appendices A and B. After going through the gauge fixing procedure [1], the effective action is expanded to 1-loop order; here Γ₀ is the classical action. The 1-loop term Γ₁ in the effective action can be written in terms of ∆₀, ∆₁ and ∆₂, the quadratic ghost, gluonic and fermionic operators, respectively. The explicit forms of the operators ∆ᵢ are given in appendix A. The renormalized coupling eq. (2.12) is also expanded in perturbation theory, g²(L) = g₀² + p₁(L/a)g₀⁴ + O(g₀⁶). According to eq. (2.15), the 1-loop coefficient receives independent contributions from the ghost, gauge, and fermionic fields, p₁ = p₁,₀ + N_f p₁,₁ (2.19), which can be calculated separately. The gauge part is given in terms of h₀(L/a) and h₁(L/a), whose calculation has been described in great detail for SU(2) in [1] and has been done for N = 3 in [3]. In appendix A we give the generalization of the calculations to N ≥ 3.
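Since the step scaling function and the Λ-parameter matching below both involve the 1-loop beta function coefficient, here is a minimal sketch computing b₀ and σ₁ = 2b₀ ln 2 for N_f fermions in a representation R of SU(N). The group invariants and the normalization b₀ = [(11/3)C₂(A) − (4/3)T(R)N_f]/(4π)² are the standard ones and are assumed to match the paper's conventions.

```python
import math

# Standard normalization T(R) of common SU(N) representations (assumed convention)
def T_R(rep, N):
    return {"fund": 0.5, "adj": float(N),
            "sym": (N + 2) / 2.0, "asym": (N - 2) / 2.0}[rep]

def b0(N, Nf=0, rep="fund"):
    """1-loop beta function coefficient, beta(g) = -b0 g^3 + ..."""
    C2_adjoint = N  # quadratic Casimir of the adjoint representation
    return (11.0 / 3.0 * C2_adjoint - 4.0 / 3.0 * T_R(rep, N) * Nf) / (4 * math.pi) ** 2

def sigma1(N, Nf=0, rep="fund"):
    """1-loop continuum step scaling coefficient: sigma(u) = u + sigma1 u^2 + ..."""
    return 2.0 * b0(N, Nf, rep) * math.log(2.0)

print(b0(3))            # pure gauge SU(3): 11/(16 pi^2) ~ 0.0697
print(sigma1(4, Nf=2))  # SU(4) with 2 fundamental flavors
```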
The calculation of the fermionic part p₁,₁(L/a) is straightforward to generalize to any boundary fields and to any representation of the gauge group: one just needs to replace the link variables in the Wilson-Dirac operator eq. (2.8) with their counterparts in the desired representation. We therefore refer the interested reader to the original paper [15]. The continuum and lattice step scaling functions are given to first order in perturbation theory by σ(u) = u + σ₁u² + O(u³) and Σ(u, L/a) = u + Σ₁(L/a)u² + O(u³), with σ₁ = 2b₀ ln(2). The 1-loop coefficient b₀ of the beta function is given in an arbitrary representation by b₀ = [(11/3)C₂(A) − (4/3)T(R)N_f]/(4π)², where the color group invariants C₂(R) and T(R) are defined as usual in the representation R of SU(N). Similarly to eq. (2.19), the step scaling functions can be separated into a gauge and a fermionic part. This allows us to study cutoff effects due to gauge and fermion fields independently. Boundary fields for N > 2 In this section we present a generalization of the boundary fields for N > 2. The selection of the boundary fields is only limited by the requirement that there is a unique and stable classical solution of the system. In practice, this limits us to abelian boundary fields W_k and W′_k which can be written as in eq. (2.5). The classical solution, i.e. the background field V_µ(x), can then be written down explicitly. It is shown in [1] that the solution V_µ(x) is absolutely stable if the vectors φ and φ′ satisfy eq. (3.2) and a set of additional conditions. These conditions define a fundamental domain, which is an irregular (N − 1)-simplex with vertices at the points X₁, …, X_N (3.11). To define a renormalized coupling we can choose any two different points inside the fundamental domain to set up the boundary fields. A different choice leads to a different renormalization scheme, and different schemes can be matched to each other using perturbation theory (see section 5). However, there are practical considerations in selecting the boundary fields, namely the signal to noise ratio in the Monte Carlo simulations and the size of higher order lattice artifacts. Our choice is based on the attempt to maximize the signal to noise ratio, as in practice the minimization of the higher order lattice artifacts often leads to a low signal, which negates the gains of a better continuum extrapolation. To obtain a maximal signal strength we have two competing requirements: we need to twist the gauge fields as much as possible while staying away from the boundaries of the fundamental domain. This is because the coupling is proportional to the twist and because proximity to the instability of the classical solution increases the noise. According to these considerations we choose φ to be the midpoint of the line connecting X₁ and the centroid of the fundamental domain. To determine φ′ we find a transformation which maps the fundamental domain to itself and mirrors the vertices. First we define a simple map R_{i,j}(φ) that reflects the points of the fundamental domain with respect to an (N − 2)-dimensional hyperplane; the hyperplane goes through the vertices X_k, k ≠ i, j, and intersects the line connecting X_i and X_j at its midpoint. For N > 3 the function R_{i,j}(φ) is not in general a mapping from the fundamental domain to itself, but we can define a composite mapping M(φ) which is a mapping from the fundamental domain to itself and, written in components, has a simple form. To define the coupling we choose a one parameter curve of background fields φ + t(η). We select it in a way that the results are equivalent to those of the SU(3) theory given in [3], 4 i.e.
we select t(η) so that it changes sign under the mapping M(φ). In the lattice computations it is advisable not to fix t(η) beforehand, but to measure a complete (N − 1)-dimensional basis which can be used to construct a generic curve. Each curve corresponds to a different renormalization scheme. Fermionic spatial boundary conditions Recalling the spatial boundary conditions for the fermion fields, eq. (2.11), we still have to choose a particular value for the angles θ_k. For simplicity, we consider the same angle in all spatial directions, θ = θ_k, k = 1, 2, 3. We then fix θ, following the criteria introduced in [15], so that the minimum eigenvalue λ_min of the fermion operator ∆₂ is as large as possible. This leads to a small condition number, which optimizes the speed of the numerical inversion of the operator. The values of θ leading to a maximal λ_min depend on the background field and also on the fermion representation being considered. For the fundamental representation, the profile of smallest eigenvalues λ_min as a function of θ is shown in figure 2 for the different gauge groups considered in this work, excluding the case of SU(2). Although the maximum of λ_min is achieved at different values of θ for every gauge group considered, the choice θ = π/2 is always close to the maximum and hence leads to a small condition number. For homogeneity in the definition of a renormalization scheme in the subsequent calculations, we fix θ = π/2 for all values of N. As we will show in subsection 4.3, this choice of θ together with the family of background fields defined in this work leads to a setup for which higher order cutoff effects are highly suppressed even for non-fundamental representations. Although the choice of θ = π/2 is made considering the fundamental representation, this value also leads to reasonably small condition numbers for the symmetric and antisymmetric representations. For the adjoint representation the smallest condition number is obtained for θ = 0. However, we decide to stick to the choice θ = π/2 also in this case, since it leads to a situation where higher order lattice artifacts are highly reduced. 5 For the case of SU(2), we choose θ = 0 for the fundamental representation but keep θ = π/2 for the symmetric/adjoint. In order to extract the coefficients r_{n,i} as accurately as possible we first evaluate p₁,₀(L/a) and p₁,₁(L/a), adapting the strategies in [1,15] to general N (see appendices A and B for details on the calculation). Once the series of data for p₁,ᵢ(L/a) is produced, the coefficients r_{n,i} can be extracted using a suitable fitting procedure. In the pure gauge case, we calculated p₁,₀(L/a) for values of L/a ∈ {6, 8, …, 100} and then used the "Blocking" method described in [39] to obtain the values of the asymptotic coefficients. The calculation was done using floating point precision with 50 decimal places for 2 ≤ N ≤ 8 and with quadruple precision for N > 8. To control the error we compared the results and errors obtained with different levels of accuracy. Since the asymptotic form eq. (4.1) is expected to be valid as a/L → 0, we consider only values of L/a ∈ [28, 100] when extracting the coefficients r₀,₀ and r₀,₁. This choice produced the most reliable values for the coefficients and their relative errors. As a check we also reproduced the known value of s₀,₀ = 2b₀,₀ (the pure-gauge part of 2b₀) to a similar degree of accuracy.
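To illustrate the kind of fit involved, suppose the asymptotic series (4.1) has the standard form p₁,ᵢ(L/a) ≃ Σₙ (r_{n,i} + s_{n,i} ln(a/L))(a/L)ⁿ (an assumption about its exact shape). The leading coefficients can then be extracted by a linear least-squares fit over a window of large L/a; the "Blocking" method of [39] is more refined, and the data below are synthetic:

```python
import numpy as np

# Synthetic p_1(L/a) data for illustration only
L_over_a = np.arange(28, 101, 2)
x = 1.0 / L_over_a                  # a/L
r0, s0, r1 = 0.37, 0.14, -0.25      # made-up "true" coefficients
p1 = r0 + s0 * np.log(x) + r1 * x

# Design matrix for the truncated ansatz p1 = r0 + s0*ln(a/L) + r1*(a/L)
A = np.column_stack([np.ones_like(x), np.log(x), x])
coeffs, *_ = np.linalg.lstsq(A, p1, rcond=None)
print(coeffs)  # recovers [r0, s0, r1] on this noiseless toy data
```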
Concerning the fermionic part, values for p₁,₁(L/a) were produced at quadruple precision in the range L/a ∈ [4, 64] (for even and odd values) for all gauge groups and representations considered in this work. This was enough to obtain the asymptotic coefficients in eq. (4.1) to very high precision (see tables 2 and 3). Results In table 2 we give the values of the coefficients r₀,₀ and r₁,₀. From r₁,₀ we can extract the gauge contribution c_t⁽¹,⁰⁾ to the boundary improvement coefficient; the corresponding expression involves the quadratic Casimir operator C₂(R) in the representation R. For completeness, we also include here the value of the fermionic part c_t⁽¹,¹⁾. This was calculated for the fundamental representation in [15] and later extended to other representations in [35]. In the present work we have been able to reproduce the value of c_t⁽¹,¹⁾ with similar accuracy, which is a further check on the correctness of the whole calculation. Residual cutoff effects The determination of the gauge and fermion contributions to c_t allows the O(a) boundary artifacts to be removed, after which the residual cutoff effects δ₁,₁ on the 1-loop step scaling function are very small. This is true for the family of background fields defined in section 3 and for the value of θ chosen in subsection 4.1. A different choice of parameters, however, can lead to very large residual cutoff effects even after boundary O(a) improvement is implemented. 6 In order to check this, we study the dependence of δ₁,₁ on the parameter θ for different values of N in the range θ ∈ [0.45π, 0.57π]. The cases of N = 3 and 6 are displayed in figure 6; other gauge groups show a very similar behavior. The residual cutoff effects δ₁,₁ depend strongly on θ. Clearly, a poor choice of θ can lead to situations with very large higher order cutoff effects. 7 It is remarkable that the value θ = π/2, established in subsection 4.1 to obtain a condition number as small as possible, also leads to a situation where higher order cutoff effects are highly suppressed. A very similar picture is observed when considering any of the 2-index representations, for which cutoff effects have been reported to be very large if particular care is not taken in the choice of background field (BF) [34][35][36]. The magnitude of δ₁,₁ for the 2-index representations depends strongly on the angle θ, in a very similar way as shown in figure 6 for the fundamental representation. It is then possible to tune θ to minimize cutoff effects without the need of modifying the BF [35,36]. What is remarkable about the family of background fields proposed in this work is that for the fundamental, symmetric and antisymmetric representations, values of θ which lead to small condition numbers also lead to small higher order lattice artifacts in the step scaling function. This is not true for the adjoint representation since, as discussed in subsection 4.1, the condition number is minimized for θ = 0. It is also remarkable that cutoff effects for all the representations considered are minimized for the same value θ = π/2. Matching the Λ parameter to MS In this section we calculate the ratio Λ_SF/Λ_MS of the Λ parameters in our family of SF schemes and in the MS scheme. This relation is essential for obtaining the ratio Λ_MS/√σ from SF simulations. We provide numerical values of Λ_SF/Λ_MS for the pure gauge theories and for the theories with 2 fundamental fermions. For completeness, we derive an expression (see eq. (5.12)) for the ratio Λ_SF/Λ_MS as a function of N, N_f and the representation R, which might be useful also for future BSM studies using the SF.
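The conversion just introduced follows from a standard renormalization group argument: if the couplings are related at 1-loop by α_X = α_Y + c₁α_Y² + …, then Λ_X/Λ_Y = exp(c₁/(8π b₀)) exactly, since Λ is a renormalization group invariant. The normalization conventions here (β(g) = −b₀g³ + …, α = g²/4π) are the usual ones and are an assumption about the paper's definitions; the numerical value of c₁ below is made up for illustration:

```python
import math

def lambda_ratio(c1_alpha, b0):
    """Exact ratio Lambda_X / Lambda_Y given the 1-loop relation
    alpha_X = alpha_Y + c1_alpha * alpha_Y**2 + ...
    Equivalently, g_X^2 = g_Y^2 + (c1_alpha/(4*pi)) * g_Y^4, which gives
    Lambda_X / Lambda_Y = exp(c1_alpha / (8*pi*b0))."""
    return math.exp(c1_alpha / (8.0 * math.pi * b0))

# Illustrative only: made-up c1(1) for pure gauge SU(3)
b0_su3 = 11.0 / (16.0 * math.pi ** 2)
print(lambda_ratio(c1_alpha=1.2, b0=b0_su3))
```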
The Λ parameter is a renormalization group invariant, scheme dependent quantity, given in a generic scheme X by the usual integral of the beta function. It is a dimensionful scale dynamically generated by the theory. In subsection 4.2.2 we have performed the computation of the SF coupling g_SF in the Schrödinger functional scheme to one loop order in perturbation theory, i.e. we have calculated the renormalized coupling as an expansion in terms of the bare coupling g₀, g²_SF(L) = g₀² + p₁(L/a)g₀⁴ + O(g₀⁶), (5.2) where the continuum extrapolation of p₁(L/a) yields eq. (5.3). To be able to compare the results at different values of N, we are interested in a relation between α_SF = g²_SF/4π and some scheme where N is only a parameter. For that we choose the usual MS scheme, defined at infinite volume and at high energies. The relation between the running couplings in the two schemes can be written as an expansion α_SF = α_MS + c₁(s)α²_MS + … (5.4), where s is a scale parameter and the cᵢ(s) are the coefficients relating the couplings in the two schemes at each order in perturbation theory. The relation between the Λ parameters in the SF and MS schemes is given by eq. (5.5), where c₁(1) is the coefficient of the 1-loop relation (5.4). Note that eq. (5.5) is an exact relation even though it depends on the 1-loop coefficient relating the couplings in two different schemes. For determining the coefficient c₁(s) in eq. (5.4), we first use the known relation between α_MS and the bare coupling α₀ [17,30,33,40] at 1-loop. The 1-loop coefficient d₁(s) of this relation is given for generic N and fermionic representation R in terms of the coefficient k₁ of the gauge part, which is taken from [17,30,33], and the coefficient K₁, a representation independent function of the tree level coefficient c_t⁽⁰⁾, with r₀,ᵢ being the continuum coefficients in the series (4.1). Knowing this, the relation between Λ parameters in eq. (5.5) can be given as a function of the parameters N, N_f and T_R and of the coefficients r₀,ᵢ (eq. (5.12)). Finally, in table 5 we collect the values of the ratio of Λ parameters for the schemes studied in this work, for the pure gauge theory (N_f = 0) and for 2 flavors of fundamental fermions. 9 Ratios of Λ parameters for 2-index representations can be recovered using eq. (5.12) and the corresponding coefficients from table 3. Table 5. Ratios between Λ parameters in the SF and MS schemes, for the pure gauge theory and for 2 flavors of fundamental fermions. Conclusions We have studied the Schrödinger functional boundary conditions and the perturbative O(a) improvement for SU(N) gauge theories with general N. The improvement coefficient c_t is obtained for all values of N. Additionally, we provide the matching between the SF and MS schemes for a wide range of theories, including fermions in various representations. This enables a precision study of the coupling and the determination of Λ_MS in the large N limit. The fermionic twisting angle θ is also studied, and we find that the value θ = π/2 is a good compromise between simulation speed and the minimization of the O(a²) lattice artifacts in the perturbative 1-loop lattice step scaling function. The values of the different variables with our choice of background field and basis for the SU(N) generators are shown in appendix B. In the following we work in lattice units, i.e. the lattice spacing a = 1. Additionally, repeated Latin indices a, b, c, … are not summed over, and repeated Greek indices α, β, γ, … are always summed over unless otherwise stated in the formula.
Latin indices run over 1, 2, 3 and Greek ones over 0, 1, 2, 3. The operators we are interested in are defined below; there is no summation over µ on the r.h.s. of eq. (A.2). The star product in eq. (A.2), which maps an N×N matrix M and an SU(N) matrix X to an SU(N) matrix, is defined in eq. (A.4). In eq. (A.3) the operator D_WD is the same as in eq. (2.8) with c_sw = 1. The first step is to find a suitable basis for the SU(N) generators, namely a basis that is invariant under the star product defined in eq. (A.4). In practice we want to find generators X_a that satisfy cosh(G_{0k}) X_a = χᶜ_a X_a and sinh(G_{0k}) X_a = χˢ_a X_a (A.5), with coefficients χᶜ_a and χˢ_a. The hyperbolic sine and cosine of the non-zero elements of the field strength tensor take a simple explicit form. A basis that satisfies eq. (A.5) for the non-diagonal generators is provided by the ladder operators, where n and m are the matrix indices and a(j, k) is the color index; the properties of a(j, k) are given in table 6. The generators X_a are normalized as Tr(X_a X_b) = −(1/2)δ_{a,b}. The diagonal generators can be chosen in any way that satisfies eq. (A.5). The boundary conditions generate a background field V_µ(x), which enters the operators ∆_s. Table 6. The values of the color index a(j, k) as a function of the dummy indices j and k: for 1 ≤ a ≤ (N² − N)/2 the generators X_a have a non-zero element in the upper triangle, and for (N² − N)/2 < a ≤ N² − N in the lower triangle. The next step is to calculate the covariant derivatives with the background field V_µ(x) when q(x) = q_a(x)X_a is proportional to a generator X_a. The covariant derivatives can then be written in a general form in terms of the three-momentum p. Next we show how one can calculate the determinants of the operators ∆_s. In [1] it has been shown that the determinant can be evaluated for an operator ∆ that satisfies a recursion relation in terms of matrices A, B and C, together with an eigenvalue equation; we will use these properties of the ∆_s operators below. Since the operator ∆₀ is invariant under spatial translations and constant diagonal gauge transformations, its eigenfunctions are of the form ω_a(x). Operating with ∆₀ on ω_a(x) shows that ∆₀ is similar to the operator in eq. (A.14), and thus the strategy shown there can be used, applying eq. (A.15) with ξ = 0. We then move on to the more challenging case of ∆₁. The eigenfunctions of the operator ∆₁ have the general form of eq. (A.25), where the normalization matrix R^a_{µν}(t), added to ensure that the matrices A^a_{µν}(t), B^a_{µν}(t) and C^a_{µν}(t) in the recursion relation are real, is a diagonal 4 × 4 matrix with R^a_{00}(t) = −i and R^a_{kk}(t) = e^{i(p_k + f_a(t))/2}, k = 1, 2, 3 (A.26). Again we can operate with ∆₁ on the eigenfunction eq. (A.25), which yields eq. (A.27) with the matrices A^a_{µν}(t), B^a_{µν}(t) and C^a_{µν}(t); we have used a short-hand notation here. The operator ∆₁ is also similar to the case in eq. (A.14), and the same strategy can again be exploited. Additionally, the boundary conditions of ψ^a_µ(t) in eq. (A.27) are given in eq. (A.33). With this we can first calculate the determinant of ∆₁ in the more general case where the boundary conditions are given by eq. (A.33). Setting ξ = 0 in eq. (A.15), and with the help of these equations, we find F^a_{µν}(t), which has the property (A.37), where v^a_µ are the first nonzero components of ψ^a_µ(t). The matrix F^a_{µν}(t) is written in terms of the projection operator P_{µν}. With F^a_{µν}(t) we are able to construct a matrix M^a_{µν} that couples v^a_µ from eq. (A.37) and the boundary condition.
We can then move on to the case of ∆₁ where a > N² − N, i.e. for diagonal generators X_a, and when p = 0. In this case the boundary conditions are given by eq. (A.32), and the ψ^a_0(t) and ψ^a_k(t) components decouple, since the matrices A^a_{µν}(t), B^a_{µν}(t) and C^a_{µν}(t) are diagonal. B Chosen basis for the diagonal generators and the values of the coefficients which depend on the background field In appendix A we showed how the 1-loop coupling can be calculated for a generic background field and basis of generators. Here we specify the basis that we have selected, as well as the values of the coefficients χᶜ_a, χˢ_a and f_a(t). We have chosen a basis given by
7,837.2
2014-11-01T00:00:00.000
[ "Physics" ]
Aberrant Expression of Histone Deacetylase 4 in Cognitive Disorders: Molecular Mechanisms and a Potential Target Histone acetylation is a major mechanism of chromatin remodeling, contributing to epigenetic regulation of gene transcription. Histone deacetylases (HDACs) are involved in both physiological and pathological conditions by regulating the status of histone acetylation. Although histone deacetylase 4 (HDAC4), a member of the HDAC family, may lack HDAC activity, it is actively involved in regulating the transcription of genes involved in synaptic plasticity, neuronal survival, and neurodevelopment by interacting with transcription factors, signal transduction molecules and HDAC3, another member of the HDAC family. HDAC4 is highly expressed in brain, and its homeostasis is crucial for the maintenance of cognitive function. Accumulated evidence shows that HDAC4 expression is dysregulated in several brain disorders, including neurodegenerative diseases and mental disorders. Moreover, cognitive impairment is a characteristic feature of these diseases, indicating that aberrant HDAC4 expression plays a pivotal role in their cognitive impairment. This review aims to describe the current understanding of HDAC4's role in the maintenance of cognitive function and its dysregulation in neurodegenerative diseases and mental disorders, to discuss underlying molecular mechanisms, and to provide an outlook on targeting HDAC4 as a potential therapeutic approach to rescue cognitive impairment in these diseases. INTRODUCTION Histone deacetylases (HDACs), together with histone acetyltransferases (HATs), are implicated in chromatin remodeling and subsequent transcriptional regulation by controlling the status of histone acetylation. Histone acetylation makes the chromatin conformation more relaxed, facilitating gene transcription, whereas histone deacetylation induces a condensed chromatin conformation, repressing gene transcription. By controlling the status of histone acetylation, HDACs are involved in diverse physiological and pathological processes. Moreover, the function of HDACs is not limited to histone deacetylation. Recent evidence suggests that HDACs may also contribute to the deacetylation of non-histone proteins (Lardenoije et al., 2015). In addition, HDACs also have deacetylase-independent functions, as exemplified by histone deacetylase 4 (HDAC4) (Lardenoije et al., 2015; Han et al., 2016). HDAC4 is highly expressed in brain (Grozinger et al., 1999; Bolger and Yao, 2005; Darcy et al., 2010). It plays a key role in the maintenance of cognitive function, and its alteration is associated with cognitive impairment in both age-related neurodegenerative diseases (e.g., Alzheimer's disease, AD) and development-related mental disorders (e.g., autism). Therefore, the role of HDAC4 in cognitive function, its dysregulation in cognitive impairment-related neurodegenerative diseases and mental disorders, and the underlying mechanisms are discussed in this review. HDAC Classification Eighteen human HDACs have been identified and are classified into four groups based on their homology to yeast HDACs (Didonna and Opal, 2015). Class I HDACs, consisting of HDAC1, 2, 3, and 8, are homologous to yeast RPD3, while class II HDACs, consisting of HDAC4, 5, 6, 7, 9, and 10, have high identity to yeast HDA1. According to their protein structure and motif organization, class II HDACs are further divided into two subclasses: class IIa with HDAC4, 5, 7, and 9, and class IIb with HDAC6 and 10.
Class III HDACs, named sirtuins and including SIRT1-7, are homologous to yeast SIR2. In contrast to the zinc-dependent HDACs of class I and class II, class III HDACs are nicotinamide adenine dinucleotide (NAD)-dependent. HDAC11, the only member of class IV, is also a Zn-dependent HDAC. The HDAC4 Gene and Protein The human HDAC4 gene, located on chromosome 2q37.3, spans approximately 353,480 bp and encodes the HDAC4 protein of 1084 amino acids. HDAC4 shuttles between the cytoplasm and the nucleus depending on its signal transduction-related phosphorylation status (Mielcarek et al., 2013). Normally, phosphorylated HDAC4 is retained in the cytoplasm, while dephosphorylated HDAC4 is imported into the nucleus (Nishino et al., 2008). The HDAC4 protein consists of a long N-terminal domain and a highly conserved C-terminal catalytic domain. The deacetylase activity of HDAC4 is almost undetectable even though it has a conserved C-terminal catalytic domain, which might be caused by the substitution of a tyrosine with a histidine in the enzyme active site (Lahm et al., 2007). However, HDAC4 does play an important role in the regulation of gene transcription, via several routes (Figure 1). First, HDAC4 interacts with multiple transcription factors [e.g., myocyte enhancer factor 2 (MEF2), runt related transcription factor 2 (Runx2), serum response factor (SRF), heterochromatin protein 1 (HP1), nuclear factor kappa B (NF-κB)] regulating gene transcription (Sando et al., 2012; Ronan et al., 2013). Second, although HDAC4 per se lacks deacetylase activity, it may be involved in histone deacetylation-mediated transcriptional regulation by interacting with HDAC3, another member of the HDAC family that does possess deacetylase activity (Grozinger et al., 1999; Lee et al., 2015). For example, Lee et al. (2015) showed that HDAC4 is crucial for HDAC3-mediated deacetylation of the mineralocorticoid receptor, which could be inhibited by a class I HDAC inhibitor but not a class II HDAC inhibitor, indicating that HDAC4 is implicated in protein deacetylation via the deacetylase activity of HDAC3. Moreover, the deacetylase activity of HDAC4 needs to be further investigated by multiple approaches, as the in vitro assay from a single study (Lahm et al., 2007) is not conclusive. As the nuclear localization of HDAC4 is regulated by its interaction with 14-3-3, it is possible that the alteration of nuclear HDAC4 mediated by tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein (14-3-3) is involved in transcriptional regulation (Nishino et al., 2008). A recent study suggests that HDAC4 may also function to regulate protein SUMOylation via interacting with the SUMO E2-conjugating enzyme Ubc9, contributing to memory formation (Figure 1) (Schwartz et al., 2016). FIGURE 1 | Histone deacetylase 4 (HDAC4) in cognitive function and molecular mechanisms. HDAC4 is a global regulator of the transcription of genes involved in synaptic plasticity, neuronal survival, and neurodevelopment by interacting with multiple proteins, which is essential for the maintenance of normal cognitive function. Moreover, HDAC4 may function to regulate protein SUMOylation via interacting with Ubc9, contributing to the maintenance of cognitive function. Solid and dashed lines represent confirmed and possible mechanisms, respectively.
HDAC4 IN COGNITIVE FUNCTION AND MOLECULAR MECHANISMS A growing body of evidence indicates that the homeostasis of HDAC4 is crucial for the maintenance of cognitive function through the regulation of genes involved in synaptic plasticity, neuronal survival and neurodevelopment (Figure 1) (Schwartz et al., 2016). HDAC4 and Synaptic Plasticity Histone deacetylase 4 interacts with multiple transcription factors (e.g., MEF2, Runx2, SRF, HP1), 14-3-3, HDAC3 and others, regulating the transcription of genes involved in synaptogenesis, synaptic plasticity and neurodevelopment, such as activity regulated cytoskeleton associated protein (Arc) and protocadherin 10 (Pcdh10) (Figure 1) (Grozinger et al., 1999; Sando et al., 2012; Ronan et al., 2013; Rashid et al., 2014; Lee et al., 2015; Sharma et al., 2015; Krishna et al., 2016; Nott et al., 2016). First, nuclear HDAC4 represses the expression of constituents of synapses, leading to the impairment of synaptic architecture and strength in mice (Sando et al., 2012). In addition, mice carrying a gain-of-function nuclear HDAC4 mutant exhibit deficits in neurotransmission, learning and memory (Sando et al., 2012). On the other hand, silencing HDAC4 expression also results in the impairment of synaptic plasticity and in learning and memory deficits in both mice and Drosophila (Kim et al., 2012; Fitzsimons et al., 2013). A proteomics analysis indicates that HDAC4 is a regulator of proteins involved in neuronal excitability and synaptic plasticity, which are differentially expressed in normal aging subjects and AD patients and are associated with memory status (Neuner et al., 2016). A recent study showed that HDAC4 interacts with Ubc9 during memory formation, while the reduction of Ubc9 in the adult brain of Drosophila impairs long-term memory, suggesting that the role of HDAC4 in memory formation may be associated with the regulation of protein SUMOylation (Figure 1) (Schwartz et al., 2016). The above evidence suggests that HDAC4 homeostasis is crucial for the maintenance of synaptic plasticity and cognitive function, i.e., both HDAC4 elevation and reduction lead to cognitive deficits. It is not surprising that both up-regulation and down-regulation of HDAC4 impair synaptic plasticity and memory function, as previous studies have demonstrated that a number of molecules play a dual role in synaptic plasticity and memory function. For example, both overexpression and disruption of regulator of calcineurin 1 (RCAN1) lead to synaptic impairment and memory deficits in Drosophila and mice (Chang et al., 2003; Hoeffer et al., 2007; Chang and Min, 2009; Martin et al., 2012). Moreover, the bidirectional alterations of HDAC4 may differentially disrupt the balance between HDAC4 and its interacting partners, leading to synaptic impairment and memory deficits, as HDAC4 is implicated in multiple signaling pathways by interacting with many functional proteins (Figure 1). However, the underlying mechanisms need to be further investigated. HDAC4 and Apoptosis Neuronal apoptosis is a major mechanism linked to cognitive deficits. In addition to synaptic plasticity, HDAC4 is also involved in neuronal apoptosis. For example, HDAC4 interacts with NF-κB, repressing proapoptotic gene expression, and it also inhibits ER stress-induced apoptosis by interacting with activating transcription factor 4 (ATF4), a key transcription factor in the ER stress response (Figure 1) (Zhang et al., 2014; Vallabhapurapu et al., 2015). Moreover, Majdzadeh et al.
(2008) showed that HDAC4 overexpression protects mouse cerebellar granule neurons (CGNs) from apoptosis by inhibiting cyclin dependent kinase 1 (CDK1) activity. Consistently, upregulation of HDAC4 by an NMDAR antagonist protects mouse hippocampal neurons from naturally occurring neuronal death, whereas HDAC4 reduction promotes neuronal apoptosis during development (Chen and Cepko, 2009). Sando et al. (2012) further demonstrated that the HDAC4 C-terminus is crucial for rescuing HDAC4 knockdown-induced cell death and reduction of synaptic strength in mouse brains. However, Bolger and Yao (2005) showed that increased expression of nuclear-localized HDAC4 promotes neuronal apoptosis in mouse CGNs, while down-regulation of HDAC4 protects neurons from stress-induced apoptosis. The conflicting results may be caused by different cell types and culture conditions: Majdzadeh et al. (2008) cultured CGNs for 4-5 days before transfection, while Bolger and Yao (2005) transfected CGNs immediately after cell isolation. Although CGNs were used in both studies, the maturation status of the neurons at the time of transfection may have contributed significantly to the conflicting results. Moreover, short-term and long-term protein overexpression may have opposite effects. For example, a previous study showed that RCAN1 plays opposite roles in neuronal apoptosis at different culture stages, which may be associated with aging or maturation processes (Wu and Song, 2013). HDAC4 and Brain Development Histone deacetylase 4 interacts with multiple transcription factors, repressing the transcription of genes involved in neurodevelopment (Figure 1) (Sando et al., 2012; Ronan et al., 2013). Moreover, HDAC4 may be implicated in neurodevelopment via its interaction with HDAC3, which is necessary for brain development (Norwood et al., 2014). In humans, both HDAC4 deletion and duplication lead to mental retardation and intellectual disability, suggesting that HDAC4 plays an important role in neurodevelopment, which is directly linked to cognitive function (Shim et al., 2014). ABERRANT HDAC4 EXPRESSION/LOCALIZATION IN NEURODEGENERATIVE DISEASES Cognitive decline, in particular learning and memory deficits, is a characteristic of neurodegenerative diseases [e.g., Alzheimer's disease (AD), Huntington's disease (HD), and Parkinson's disease (PD)], which are associated with synaptic dysfunction and synaptic and neuronal loss. A large body of evidence indicates that aberrant HDAC4 expression and subcellular distribution may contribute to the cognitive decline in patients with neurodegenerative diseases (Table 1). First, increased HDAC4 expression was observed in the prefrontal cortex of aged individuals, and aging is the major risk factor for neurodegenerative disorders (Sharma et al., 2008). In addition, a more recent study showed that HDAC4 is a global regulator of memory deficits with age (Neuner et al., 2016). Moreover, HDAC4 is involved in the regulation of SIRT1, which is implicated in both aging and memory processes in rats (Sasaki et al., 2006; Sommer et al., 2006; Quintas et al., 2012; Han et al., 2016). Furthermore, as mentioned above, HDAC4 homeostasis is crucial for the maintenance of cognitive function, i.e., both HDAC4 elevation and reduction lead to cognitive deficits. HDAC4 in Alzheimer's Disease Alzheimer's disease is the most common neurodegenerative disorder in the elderly leading to dementia. Progressive memory loss is the clinical characteristic of AD.
Neuritic plaques, neurofibrillary tangles and neuronal loss are the neuropathological hallmarks of AD. Amyloid β (Aβ) and phosphorylated microtubule associated protein tau (Tau) are the major components of neuritic plaques and neurofibrillary tangles, respectively, while apoptosis is a major mechanism of neuronal loss (Wu et al., 2014). The nuclear expression of HDAC4 is markedly increased in brains of AD patients, while the alteration of total HDAC4, including both cytoplasmic and nuclear HDAC4, remains inconclusive (Shen et al., 2016). However, the expression of HDAC4 was significantly increased in AD model mice (Anderson et al., 2015). Moreover, ApoE4, the only confirmed genetic risk factor of late onset AD, increases nuclear HDAC4 levels compared with ApoE3 in transgenic mice (Sen et al., 2015). This suggests that increased HDAC4 expression or its nuclear localization may contribute to learning and memory deficits in patients with AD. HDAC4 in Frontotemporal Lobar Degeneration Frontotemporal lobar degeneration (FTLD) is a heterogeneous neurodegenerative process resulting in frontotemporal dementia. Progressive difficulties in planning, organizing and language are the major characteristics of FTLD. The atrophy of the frontal and temporal lobes and inclusions containing abnormal accumulations of Tau, TAR DNA binding protein (TDP-43) or FUS RNA binding protein (FUS) are the characteristic pathological features of FTLD. In FTLD patients, cytoplasmic HDAC4 is increased in granule cells of the dentate gyrus, while HDAC5, another member of class IIa HDACs, is not altered, suggesting that HDAC4 may have a specific role in the pathology of FTLD (Whitehouse et al., 2015). HDAC4 in Huntington's Disease Huntington's disease is a common autosomal dominant neurodegenerative disease, which is caused by the expansion of polyglutamine repeats in the huntingtin (HTT) protein, termed mutant HTT (mHTT) (Myers et al., 1993; Kremer et al., 1994). The characteristic clinical features are chorea, progressive cognitive decline, and psychiatric symptoms, while cognitive problems are often the earliest symptoms in patients with HD (Walker, 2007). mHTT impairs fast axonal transport, disrupts mitochondrial function and the inflammatory response, and promotes apoptosis, which may contribute to the cognitive decline (Szebenyi et al., 2003; Beal and Ferrante, 2004; Trushina et al., 2004). Growing evidence suggests that increased HDAC4 is implicated in HD pathology, such that reducing HDAC4 expression has beneficial effects. First, overexpression of miR-22 has a protective effect on mHTT model cells, which may be mediated by HDAC4 reduction, as HDAC4 is a target gene of miR-22 (Jovicic et al., 2013). Second, HDAC4 interacts with microtubule associated protein 1S (MAP1S), resulting in MAP1S destabilization and reduction, subsequently suppressing the clearance of mHTT aggregates and potentiating the toxicity of mHTT to cultured cells (Yue et al., 2015). Moreover, HDAC4 associates with HTT in a polyglutamine-length-dependent manner and co-localizes with cytoplasmic aggregates. However, reducing HDAC4 expression delays the formation of cytoplasmic aggregates, restores BDNF expression, and rescues synaptic dysfunction in HD mouse models (Mielcarek et al., 2013). In addition, suberoylanilide hydroxamic acid (SAHA) promotes HDAC4 degradation, suggesting that reducing HDAC4 expression may contribute to SAHA's rescue effects on HD model mice via multiple HDAC4-associated pathways (Figure 1) (Mielcarek et al., 2011).
However, SAHA is also an inhibitor of class I HDACs and HDAC6, suggesting that its rescue effects may also be mediated by inhibiting the deacetylase activity of class I HDACs and HDAC6. Although Quinti et al. (2010) showed that the reduction of HDAC4 is associated with the progression of HD in HD model mice, Mielcarek et al. (2013) did not observe this reduction in the same HD model mice. However, they found that reducing HDAC4 expression has beneficial effects on HD mice (Mielcarek et al., 2013). As the alteration of HDAC4 in HD model mice remains inconclusive and evidence from HD patients is still lacking, the alteration of HDAC4 in HD and its role in the pathology of HD need to be further investigated. HDAC4 in Parkinson's Disease Parkinson's disease is the second most common neurodegenerative disease in the elderly. In addition to motor dysfunctions such as tremor, rigidity and gait disturbances, PD patients also have cognitive impairments (Jankovic, 2008). The major pathological hallmark of PD is the Lewy body, which mainly consists of protein aggregates of α-synuclein, parkin, and ubiquitin (Jellinger, 2009). In addition, the same pathological features were observed in patients with Lewy body (LB) dementia (Jellinger, 2009). A couple of studies indicate that HDAC4 is associated with the pathology of PD. First, mutations in the Parkin gene cause early onset familial PD, and dysregulation of parkin has also been observed in sporadic PD. Second, parkin controls the levels of sumoylated HDAC4 (Kirsh et al., 2002; Um et al., 2006). Moreover, HDAC4 co-localizes with α-synuclein in the LB (Takahashi-Fujigasaki and Fujigasaki, 2006). In addition, paraquat, a widely used herbicide implicated in the induction of the pathology of PD, reduces the expression of HDAC4 in cultured cells (Song et al., 2011). Furthermore, previous studies showed that aberrant HDAC4 expression results in learning and memory deficits in both mice and Drosophila (Kim et al., 2012; Fitzsimons et al., 2013). The above evidence suggests that alteration of HDAC4 may contribute to cognitive decline in patients with PD. However, no direct evidence shows that HDAC4 is implicated in the pathology of PD. HDAC4 in Ataxia-Telangiectasia Ataxia-telangiectasia (A-T), a rare neurodegenerative disease, is caused by mutations in the ATM gene. A-T patients show many premature aging components and are characterized by difficulty in movement and coordination and by early cognitive impairment, including learning and memory deficits (Vinck et al., 2011; Shiloh and Lederman, 2016). In ATM-deficient mice, nuclear HDAC4 is increased, which is mediated by the reduction of ATM-dependent phosphorylation of protein phosphatase 2A (PP2A). Reduced phosphorylation of PP2A results in increased HDAC4 dephosphorylation by enhancing the PP2A-HDAC4 interaction (Li et al., 2012). HDAC4 dephosphorylation promotes its nuclear import and the subsequent dysregulation of genes involved in synaptic plasticity, neuronal survival and neurodevelopment, which may contribute to the cognitive deficits in ATM mice. Consistently, reduced ATM accompanied by an increase of nuclear HDAC4 has been observed in brains of AD patients (Shen et al., 2016). ABERRANT HDAC4 EXPRESSION/FUNCTION IN MENTAL DISORDERS Many mental disorders, including autism spectrum disorders (ASDs), depression, and schizophrenia, are associated with neurodevelopmental defects, and cognitive impairment is a core feature of mental disorders (Ronan et al., 2013).
Increasing evidence indicates that aberrant HDAC4 expression or function plays an important role in the cognitive deficits of mental disorders (Table 1). HDAC4 in ASD and BDMR Syndrome Autism spectrum disorder is characterized by the impairment of social and communication abilities, as well as cognitive defects. Several lines of evidence suggest that dysregulation of HDAC4 is implicated in ASD (Pinto et al., 2014; Fisch et al., 2016). First, HDAC4 mRNA was significantly increased in autistic brains (Nardone et al., 2014). Moreover, ASD, intellectual disability, developmental delay and other features are characteristics of brachydactyly-mental retardation (BDMR) syndrome, which is caused by a 2q37 microdeletion. Importantly, the HDAC4 gene is located in this small region (Lacbawan, 1993-2016). A rare case of BDMR syndrome carries an inactive mutant of HDAC4, suggesting that HDAC4 deficiency may be the cause of BDMR syndrome (Lacbawan, 1993-2016; Williams et al., 2010). Moreover, in patients with BDMR syndrome, HDAC4 modulates the severity of symptoms in a dosage-dependent manner, which further confirms the role of HDAC4 in ASD and other BDMR features (Morris et al., 2012). HDAC4 in Depressive Disorders Depressive disorders are the most common mood disorders leading to disability and are characterized by the presence of sad, empty, or irritable mood and cognitive impairment (Rock et al., 2014). Recent studies strongly suggest that HDAC4 is implicated in the pathology of depressive disorders. First, aberrant expression of HDAC4 mRNA has been detected in patients with depression (Otsuki et al., 2012). Consistently, antidepressant treatment reduces the recruitment of HDAC4 to the glial cell-derived neurotrophic factor (GDNF) promoter, consequently increasing the expression of GDNF, which is reduced in patients with depression (Otsuki et al., 2012; Lin and Tseng, 2015). In patients with bipolar disorder (BPD), HDAC4 mRNA is significantly increased in the depressive state, while its expression is markedly decreased in the remissive state (Hobara et al., 2010). In addition, HDAC4 mRNA is significantly increased in brains of forced-swim stress-induced and postnatal fluoxetine-induced depression model mice (Sailaja et al., 2012; Sarkar et al., 2014). Intriguingly, adult fluoxetine application does not induce depression-like behavior in mice, which is associated with unchanged HDAC4 expression. Ectopic overexpression of HDAC4 in the hippocampus is sufficient to induce depression-like behavior in adult mice, indicating that HDAC4 elevation is key to inducing depression-like behavior (Sarkar et al., 2014). Furthermore, depression is a common feature in AD, which may be associated with the increase of HDAC4 expression in AD patients. HDAC4 in Schizophrenia Schizophrenia is a complex psychiatric disorder characterized by impairments in behavior, thought, and emotion. Cognitive impairment is common in patients with schizophrenia, in particular learning and memory deficits. Several pieces of evidence suggest that HDAC4 might be associated with the pathology of schizophrenia. First, one SNP (rs1063639) in the HDAC4 gene is associated with the development of schizophrenia in a Korean population (Kim et al., 2010). Moreover, in patients with schizophrenia, HDAC4 mRNA is negatively associated with the expression of GAD67, a candidate gene of schizophrenia (Sharma et al., 2008). However, the exact role of HDAC4 in the cognitive deficits of schizophrenia needs to be further investigated.
HDAC4, A SPECIFIC TARGET FOR COGNITIVE IMPAIRMENT Growing evidence indicates that HDAC4 is a specific target for the treatment of cognitive impairment in multiple disorders, distinct from other HDACs. First, HDAC4 is highly enriched in brain compared with other HDACs. Second, HDAC4 has no or only weak HDAC activity, suggesting that global HDAC inhibitors, which target the catalytic sites of HDACs, may have no effect on HDAC4's function (Sando et al., 2012). Consistently, HDAC4 has a different effect on cognitive function compared with other HDACs. For example, conditional deletion of HDAC4 leads to learning and memory deficits, while global HDAC inhibition or HDAC2 deficiency significantly improves learning and memory in mice (Vecsey et al., 2007; Guan et al., 2009; Kim et al., 2012). Moreover, the maintenance of HDAC4 homeostasis is crucial for disease treatment, as either increased or decreased HDAC4 expression is detrimental to cognitive function. This suggests that HDAC4 is a potential target for the treatment of cognitive impairment. However, only one selective HDAC4 inhibitor, tasquinimod, is commercially available, and its effect on cognitive function has not been explored. Therefore, specific HDAC4 modulators should be developed, and their roles in cognitive disorders need to be investigated. CONCLUSION Although HDAC4 belongs to the HDAC family, its deacetylase activity is weak or undetectable. Thus, it remains elusive whether HDAC4 per se could repress gene transcription through its own HDAC activity (Figure 1). However, HDAC4 can regulate the transcription of genes involved in synaptic plasticity, neuronal survival, and neurodevelopment by interacting with multiple proteins, which is essential for the maintenance of normal cognitive function (Figure 1). Moreover, HDAC4 may function to regulate protein SUMOylation via interacting with Ubc9, contributing to the maintenance of cognitive function (Figure 1). Furthermore, aberrant expression of HDAC4 may be implicated in the cognitive impairment of neurodegenerative diseases and mental disorders. Therefore, HDAC4 is a potential therapeutic target to rescue cognitive deficits in the above disorders. AUTHOR CONTRIBUTIONS YW: Formulated the study, wrote the manuscript, and designed the figure. FH, XW: Formulated the study and wrote the manuscript. QK, XH, BB: Provided intellectual input, revised the manuscript, and led the project.
5,348
2016-11-01T00:00:00.000
[ "Biology", "Psychology" ]
Uncovering human METTL12 as a mitochondrial methyltransferase that modulates citrate synthase activity through metabolite-sensitive lysine methylation Lysine methylation is an important and much-studied posttranslational modification of nuclear and cytosolic proteins but is present also in mitochondria. However, the responsible mitochondrial lysine-specific methyltransferases (KMTs) remain largely elusive. Here, we investigated METTL12, a mitochondrial human S-adenosylmethionine (AdoMet)-dependent methyltransferase, and found it to methylate a single protein in mitochondrial extracts, identified as citrate synthase (CS). Using several in vitro and in vivo approaches, we demonstrated that METTL12 methylates CS on Lys-395, which is localized in the CS active site. Interestingly, the METTL12-mediated methylation inhibited CS activity and was blocked by the CS substrate oxaloacetate. Moreover, METTL12 was strongly inhibited by the reaction product S-adenosylhomocysteine (AdoHcy). In summary, we have uncovered a novel human mitochondrial KMT that introduces a methyl modification into a metabolic enzyme and whose activity can be modulated by metabolic cues. Based on the established naming nomenclature for similar enzymes, we suggest that METTL12 be renamed CS-KMT (gene name CSKMT). Thus far, only a single human mitochondrial KMT has been characterized, namely ETFβ-KMT (gene name ETFBKMT; alias METTL20), which is a member of MTF16 and targets Lys-200 and Lys-203 in the β-subunit of electron transfer flavoprotein (ETFβ) (17,19). However, many mitochondrial proteins have been shown to contain methylated lysine residues, and several uncharacterized MTases have a predicted mitochondrial localization, suggesting the existence of additional mitochondrial KMTs (22,23). One interesting candidate is METTL12, which shows sequence similarity to established KMTs and has a predicted mitochondrial localization (20,23). Citrate synthase (CS) resides in the mitochondrial matrix, where it catalyzes the condensation of oxaloacetate (OAA) with acetyl-CoA to form citrate and CoA, i.e. the first committed and irreversible step of the Krebs cycle. Numerous studies have investigated the regulation of CS activity (24-26). It is generally accepted that the overall rate of the CS-catalyzed reaction is mainly determined by the availability of the substrates acetyl-CoA and OAA, which are usually present in mitochondria at concentrations much lower than those required to saturate CS. In addition, CS was reported to be inhibited by citrate and succinyl-CoA (27,28). So far, regulation of CS activity by post-translational modification has not been reported. However, mammalian CS has been reported to be subject to lysine methylation; CS from both human and pig was reported to be trimethylated at Lys-395 (22,29) (amino acid numbering according to UniProt), which is localized in the CS active site (30). In the present study we have taken an activity-based approach to establish the function of METTL12. We detected a single substrate for recombinant METTL12 in METTL12 knock-out (KO) cell extracts, and we found this substrate to be CS. Through extensive in vitro and in vivo studies we demonstrate that METTL12 is responsible for methylation of Lys-395 in CS. We also found that CS activity was diminished by methylation and that METTL12-dependent methylation was inhibited by OAA and AdoHcy. Cloning and mutagenesis Cloning of human open reading frames and mutagenesis were performed as described previously (17).
All constructs were sequence-verified.

Bioinformatics analysis

NCBI Basic Local Alignment Search Tool (BLAST) was used to identify protein sequences homologous to human METTL12 (31). Multiple sequence alignments were generated using algorithms embedded in the Jalview (v2.8) interface (32).

Cell cultivation and generation of stable cell lines

Human cell lines were grown under standard conditions, typically in DMEM GlutaMAX medium supplemented with 10% (v/v) fetal bovine serum (FBS), 100 units/ml of penicillin, and 0.1 mg/ml streptomycin (P/S). Human HAP1 cells were grown in Iscove's modified Dulbecco's GlutaMAX medium (IMDM) supplemented with 10% FBS and P/S. The generation of METTL12 KO HAP1 cells was initiated as a custom (nonexclusive) project with Horizon Genomics (Vienna, Austria), and these cells are now commercially available (Horizon Discovery HZGHC000536c011). Genomic ablation of the METTL12 gene was performed using the CRISPR-Cas9 technology, with guide RNA designed to target the METTL12 gene upstream of motif "Post I" (Fig. 2A). Individual clones were selected by limiting dilution, and frame-shifting events within the METTL12 gene were determined by sequencing of genomic DNA. Complementation of METTL12 KO cells was performed by transfecting cells with a p3×FLAG-CMV-14-derived plasmid that encoded either wild-type or D107A-mutated, C-terminally 3×FLAG-tagged METTL12, using the FuGENE 6 Transfection Reagent (Roche Applied Science). Transfected cells were selected with 1 mg/ml Geneticin (Gibco) and expanded in medium containing Geneticin. Individual clones were screened by Western blot for the presence of the 3×FLAG tag, citrate synthase (CS), and GAPDH (as loading control) using anti-FLAG (Sigma, F1804), anti-citrate synthase (Proteintech, 16131-1-AP), and anti-GAPDH (Abcam, ab9485) antibodies, respectively.

Transient transfection and fluorescence microscopy

HeLa cells were transiently transfected with pEGFP-N1-derived plasmids (Clontech), encoding human full-length METTL12 or only its N terminus (consisting of amino acids 1-30) fused to the N terminus of enhanced green fluorescent protein (EGFP). Cells were analyzed by confocal fluorescence microscopy 24 h after transfection. Living cells were stained with 50 nM MitoTracker Deep Red FM (Life Technologies) and 0.5 μg/ml Hoechst 33258 (Sigma) to visualize the mitochondria and the nuclei, respectively. Cells were imaged using an Olympus FluoView 1000 (IX81) confocal fluorescence microscopy system with a PlanApo 60× NA 1.1 oil objective (Olympus). The different fluorophores were excited at 405 nm (Hoechst), 488 nm (EGFP), and 635 nm (MitoTracker), and Kalman averaging (n = 3) was used to record multichannel images. The fluorescent signals emitted from EGFP, MitoTracker, and Hoechst were acquired through green, red, and blue channels, respectively, and merged.

Proteins were >90% pure as assessed by SDS-PAGE and Coomassie Blue staining. Protein concentrations were measured using the Pierce BCA protein assay kit (Thermo Fisher Scientific).

Preparation and fractionation of cell extracts

Crude mitochondria-enriched (mitochondrial) fraction was prepared using the Mitochondria Isolation Kit for Cultured Cells (Thermo Fisher Scientific) according to the manufacturer's instructions by following the reagent-based protocol. Human cells or mitochondrial fractions were lysed for 5 min at 4°C in lysis buffer supplemented with 1 mM dithiothreitol and protease inhibitor mixture (catalog no. P8340; Sigma), and the resulting lysates were cleared by centrifugation.
Samples of frozen pig organs were fragmented mechanically, lysed in lysis buffer (supplemented as above), sonicated, and cleared by centrifugation. Mitochondrial or whole-cell extracts were fractionated at 4°C by ion-exchange chromatography using the Pierce Strong Cation Exchange (S) Spin Column (Thermo Fisher Scientific). First, the NaCl concentration was reduced to 50 mM by diluting cell lysates with dilution buffer (50 mM Tris-HCl, pH 7.4, 5% glycerol), and then extracts were applied onto the S-column equilibrated in dilution buffer. Material bound to the S-column was eluted in 100-μl aliquots by a step gradient of increasing NaCl concentrations prepared in dilution buffer, similarly as previously described (33).

In vitro methyltransferase assay using [3H]AdoMet

To test the MTase activity of METTL12 on cellular material, 10-μl reactions were assembled on ice containing 1× storage buffer, 50 pmol of recombinant METTL12, 40-60 μg of protein from cell extracts, and 0.5 μCi of [3H]AdoMet (PerkinElmer Life Sciences) ([AdoMet]total = 0.64 μM, specific activity = 78.2 Ci/mmol). Reaction mixtures were incubated at 30°C for 1 h and analyzed by SDS-PAGE and fluorography, similarly as previously described (18). Typically, fluorography experiments were performed three times, with similar results, and data from a representative experiment are shown.

Preparation of samples for MS analysis

In vitro methylation of recombinant or cellular (in extract) proteins, for the purpose of mass spectrometry (MS) analysis, was performed as in the above section, except that [3H]AdoMet was replaced with nonradioactive AdoMet (1 mM). When indicated, the methylation reaction additionally contained 1 unit/ml AdoHcy hydrolase (AdoHcyase) from rabbit erythrocytes (Sigma). CS present in extracts was up-concentrated by loading cleared lysates on the S-column and eluting bound material with elution buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 5% glycerol) with added protease inhibitor mixture. Proteins were resolved by SDS-PAGE and stained with Coomassie, the portion of gel containing the protein of interest was excised and subjected to in-gel trypsin (Sigma) or chymotrypsin (Roche Applied Science) digestion, and the resulting proteolytic fragments were analyzed by liquid chromatography MS, similarly as previously described (18). MS data were analyzed against in-house-maintained human and pig protein sequence databases using SEQUEST and Proteome Discoverer (Thermo Fisher Scientific). The mass tolerances of a fragment ion and a parent ion were set as 0.5 Da and 10 ppm, respectively. Methionine oxidation, cysteine carbamidomethylation, and lysine and arginine methylation were selected as variable modifications. MS/MS spectra of peptides corresponding to methylated CS were manually searched by Qual Browser (v2.0.7).

Citrate synthase activity assay

The activity of CS was assayed by reaction of 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB) with the free thiol group of CoA (generated as a result of citrate formation from OAA and acetyl-CoA), which leads to formation of yellow 5-thio-2-nitrobenzoic acid (TNB) that is detected spectrophotometrically at 412 nm (34). To determine the effect of METTL12-mediated methylation on CS enzymatic activity, 0.5 μM CS was incubated for 1 h at 30°C with 2 μM METTL12, either wild-type or D107A-mutated, in the absence or presence of 1 mM AdoMet or AdoHcy, all in 1× storage buffer. Next, the incubation mixture was diluted 250× in storage buffer supplemented with 100 μM DTNB and 300 μM acetyl-CoA. Samples were left to equilibrate at room temperature for 1 min, and then the reaction was started by adding 300 μM OAA through manual mixing. For titration experiments, the reaction was started by adding varying amounts of OAA. The kinetics of TNB formation was monitored continuously at 412 nm for 1 min using a Shimadzu UV-1601 spectrophotometer, and the initial rate of reaction was calculated from the slope of the linear part of the kinetic curve and expressed as the change in absorption at 412 nm per min (ΔA412/min). When indicated, the results from individual experiments were normalized to the activity of CS in the appropriate control and averaged.
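To make the rate calculation concrete, the following is a minimal Python sketch (not from the paper) of how an initial rate can be extracted from a 1-min A412 trace by fitting the linear part of the curve. The synthetic trace and the TNB extinction coefficient used for the unit conversion (a commonly cited value of about 13.6 mM^-1 cm^-1 at 412 nm) are illustrative assumptions.

    # Sketch: initial CS reaction rate from a DTNB/TNB absorbance trace.
    # eps_mM_cm (TNB extinction coefficient) and the trace are assumed values.
    import numpy as np

    def initial_rate(times_s, a412, eps_mM_cm=13.6, path_cm=1.0):
        """Return (delta A412/min, approximate mM TNB formed per min)."""
        slope_per_s = np.polyfit(times_s, a412, 1)[0]  # slope of the linear part
        dA_per_min = slope_per_s * 60.0
        return dA_per_min, dA_per_min / (eps_mM_cm * path_cm)

    t = np.arange(0, 65, 5)            # sampled every 5 s over 1 min
    trace = 0.05 + 0.004 * t           # synthetic, linearly rising A412
    print(initial_rate(t, trace))      # -> (0.24, ~0.018 mM TNB/min)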
Statistical analysis

The independent two-sample Student's t test was used to evaluate whether the means of two populations differ, with results reported as p values.

METTL12 is a mitochondrial 7BS MTase mainly found in vertebrates

We recently reported that the uncharacterized human 7BS MTase METTL12 is likely a KMT due to its sequence similarity with other established human KMTs (20). Also, METTL12 contains a putative MTS (23), and because only a single mitochondrial KMT has been characterized thus far, we found it of particular interest to investigate the function of METTL12. Protein sequence searches revealed that putative METTL12 orthologues are mainly restricted to vertebrates, where they show a somewhat scattered evolutionary distribution, but they are also present in some invertebrate animals (Fig. 1A). Among mammals, METTL12 can be found in human, cow, and pig, but it is present only in a subset of rodents, e.g. it is found in guinea pig but is absent in rat and mouse. An alignment of METTL12 orthologues revealed the presence of conserved hallmark motifs characteristic for 7BS MTases, such as Motif I, Post I, and Motif II (Fig. 1A) (35), as well as the specificity-associated motif Post II, which is very similar to that found in eEF1AKMT2, eEF1AKMT4, and METTL13 (20). By using the MitoProt algorithm (36), we found that METTL12 is likely localized to mitochondria, as the initial ~28 N-terminal amino acids of METTL12 were predicted to represent an MTS. Indeed, both the full-length METTL12 and the isolated putative MTS were, when expressed as N-terminal fusions with GFP, able to target GFP to mitochondria, thus confirming that METTL12 is a mitochondrial protein (Fig. 1B).

Generation of METTL12 KO cells and demonstration of protein MTase activity

To functionally characterize METTL12, the METTL12 gene was disrupted in the haploid HAP1 cells using CRISPR/Cas9 technology. Abrogation of the METTL12 gene function was assured by designing a guide RNA to target a sequence located upstream of motif Post I, which contains a catalytically important acidic residue (i.e. Asp-107 in METTL12) that is crucial for AdoMet binding in 7BS MTases (12,15,17,20). Through DNA sequencing, a clone was identified that carried a 1-bp insertion at the target site, resulting in a shifted reading frame. This mutant gene encodes a predicted protein encompassing the 51 N-terminal amino acids of METTL12 followed by 66 residues resulting from out-of-frame translation. The mutant protein is thus severely truncated relative to the wild-type protein (240 amino acids) and highly likely inactive (Fig. 2A). Previously, we showed that many recombinant human KMT enzymes can methylate their respective substrates in a cellular extract (14,15,17,18).
In several cases, such methylation was much more prominent in extracts from corresponding KMT KO cells, due to the substrate being exclusively in an unmethylated state. Consequently, we tested the ability of recombinant METTL12 to methylate proteins in a mitochondria-enriched extract from METTL12 KO cells. For this purpose, we used a truncated version of METTL12 that lacked the 28 N-terminal amino acids, predicted to represent the mature, MTS-less version of the protein. The methylation reactions were performed in the presence of [3H]AdoMet and then analyzed by SDS-PAGE and fluorography, enabling the visualization of methylated proteins as distinct bands. Interestingly, we detected radiolabeling of a protein with an apparent molecular mass of ~48 kDa upon incubation of METTL12 with the extract from KO cells. No such labeling was observed when the wild-type (WT) HAP1 cells were used (Fig. 2B), indicating that the 48-kDa band represents a true METTL12 substrate. A putatively enzymatically inactive mutant of METTL12 (D107A) was used as a negative control, and, reassuringly, we did not detect any labeling of the ~48-kDa substrate when this mutant enzyme was used, excluding the possibility that the observed labeling was mediated by an E. coli-derived contaminant (Fig. 2C). In addition, after longer exposures, we detected weak automethylation of METTL12, similar to what has been observed for many other KMTs (12,17,20,33). In summary, these results show that METTL12 is a mitochondrial enzyme with protein MTase activity.

Identification of citrate synthase as a likely substrate of human METTL12

To reveal the identity of the ~48-kDa substrate, we incubated mitochondrial extracts from METTL12 KO cells with [3H]AdoMet and recombinant METTL12 and then subjected the reaction mixtures to fractionation on a cation-exchange (S) column (Fig. 3A). A parallel sample, where wild-type recombinant METTL12 had been replaced by the enzymatically inactive mutant D107A, was subjected to the same fractionation scheme, thus representing a negative control. We found that the ~48-kDa 3H-labeled protein bound to the S-column was optimally eluted between 0.05 and 0.15 M NaCl (denoted as the 0.15 fraction) (Fig. 3B). To identify the METTL12 substrate, the METTL12 KO extract was fractionated (in the absence of radioactivity), the 0.15 fraction was resolved by SDS-PAGE, and the ~48-kDa region was excised from the gel (Fig. 3C), followed by trypsin digestion and protein identification by MS. Among several mitochondrial proteins that were identified (Table 1), only one, namely citrate synthase, had been reported in the literature as being methylated on lysine. CS contains a trimethylated lysine residue at position 368 in the mature protein, corresponding to Lys-395 in the CS precursor (29). The latter numbering is used in protein databases such as UniProt and PhosphoSitePlus (post-translational modifications) and will also be used here by us. The molecular weight of mature CS (49 kDa) matched that of the methylated substrate detected by fluorography, and when an anti-CS antibody was used to detect CS throughout fractionation, it was found that the elution profile of CS was indistinguishable from that of the ~48-kDa substrate (compare Fig. 3, B and C).
To further investigate the potential ability of METTL12 to methylate CS, we incubated the METTL12 KO cell extract with recombinant METTL12, then subjected the extract to fractionation, and the methylation status of Lys-395 in CS from the 0.15 fraction was assessed by MS. Strikingly, we found that Lys-395 was exclusively unmethylated in the KO cells (Fig. 3, D and E), whereas treatment with recombinant METTL12 shifted Lys-395 to the trimethylated (Me3) and dimethylated (Me2) states (Fig. 3, D and E). This demonstrates that METTL12 can methylate Lys-395 in CS in vitro and suggests that the enzyme is responsible for CS methylation also in vivo.

METTL12 catalyzes methylation of Lys-395 in CS in vitro

To further investigate CS as a substrate for METTL12-mediated methylation in vitro, we expressed and purified mature (MTS-less) CS in E. coli, and the resulting recombinant protein was then tested as a substrate for METTL12-mediated methylation. Clearly, METTL12 methylated recombinant CS in vitro, but a rather high amount of enzyme relative to substrate (>2-fold excess) was required to obtain maximal methylation (Fig. 4A). Importantly, protein mass spectrometry demonstrated the presence of trimethylated Lys-395 in recombinant CS that had been treated with METTL12, whereas Lys-395 was found to be unmethylated in untreated CS (supplemental Fig. S1). We also replaced Lys-395 by arginine in recombinant CS, which completely abolished methylation, indicating that Lys-395 is the only methylation site (Fig. 4B). Moreover, we considered the possibility that a neighboring lysine residue, Lys-393, may also be subject to methylation. However, this appears not to be the case, as methylation was not affected by mutation at this position (Fig. 4B) and because we were unable to find evidence for such methylation in the MS data. Taken together, the above data demonstrate that METTL12 catalyzes methylation of CS at Lys-395 in vitro.

Processivity, turnover, and metabolite modulation of METTL12-mediated methylation

Many KMTs that introduce trimethylation on their target lysines have a distributive mode of action, i.e. they introduce a single methyl group per binding event, thus generating a mixture of methylation states at lower enzyme concentrations. In contrast, other trimethylating KMTs are processive, i.e. they introduce all three methyl groups during a single binding event and thus exclusively generate the trimethylated product. To investigate the mode of action of METTL12, we incubated recombinant CS with different amounts of METTL12 and assessed the methylation status of Lys-395 by MS, using extracted ion chromatograms corresponding to the various methylation states of a Lys-395-encompassing peptide (Leu-389-Leu-408). Treatment of CS with a large excess of METTL12 predominantly yielded the trimethylated state, but when equimolar amounts of CS and METTL12 were used, a mixture of the methylation states was observed, clearly demonstrating that METTL12 has a distributive mode of action (Fig. 5A, upper panel).
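The difference between the two modes can be illustrated with a toy simulation; the sketch below (an illustration, not an analysis from the paper) counts methylation states after a fixed number of enzyme-substrate encounters under each assumption.

    # Toy model: distributive KMTs add one methyl per encounter; processive
    # KMTs take a substrate from Me0 to Me3 in a single encounter.
    import random
    from collections import Counter

    def methylate(n_substrate=1000, n_encounters=600, processive=False, seed=1):
        random.seed(seed)
        states = [0] * n_substrate              # Lys-395 state (0-3) per CS molecule
        for _ in range(n_encounters):
            candidates = [i for i, s in enumerate(states) if s < 3]
            if not candidates:
                break
            i = random.choice(candidates)       # one random enzyme-substrate encounter
            states[i] = 3 if processive else states[i] + 1
        return Counter(states)

    print(methylate(processive=False))  # mixture of methylation states
    print(methylate(processive=True))   # only Me0 and Me3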
We noted that the METTL12 enzyme was relatively inefficient and appeared incapable of performing catalytic turnover; at limiting enzyme concentrations, one MTase molecule mediated, on average, the incorporation of approximately a single methyl group (Fig. 5B). Many AdoMet-dependent MTases are inhibited by the byproduct AdoHcy (the demethylated counterpart of AdoMet) (3), and we therefore considered the possibility that inhibition of METTL12 by AdoHcy may prevent catalytic turnover. To test this, we added AdoHcyase, which catalyzes the hydrolysis of AdoHcy, to the reaction mixture for METTL12-mediated methylation of CS. Clearly, the addition of AdoHcyase dramatically increased the efficiency of methylation and improved the catalytic turnover (Fig. 5, A and B). These results indicate that AdoHcy generated in the methylation reaction has a strong inhibitory effect on METTL12. To further investigate the influence of AdoHcy on METTL12 activity, we performed in vitro methylation reactions where the concentrations of CS, METTL12, and AdoMet were held constant (at 2 μM, 2 μM, and 32.6 μM, respectively) whereas the concentration of AdoHcy was varied (0-300 μM). The results showed that METTL12-mediated methylation of CS was strongly inhibited by the addition of AdoHcy (Fig. 5C); visible inhibition was observed already at 1 μM AdoHcy, and methylation was almost completely abolished by 100 μM AdoHcy. Importantly, the addition of AdoHcyase greatly alleviated the inhibitory effect of AdoHcy and, in agreement with the results in Fig. 5B, AdoHcyase increased methylation also in the absence of added AdoHcy. The above results indicate that METTL12 has a much higher affinity for AdoHcy than for AdoMet and that AdoHcy therefore acts as a strong inhibitor of the enzyme, preventing enzymatic turnover.
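As a rough illustration of why such an enzyme is so sensitive to AdoHcy, the sketch below evaluates the standard competitive-inhibition expression v/v0 = (Km + S)/(Km(1 + I/Ki) + S) at the AdoMet concentration used above. The Km and Ki values are assumed for illustration only (chosen so that Km/Ki is about 25; the paper does not report fitted constants).

    # Fractional METTL12 velocity under competitive product inhibition by AdoHcy.
    # Km and Ki (in uM) are assumed, illustrative values with Km/Ki ~ 25.
    def fractional_velocity(S, I, Km=10.0, Ki=0.4):
        return (Km + S) / (Km * (1.0 + I / Ki) + S)

    S = 32.6                                   # uM AdoMet, as in the titration above
    for I in (0.0, 1.0, 10.0, 100.0):          # uM AdoHcy
        print(f"[AdoHcy] = {I:5.1f} uM -> v/v0 = {fractional_velocity(S, I):.2f}")
    # -> 1.00, 0.63, 0.15, 0.02: visible inhibition at 1 uM and near-complete
    #    inhibition by 100 uM, qualitatively matching Fig. 5C.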
We previously found for other 7BS KMTs that the efficiency of lysine methylation could be modulated by the specific binding of various ligands to the substrate (15,18,20). We therefore tested the effect of relevant metabolites, i.e. the CS substrates OAA and acetyl-CoA, as well as the product citrate, on METTL12-mediated CS methylation. We found that OAA had a strong inhibitory effect on METTL12-dependent methylation of CS, i.e. methylation was inhibited by ~80% at 200 μM OAA (Fig. 6A), whereas neither acetyl-CoA nor citrate had any effect (Fig. 6B). Taken together, the data presented above indicate that METTL12 methylates CS in a non-processive reaction, which is inhibited by AdoHcy and OAA.

Modulation of CS activity by METTL12 in vitro

The crystal structure of pig CS shows that Lys-395 is localized close to the active site of the enzyme (30), suggesting that CS activity may be affected by METTL12-mediated methylation. To test this, we incubated CS with METTL12 in the presence of AdoMet and then measured the enzymatic activity of CS. Corresponding samples, where AdoMet had been omitted or replaced by AdoHcy, were included as negative controls. Interestingly, when CS was incubated with METTL12 in the presence of AdoMet, a substantial (~30%) reduction in CS activity was observed relative to the negative controls (Fig. 7A). Notably, no such effect was observed when the inactive METTL12 mutant (D107A) was used (we confirmed that this mutant enzyme was also inactive on recombinant CS; see supplemental Fig. S2). Taken together, these experiments indicate that METTL12-mediated methylation causes a small, but significant, reduction in CS activity. To determine how METTL12-mediated methylation of CS affected the kinetics of the CS reaction, we measured the velocity of the CS-catalyzed reaction as a function of increasing OAA concentration while keeping the concentration of acetyl-CoA constant. These titration experiments were performed in parallel for methylated CS (incubated with AdoMet and wild-type METTL12) and unmethylated CS (incubated with AdoMet and the METTL12 D107A mutant). Although the overall shape of the saturation curves was similar in the two cases, the apparent maximum velocity of the reaction was reduced for methylated CS compared with unmethylated CS (Fig. 7B). These results indicate that METTL12-mediated methylation primarily affects the rate constant of the CS-catalyzed reaction while having less effect on the affinity of CS for OAA.
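A minimal Michaelis-Menten sketch makes the distinction concrete: lowering Vmax at a fixed Km rescales the whole OAA saturation curve without shifting its midpoint. All parameter values below are assumed for illustration; the paper reports the qualitative effect, not fitted constants.

    # Illustrative saturation curves for unmethylated vs. methylated CS:
    # same Km for OAA, ~30% lower Vmax after methylation.
    def v(S, Vmax, Km):
        return Vmax * S / (Km + S)

    Km_OAA = 5.0                                   # uM, assumed
    for S in (1, 5, 20, 100, 300):                 # OAA titration points
        u = v(S, Vmax=1.00, Km=Km_OAA)             # unmethylated (D107A control)
        m = v(S, Vmax=0.70, Km=Km_OAA)             # methylated (wild-type METTL12)
        print(f"[OAA] = {S:3d} uM: ratio methylated/unmethylated = {m/u:.2f}")
    # The ratio stays at 0.70 across all [OAA], the signature of a pure
    # Vmax (rate constant) effect with unchanged substrate affinity.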
METTL12 is responsible for CS methylation in vivo

We showed that recombinant METTL12 catalyzed methylation of CS at Lys-395 in vitro, both on recombinant CS and on CS from METTL12 KO extracts.

[Figure 4. METTL12-mediated methylation of Lys-395 in recombinant CS in vitro. A, methylation of recombinant CS is dependent on METTL12: CS was incubated with [3H]AdoMet and increasing amounts of METTL12, and [3H]methyl incorporation was visualized by fluorography (top) of the Ponceau S-stained membrane (bottom). B, evaluation of CS point mutants as substrates for METTL12-mediated methylation: CS, either wild-type (WT) or mutant (K395R or K393R), was incubated with [3H]AdoMet and METTL12, and [3H]methyl incorporation was visualized by fluorography as in A.]

Taken together with the published observation that CS from pig heart is trimethylated on Lys-395, this indicates that METTL12 is responsible for CS methylation also in vivo. To firmly establish this, we assessed the methylation status of Lys-395 in CS in "wild-type" and METTL12 KO HAP1 cells, as well as in KO cells that had been complemented (supplemental Fig. S3) with ectopically expressed genes encoding either wild-type METTL12 or the enzymatically inactive mutant (D107A). We found that Lys-395 was predominantly trimethylated in the WT cells, with only small amounts of the un-, mono-, and dimethylated forms being detected (Fig. 8A). In contrast, and as already shown, Lys-395 was exclusively observed in the unmethylated state in the METTL12 KO cells (Fig. 8A). Reassuringly, complementation with an expression plasmid carrying the wild-type METTL12 cDNA restored methylation in the KO cells, whereas no such effect was observed with the D107A-mutated cDNA. Taken together with the results from the in vitro experiments, these data firmly establish that the enzymatic activity of METTL12 is necessary and sufficient for CS methylation in vivo.

Methylation level of Lys-395 in CS differs between cell lines and various organs

Our in vitro experiments indicated that CS activity may be regulated by METTL12-mediated methylation in response to alterations in metabolite levels. To further address whether CS methylation is variable and, thus, likely to play a regulatory role, we analyzed the CS Lys-395 methylation status in a panel of human cell lines as well as in several organs from pig. To this end, we partially purified CS from the cells/organs using the purification scheme described previously (depicted in Fig. 3, A and B) and then assessed the methylation status of CS by MS. In the majority of cell lines tested, we observed high levels of Lys-395 methylation, with the trimethylated form (Me3) being predominant (>80%) (Fig. 8B). However, in HEK293 cells, CS showed a considerably lower methylation level, and a mixture of all four possible methylation states of Lys-395 was detected, i.e. unmethylated (Me0: ~36%), mono- (Me1: ~28%), di- (Me2: ~15%), and trimethylated (Me3: ~21%). In all analyzed pig organs (heart, muscle, liver, kidney, brain, ovary, uterus, small intestine, bone marrow, spleen), CS Lys-395 was exclusively found in the trimethylated form (data not shown).

Discussion

In the present study, we have unraveled the biochemical function of the human MTase METTL12 using both in vitro and in vivo approaches. We found METTL12 to be localized to mitochondria and to methylate CS at Lys-395 in a non-processive fashion. Moreover, methylation reduced the activity of CS and was inhibited by the CS substrate OAA as well as by AdoHcy, suggesting that methylation may regulate CS function in response to altered metabolite levels. We found that recombinant METTL12 methylated a single substrate of ~48 kDa in mitochondrial extracts. Partial purification of the extract followed by MS analysis identified several candidate substrates. These candidates included CS, which we subsequently established as a bona fide METTL12 substrate. Mitochondrial elongation factor Tu (TUFM) is another interesting candidate (Table 1) because it represents the mitochondrial ortholog of the cytosolic eEF1A, which is subject to methylation by the closest characterized mammalian homologues of METTL12, i.e. eEF1A-KMT2 and eEF1A-KMT4 (20,21). Consequently, we tested the activity of METTL12 on recombinant TUFM but were unable to detect any methylation, indicating that TUFM is not a substrate of METTL12.

While this manuscript was under preparation, a study from another group implicated METTL12 in methylation of CS (37). Using an antibody that recognized trimethylated lysine residues, various trimethyllysine-containing mitochondrial proteins were detected, and only one of these, subsequently identified as CS, lacked this modification in HAP1-derived METTL12 KO cells. MS analysis demonstrated that Lys-395 in CS was unmethylated in the KO cells. Also, ectopic overexpression of METTL12 in HEK293 cells, where CS is relatively hypomethylated on Lys-395, resulted in a substantial increase in methylation. Although this study linked METTL12 to CS […]

Cellular MTases differ greatly with respect to their affinities for the substrate AdoMet (expressed as the Km value) and the potentially inhibitory product AdoHcy (expressed as the Ki value). Some MTases have an approximately equal affinity for AdoMet and AdoHcy (Km(AdoMet) ≈ Ki(AdoHcy)), others have a higher affinity for AdoMet (Km(AdoMet) < Ki(AdoHcy)), whereas others bind AdoHcy more avidly than AdoMet (Km(AdoMet) > Ki(AdoHcy)) (3). The latter category of MTases will be particularly prone to product inhibition by AdoHcy and sensitive to alterations in the [AdoMet]/[AdoHcy] ratio. Clarke and Banfield have compared Km(AdoMet) and Ki(AdoHcy) values for 23 mammalian MTases (3), and the ones with the relatively highest affinity for AdoHcy displayed Km(AdoMet)/Ki(AdoHcy) ratios in the range 25-30, which is comparable to METTL12 (the addition of only 1 μM AdoHcy inhibited METTL12 activity substantially in the presence of 32.6 μM AdoMet; Fig. 5C). Thus, METTL12 appears, compared with most other mammalian MTases, to be highly sensitive toward alterations in the cellular [AdoMet]/[AdoHcy] ratio. This was also supported by the observation that METTL12 was unable to perform catalytic turnover unless AdoHcyase was added to the reaction mixture.
Importantly, the AdoMet and AdoHcy concentrations used in our in vitro experiments are comparable with those found inside cells; typical cellular concentrations of AdoMet and AdoHcy lie within the ranges 10-100 μM and 0.1-20 μM, respectively (38). Interestingly, a number of recent studies have demonstrated that alterations in cellular metabolism lead to changes in the cellular AdoMet/AdoHcy ratio, which in turn causes changes in histone lysine methylation, thus providing an interesting link between metabolism and epigenetics (2,40,41). Similarly, our in vitro data suggest that CS activity may be modulated by AdoHcy-sensitive, METTL12-mediated lysine methylation in response to metabolic changes.

We found that the CS substrate OAA strongly inhibited METTL12-mediated methylation of CS, whereas no such effect was observed with either the co-substrate acetyl-CoA or with the product citrate. X-ray crystallographic studies have determined the structure of pig CS in complex with various metabolites, and CS exists in an "open" conformation in the absence of ligands or when complexed with citrate, whereas it adopts a "closed" conformation when complexed with OAA (30,42). Only a minor portion of the CS structure is affected by the conformational switch between the closed and open states, but the methylation site Lys-395 is part of a segment that is shifted substantially (Fig. 9). This suggests that OAA exerts its inhibitory effect through inducing the closed conformation and, thereby, a repositioning of Lys-395 and surrounding residues.

We found that METTL12-mediated methylation caused a small, but significant, reduction in CS activity. In contrast, Rhein et al. (37) did not detect any difference in activity between CS isolated from wild-type and METTL12 KO HAP1 cells, and another study did not detect significant differences in catalytic properties between recombinant pig CS, which is unmethylated, and CS from pig heart, which is trimethylated (43). However, in these two studies, the activity of CS from two different sources was compared (CS from WT versus KO cells; recombinant CS versus CS from pig heart), and this approach may be prone to uncertainties (e.g. variable loss of activity during enzyme isolation or inaccuracies in protein concentration determination) that could hinder the detection of small differences. In contrast, we used the same preparation of recombinant CS and then studied the effect of different additions (METTL12, AdoHcy, AdoMet), thus likely avoiding the above-mentioned sources of error and allowing us to demonstrate a small, but robust, effect of methylation on CS activity; an effect that required the presence of both AdoMet and catalytically active METTL12. The small observed effect agrees well with the scattered distribution of METTL12 in mammals, e.g. its absence in rats and mice, which also indicates that methylation is not strictly required for CS function but rather serves an optimizing or regulatory function that is beneficial under certain conditions.

[Figure 8 legend (fragment): … supplemental Fig. S3. B, methylation status of Lys-395 in CS from various human cell lines. CS was enriched by cation-exchange (S-column) chromatography (as in Fig. 3, A and B) and chymotrypsin-digested, and methylation of CS-derived peptides encompassing Lys-395 was analyzed by MS. Shown are the relative intensities of MS signals gated for the different methylation states of the peptides, with error bars indicating the range of values from at least three independent experiments.]
We observed that CS was predominantly trimethylated in the tested human cell lines, except in HEK293 cells, where CS showed a much lower methylation level (Fig. 8B). Strikingly, we detected exclusively fully trimethylated CS in all analyzed pig organs, indicating that trimethylation of CS may represent the default state in METTL12-containing organisms. These observations further support the notion that methylation is important for CS function, but they also raise the question of whether altered methylation actively regulates CS activity in vivo. It should be noted, however, that the analyzed organs came from a healthy, well-fed pig, grown in a controlled environment. Thus, one may speculate that METTL12-mediated methylation may regulate CS activity under certain stress conditions (e.g. starvation or extensive exercise), thus playing a role in the adaptation to such conditions.

Bioinformatics analysis identified METTL12 as a member of a family of 7BS MTases, which also encompasses the human MTases eEF1A-KMT2 (gene name EEF1AKMT2; alias METTL10), eEF1A-KMT4 (gene name EEF1AKMT4; alias ECE2), and METTL13 (20). Including the present work, three of the four human members of this family have now been established as KMTs. This indicates that the remaining uncharacterized human member of this family, METTL13, is likely also a KMT. Numerous human 7BS MTases have been established as KMTs during the last years, and all of these are highly specific, i.e. only a single substrate has been identified or, in the case of METTL21A, a group of highly related substrates (various Hsp70 proteins) (8,13,15,39). Consequently, a naming nomenclature based on substrate specificity has emerged for these enzymes. For example, the MTases METTL20 and FAM86A were found to target the β-subunit of the mitochondrial electron transfer flavoprotein (ETFβ) and eukaryotic elongation factor 2 (eEF2), respectively, and were consequently redubbed ETFβ-KMT and eEF2-KMT (gene names ETFBKMT and EEF2KMT) (14,17). Thus, we suggest that METTL12 is renamed CS-KMT (gene name CSKMT), in keeping with this established nomenclature.
[ "Biology" ]
E-payment instruments and welfare: The case of Zimbabwe

The development of e-payment systems has spurred the debate on the future of cash. However, for the average person, this is a moot point owing to the variety of available payment options. For low- and middle-income households, this remains a pertinent question, as most are dependent on cash, even in more advanced economies. As such, the rapid development of mobile money in developing countries is providing the poor with an important payment option. Namely, the difficulties associated with the absence of modern bank infrastructure are being overcome by increased access to mobile financial services and, in particular, the ability to pay for goods and services via mobile phones. 1 In addition, as technological advances have significantly improved the affordability of mobile phones on top of improving access, mobile payments can reduce the cost of providing financial services by up to 90%. 2 The literature shows that electronic payments are key to improving financial inclusion and achieving global development goals such as the United Nations' (UN) Sustainable Development Goals. The benefits are premised on the welfare-enhancing effects of digital payments, which reduce costs, the probability of loss and risk for low-income consumers, as well as improve access to formal financial services. This study thus investigates the conditions under which these welfare-enhancing gains can be obtained, using qualitative data from Zimbabwe. The severe liquidity constraints in Zimbabwe provide a good case for evaluating how well e-payments work, as the relative absence of cash has made the use of mobile money inevitable. Focus group data are analysed to understand participants' everyday experiences with the e-payment system in Zimbabwe.

Introduction

The development of e-payment systems has spurred the debate on the future of cash. However, for the average person, this is a moot point owing to the variety of available payment options. For low- and middle-income households, this remains a pertinent question, as most are dependent on cash, even in more advanced economies. As such, the rapid development of mobile money in developing countries is providing the poor with an important payment option. Namely, the difficulties associated with the absence of modern bank infrastructure are being overcome by increased access to mobile financial services and, in particular, the ability to pay for goods and services via mobile phones. 1 In addition, as technological advances have significantly improved the affordability of mobile phones on top of improving access, mobile payments can reduce the cost of providing financial services by up to 90%. 2

The literature suggests three building blocks of digital finance and related e-payments that are welfare enhancing. Firstly, efficient digital finance requires a robust and broad digital infrastructure, which includes widespread mobile connectivity and ownership, a national payment structure and a well-disseminated personal identification (ID) system. Many households in developing countries have a mobile subscription, with the ITU 3 showing that there are 103 mobile cellular subscriptions per 100 people in developing countries. Moreover, 78% of the population in sub-Saharan Africa is estimated to have at least 3G mobile network coverage. From a macro-perspective, the requisite infrastructure exists. Further, national payment and ID systems are largely well established in most sub-Saharan African countries. 4 Secondly, a dynamic and sustainable financial service includes efficient and relevant financial services regulation. Adequate regulation protects investors, consumers and governments, but must also ensure the existence of competition for the development of efficient, quality and diverse financial products. 5 Finally, there is a need to provide an array of financial products relevant to consumers. Whilst a wide range of services have been developed, most of these are in the formal sector. As a result, many people in less developed economies are unserved. The prevalence of informal services is an indicator that the available services are not relevant to some population segments, especially in sub-Saharan Africa.
This study thus uses qualitative data to understand the effects of e-payments and to evaluate the conditions under which e-payments can be welfare enhancing. The empirical evidence on whether the conditions discussed above are sufficient for households to fully benefit from e-payments is limited, as the literature on digital payments largely focusses on remittances and the macro-benefits of digital finance as a whole. Further, a significant amount of research is based on developed countries and investigates the future of cash as a payment instrument, 6,7,8,9 whilst the bulk of research on digital payment in Africa focusses on East Africa and M-Pesa in Kenya. These studies concentrate on the broad effects of M-Pesa on livelihoods and welfare. 10,11,12,13,i Moreover, these studies tend to focus on evaluating the effects of remittances. However, the literature is largely silent on the conditions under which e-payments can be welfare enhancing. Essentially, for the poor, the benefits associated with digital payments are linked to their ability to make e-payments. In other words, the associated reductions in costs and risk can significantly enhance welfare and improve their link to formal financial services. For example, the use of digital payments can create a digital footprint for users, which can increase transparency and the consequent access to credit. Understanding the conditions for welfare-enhancing payment systems is thus important from both an academic and a policy perspective. Additionally, the failure of M-Pesa in South Africa exemplifies the importance of understanding the workings of mobile money and e-payments, particularly in settings different from Kenya.

This study considers the conditions under which e-payments can be welfare enhancing by using qualitative data from Zimbabwe. The severe liquidity constraints in Zimbabwe provide a good case for evaluating how well e-payments can work, as the relative absence of cash has made the use of mobile money inevitable. Specifically, the Reserve Bank of Zimbabwe indicated that 96% of all official transactions were conducted electronically in 2017. 14 This study thus focusses on the actual use of mobile money rather than whether it will be used, as is the case for most studies on e-payments.

Literature review

E-payments are generally defined as any payments undertaken by using electronic means. They can also be defined as any payment alternatives to cash. 15 A more technical definition is that e-payments are the transfer of payment value from the payer to the recipient through an electronic means. 16
Welfare and e-payments

The literature on the welfare-enhancing effects of e-payments in Africa focusses almost exclusively on mobile money. This literature stream can be divided into two sub-streams: one focussing on money agents and the other on the welfare impacts of remittances. In the first case, the results indicate that mobile money generally enhances welfare. Money agents benefit from increased income by engaging in money trade and can thus increase their consumption of basic goods and services such as food, clothes and education. 19,20 However, the benefits from mobile money are negatively affected by fraud, poor information and network congestion. 19 The second literature stream focusses on the welfare effects on households, measured almost exclusively through remittances. 13,21 These studies show that the advent of mobile money has reduced the cost of remittances. Consequently, the level and frequency of remittances received by poor households have increased, which has enabled them to increase their consumption of both consumer and capital goods. For example, Kikulwe et al. 22 show that the farmers who received remittances and used mobile money utilised more market inputs, such as fertilisers and labour, which resulted in increased production. Peprah et al. 21 […] 19 attempt to determine these issues by identifying the challenges faced by money agents within M-Pesa. They find that a slow network, congestion and fraud reduce the benefits to money agents. Secondly, this study considers all e-payment instruments, whilst the literature stream that details the benefits of mobile payments focusses almost exclusively on remittances. Further, these studies are largely quantitative and provide a good measure of the impact of remittances on consumption and incomes, but they do not evaluate the challenges associated with using the various instruments, especially in an environment where systems and infrastructure may be imperfect. The next section reviews the literature on e-payments to identify relevant measures and provide a context for the empirical analysis.

Characteristics of e-payment instruments

E-payments are primarily designed to benefit consumers by improving convenience and lowering transaction costs. 2,23 Because e-payment instruments do not have the same type of guarantee as cash, they have emerged as prepaid instruments rather than money. The implication is that e-payment instruments are a form of social relations and thus rely on the acceptance of both consumers and retailers. For this reason, unlike cash, the use of e-payment instruments relies on networked two-sided markets, which include the payment service provider as well as the merchant. 24,25 Therefore, for consumers to increase their use of digital payments, the utilised instrument must be widely accepted. By design, networked goods are affected by complementarities, externalities, switching costs and economies of scale. As a result, e-payments have little value in isolation, as the utility derived from any e-payment system largely depends on other users also using the system. As more consumers use these instruments, the resulting network effects and externalities attract a critical mass on both sides of the market. Arango, Huynh and Sabetti 26 find that the probability of paying by cash is significantly lower when card payments are perceived as widely accepted.
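The critical-mass property of such two-sided markets can be sketched with a toy adoption model; both response functions below are invented for illustration and are not drawn from the cited studies.

    # Toy two-sided feedback: merchants accept when consumers adopt, and
    # consumers adopt where acceptance is wide. Functional forms are assumed.
    def equilibrium(s0, rounds=50):
        s = s0                              # initial consumer adoption share
        for _ in range(rounds):
            a = min(1.0, 1.5 * s)           # merchant acceptance responds to adoption
            s = a ** 2                      # adoption responds to acceptance
        return round(s, 3)

    print(equilibrium(0.20))   # -> 0.0: a small user base unravels
    print(equilibrium(0.60))   # -> 1.0: above critical mass, adoption snowballs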
Although consumers benefit from an increase in the number of users of the same or complementary goods or services, this can lead to market concentration and, in turn, result in a poor infrastructural set-up that limits consumer choice and the quality of services that consumers receive. 5,18 Related access channels critically affect competition and thus the quality and diversification of products received by consumers. 5

E-payments in the context of a developing country are, to a large extent, in the form of mobile payments rather than card payments. For example, in 2018 only 22% of the adults in developing countries made credit and debit card payments, compared with 80% of the adults in developed countries. 1 By contrast, mobile payments are significantly higher in developing countries. For instance, sub-Saharan Africa has the largest number of mobile accounts, but only 10% of adults have a bank account. Kenya has the highest number of active mobile accounts in Africa, and their transaction value accounted for 3.3% of the Gross Domestic Product (GDP) in 2009. 27 At the same time, Japan has the highest number of mobile accounts amongst the developed countries, but their transaction value was only 0.05% of the GDP in 2017. This is because the presence of established e-payment systems such as card payments in more advanced economies limits the penetration of mobile money. 28 For this reason, the discussion on e-payments in this article largely refers to mobile money.

The World Bank Group 29 argues that, in developing countries, the cornerstone of each payment is the transaction account, which can be hosted by a bank or an independent payment service provider. This is the account from which and to which payments are made. By design, these accounts require sufficient funds for payments to be made, essentially providing a value store. On the one hand, this benefits consumers, as it inadvertently 'forces' low-income households to save. On the other hand, these mobile accounts need to be efficient so as not to lead to restricted access for low-income households, as in the case of banks.

Unstructured supplementary service data (USSD) is the cheapest and most convenient mobile money service interface available for low-income consumers and thus the most commonly used in Africa. For it to work, mobile money providers need to collaborate with mobile network operators (MNOs). This can result in network congestion and a lower quality of voice and short message service (SMS) services. 30,31 Additionally, MNOs may restrict USSD access to their partner microfinance providers. For example, despite its dominance in the mobile network operating market in Zimbabwe, Econet only provides USSD services for ecocash. In cases where an MNO allows access to other providers, interconnection fees are usually very high. 5,32 This dominance generates significant market power and is reinforced by the network effects of two-sided markets. Therefore, consumers have no alternative providers to turn to in the face of inefficiencies and service failure.

The infrastructural needs for card payments are different, but nevertheless significant. As with mobile money, several players within the supply chain are involved in ensuring that the infrastructure for card payments works. For example, a basic payment at the point of sale requires the merchant to have a point-of-sale (POS) machine. The machine in turn requires access to an efficient network, which creates a supply chain within the payment system. The failure or malfunction at any one of these points can affect the finality of a card payment. Van Laere et al. 33 show that, when the critical infrastructure for card payments breaks down or malfunctions, there are negative welfare effects, as consumers fail to pay for basic items such as food, housing and medicine. This is exacerbated in a market where the main payment instrument for basic commodities is digital, such as in Zimbabwe. They also show that these failures affect vulnerable groups most and reduce trust in the overall payment system.
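A back-of-envelope calculation illustrates how quickly such a chain degrades: a card payment clears only if every link works, so the end-to-end success probability is roughly the product of the component reliabilities. The figures below are assumed for illustration, not measured values.

    # Sketch: end-to-end reliability of a card payment chain (assumed figures).
    pos_terminal = 0.99      # POS machine available and working
    merchant_link = 0.95     # merchant's network/Internet connection
    payment_switch = 0.98    # interbank switch
    issuer_host = 0.98       # card issuer's host system

    p_success = pos_terminal * merchant_link * payment_switch * issuer_host
    print(f"End-to-end success probability: {p_success:.3f}")   # ~0.90
    # Even modest per-link failure rates compound to roughly one failed
    # attempt in ten, consistent with frequent incomplete transactions.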
E-payments are perceived to have lower costs of transmission and management, 2,34 which reduces the overall cost of providing electronic financial services for payment service providers, who can then pass these benefits on to consumers. 2,35 Moreover, because consumers can access mobile financial services remotely, the need to travel to banks, which are often very far away for most low-income households, is reduced. Further, the digital storage of mobile money means that the poor can reduce the risk of loss and theft. 36 Therefore, e-payments are particularly beneficial for rural and remote areas, which is why they have generated significant interest as a vehicle for the expansion of financial inclusion in developing countries. 37,38 Klapper and Singer 15 emphasise the importance of digital payments in increasing financial inclusion and for the achievement of several Sustainable Development Goals. An efficient e-payments system must thus reduce costs for consumers.

The literature also argues the importance of security. Although e-money reduces the risk of theft and loss, e-payments are not immune to fraud and personal information theft. In 2019, Symantec 39 showed that the financial sector was the most targeted by spear-phishing attacks. Therefore, an e-payment system must have an increased level of security. Accordingly, to a large extent, the level of security concerns can be linked to the level of transaction finality. For example, fraudsters prefer to attack payment instruments that have a rapid final settlement because this limits the possibility of countermanding stolen funds. 40 Correspondingly, consumers and retailers prefer payment instruments with immediacy to meet retailer needs. 41 Whilst e-payments have a high level of finality and immediacy, which benefits the customer, they also increase the potential risks associated with them. Based on this review, the welfare effect of e-payments can be evaluated based on how well an e-payment system satisfies the basic characteristics of a payment instrument. These characteristics are summarised in Table 1.

Research approach and questions

This study used a qualitative method to explore the conditions under which e-payments can be welfare enhancing by probing participants' experiences with various payment methods in a liquidity-constrained economy - Zimbabwe.

Participants and data collection

The data were collected through focus group discussions in April 2019. Three separate group discussions were held for groups with different demographics. The first group comprised university employees and students. The second group comprised factory employees, including two managers. The third group comprised casual workers and shop-floor employees in a peri-urban location. Despite the different demographics of these groups, the discussions soon reached saturation, as all three groups brought up almost the same information.
Participants shared various experiences at the POS, with services such as transport, and with transferring money to relatives in villages in remote areas. These experiences were recorded and transcribed. Emerging themes were very similar across the narratives.

Data trustworthiness

To start with, the nature of the study required a neutral moderator, as the enquiry is driven by a nationwide phenomenon. Therefore, the focus groups were facilitated by two moderators: one from Zimbabwe and one from outside Zimbabwe. Further, the audio of each discussion was recorded and transcribed verbatim in the language in which the discussion took place. All three discussions used a mixture of Ndebele, Shona and English. The transcripts were then translated into English. To ensure consistency and accuracy, different individuals were used to transcribe and translate. Moreover, as Korstjens and Moser 42 outline, credibility is required to ensure confidence in the research findings, in that the inferences drawn are an accurate representation of participants' original meaning or views. Member validation was used by sharing the facilitators' understanding and interpretations with the participants. Additionally, triangulation was used by consulting secondary sources, especially cases reported in the media.

A brief on the payment systems in Zimbabwe

In Zimbabwe, the National Payment System has been in place since 2001 and is monitored by the Reserve Bank of Zimbabwe. In 2009, a period of record hyperinflation led the country to abandon the local currency in favour of five foreign currencies as legal tender. Of these, the United States (US) dollar was chosen as the official currency. This was followed by a period of worsening cash shortages. In 2016, the Reserve Bank started to push for an electronic-based payment system. 14,43,44 The result was a large shift of official transactions to digital payments. The Reserve Bank declared that 96% of the official payments in 2017 were made electronically. Additionally, retail e-payment transactions increased, with transaction values and volumes rising by 216% and 343%, respectively. 14,43 In 2019, the Reserve Bank of Zimbabwe announced that, in volume terms, more than 99% of transactions were conducted electronically.

Data analysis

The focus group discussions were recorded and transcribed verbatim. The transcripts were then used to identify the themes related to the experiences of the participants and how these, in turn, underscore the nature of the infrastructure required for a welfare-enhancing e-payment system. The discussion questions were deliberately designed to allow the participants to explore all avenues of their experiences. The identification of themes was therefore inductive, relying on participants' narratives. The data from each focus group were categorised by using key phrases. These categories were then organised into themes. Table 1-A1 shows a sample of relevant phrases and how they were categorised. The emergent themes are used to structure the discussion around the experiences of the participants with e-payments. These experiences were then evaluated considering the prerequisites discussed earlier to determine how they played out in the Zimbabwean context and impacted consumer welfare.

Results and discussion

The payment system in Zimbabwe centres around four modes of payment: cards (mainly debit cards), mobile money, Internet-based bank payments and multiple-currency cash.
Despite the official data showing a high usage of e-payment instruments, the participants indicated that their preferred method of payment is cash. The US dollar and South African rand are the preferred currencies. The low levels of local production, which failed to meet consumer demand, have resulted in import dependency and an increased demand for, and a resultant shortage of, foreign exchange. The government introduced measures to curb the impact of US dollar shortages and promote the widespread use of other currencies in the currency basket. These measures were introduced in 2016 and required 40% of all new US dollar foreign exchange receipts from exports to be converted to rands and euros at the official rate. This led to an increased preference for payments in US dollars and South African rands by retailers to circumvent the restrictions imposed by the Reserve Bank. Next to cash, real-time gross settlement (RTGS) bonds have circulated in the country as a surrogate currency since 2016 and are preferred above e-payments. The preference for cash is driven by several intertwined factors, including the high costs associated with e-payments, poor infrastructure, eroded productivity within the country and the lack of acceptance by some retailers. These form the basis for participants' evaluation of the conditions under which e-payments impact their welfare.

Costs associated with e-payments

The costs cited by participants included premiums charged by retailers on e-payments, the 2% tax by the government, the charges levied by payment system providers and the opportunity costs associated with a poor infrastructure. The larger and more established supermarkets accept all forms of payment, and their pricing is uniform across these various modes of payment, whilst informal retailers and small shops prefer cash and use a multi-tier pricing system. Larger shops are considered expensive for many of the goods purchased by middle- and low-income households. Most consumers thus prefer to purchase their goods from informal retailers and corner shops, which are perceived to be cheaper and more conveniently located. A survey of prices showed that the cost differences could be significant, as shown in Table 2. One participant indicated that: 'It's cheaper to buy from the market the same things you buy from the supermarket. It's cheaper on the street except that you need cash to buy from the street. Besides, you know if you buy a bag of potatoes it is sold at 14 Zimbabwean dollars in the supermarket. If you go to the market, it's 9 Zimbabwean dollars. So I will be trying to save money to make it stretch. So, we prefer cash to buy from the market.' (P1, female, seamstress)

The preference for cash is also driven by the costs associated with alternative payment instruments. Contrary to the general indications in the literature, the perception in Zimbabwe is that e-payments are costly. Consumers face three points of charges when making payments, as follows. Firstly, the government introduced a 2% tax on mobile e-payments in 2017; the previous charge was 5 cents per transaction. ii Secondly, the payment service providers levy their own charges on each transaction.

The high demand for cash, which is constrained in terms of supply, has led to the development of a parallel market for cash, where both foreign and local currencies are sold in exchange for electronic money. Many consumers use their wages and salaries on this market to buy cash at a premium of between 15% and 20%.
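Rough arithmetic, using the figures reported above (the 2% tax, a 20% cash premium and the 14 vs. 9 Zimbabwean dollar price gap in Table 2), shows why buying cash at a premium can still leave a household better off; the sketch below is illustrative only.

    # Sketch: baskets affordable from a 100-dollar electronic balance.
    e_money = 100.0
    price_supermarket = 14.0    # bag of potatoes, formal shop (e-payments accepted)
    price_market = 9.0          # same bag at the informal market (cash only)
    tax = 0.02                  # government tax on e-payments
    premium = 0.20              # parallel-market premium paid to obtain cash

    baskets_epay = e_money / (price_supermarket * (1 + tax))
    baskets_cash = (e_money / (1 + premium)) / price_market
    print(f"Pay electronically at the supermarket: {baskets_epay:.1f} baskets")  # ~7.0
    print(f"Buy cash at a premium, shop informally: {baskets_cash:.1f} baskets") # ~9.3
    # Even after a 20% premium, cash buys about a third more, which is why
    # salaries are converted to cash on the parallel market.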
Many of the participants indicated that this accounts for the high level of e-payments reflected in official government documents. One of the participants explained:

'I never got my salary from the bank this month and already have now spent from my account. I used everything but never got cash from the bank.' (P3, male, shopkeeper)

Another narrated:

'Some of these transactions are done by us to get cash. We will be transacting using these methods to get cash at a premium, so that we can pay cash in the informal market. So the government can't really track these transactions. We use e-payments as a means to get cash, then we use cash in the market.' (P4, male, university lecturer)

The cash shortage affects rural areas comparatively more. Whilst remittances such as ecocash are quite efficient, the use of mobile money for payments is negatively affected by the constraints above. The recipients of remittances choose to buy cash at a premium rather than pay for goods and services using ecocash. The premium on cash in rural areas can be up to 20%, in addition to the payment service provider charges paid for the transfer. The main reason cited is the tax and base charges levied by the payment service providers. The money agents in rural areas are often shop owners. Therefore, the recipients of mobile money are forced to make a minimum purchase in the shop before they can withdraw their money. A participant gave an example:

'When she goes for collection, they can't give her 10 Zimbabwean dollars, they say we will give you 5 and then you have to buy ABC from his shop for the other 5.'

Convenience
Despite the general acceptance of mobile payments, their usage is hampered by infrastructural capacity. All participants lamented the frequent network failures associated with both mobile and card payments. On the one hand, mobile payments rely on the efficiency of MNOs, which is not always the case. This problem is pronounced in rural areas, being exacerbated by their geographical remoteness. On the other hand, card payments rely on Internet efficiency, which is frequently lacking. As a result, whilst all e-payments are hampered by the frequent network failures, the mobile network outages are less frequent than the Internet network ones. The main effect of network inefficiency is that transactions are often incomplete. Whilst retailer points indicate that payments have been declined, payment service providers often show that money has been deducted from the consumer's account.

Security and trust
Despite the high costs and network problems associated with mobile money, many participants indicated that mobile money is more secure than both cash and bank deposits:

'The advantage we have with ecocash is that it protects our money. It is safe because you don't have to go around carrying cash. You can lose your phone but because your cash is recorded somewhere you will be able to access your cash by using another phone. Moreover, it is safer and quicker to send money to the village.' (P3, male, shopkeeper)

Many participants remit money to relatives in rural areas and find that, despite the associated costs, it is the most convenient way to remit. They added that owing to the poor infrastructure in rural areas, the only alternative to mobile money is to send money through bus drivers, who charge 10% to take the cash. When added to the cost of obtaining the cash, it makes this alternative too expensive. Further, bus drivers have been known to abscond with the money. The real-time nature of ecocash for remittances also makes it preferable to cash.
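The channel choice participants describe can be put side by side as a rough cost comparison. The sketch uses only the rates quoted above (up to 20% cash premium, a 10% bus-driver fee); the combined transfer-fee rate is a placeholder.

```python
def mobile_money_cost(amount, transfer_fees=0.05, cashout_premium=0.20):
    """Cost for a rural recipient to end up with cash sent via mobile money.
    transfer_fees is a placeholder for the combined tax and PSP charges."""
    return amount * transfer_fees + amount * cashout_premium

def bus_driver_cost(amount, driver_fee=0.10, cash_premium=0.20):
    """Cost of sending physical cash: the sender first buys cash at a
    premium, then pays the driver 10% to carry it."""
    return amount * cash_premium + amount * driver_fee

amount = 100
print(mobile_money_cost(amount))  # 25.0
print(bus_driver_cost(amount))    # 30.0 -> dearer, slower (up to 3 days), riskier
```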
When physical cash is handed over to drivers, it can take up to 3 days for the recipients to get it. However, the lengthy and cumbersome refund processes have eroded faith in the e-payment system. Several participants felt that the suspense accounts used to hold money when a transaction is declined were used illegally by banks and payment service providers to defraud consumers:

'I believe that the money then forms a suspense account, which is then used illegally by these service providers.' (P7, male, university student)

These problems have also eroded trust in the banking system. One participant said:

'It is no longer safe to keep your money in the bank in Zimbabwe because you will go through countless bank charges. There is no interest. They even charge you for checking your bank balance on your mobile phone. They charge you about 39 cents.' (P8, female, factory worker)

Livelihoods
Participants also indicated that the inefficient payment system has negative effects on their livelihoods. The dependence on imports and the preference for payments in US dollars mean that prices are constantly increasing owing to changes in the exchange rate. Although prices are also indicated in bonds, these are directly linked to the exchange rate. Further, many small shops vary their prices depending on the instrument used by the consumer. The prominence of a parallel market for cash has inevitably led to frequent increases in basic commodity prices. By August 2019, the inflation rate reached 300%, returning the country to pre-dollarisation levels and making Zimbabwe the world's most inflationary economy. The effects on livelihoods can be significant for low-income households. Participants complained that their salaries are very low and, if a transaction 'hangs', they must find alternative sources of money to pay for groceries whilst they wait for the lengthy resolution process to be completed. The reversal times are much longer for bankcards than for ecocash. Whilst mobile payment reversals take between 48 and 72 hours, bankcard and bank mobile payment reversals can take up to 4 weeks. Once the consumer has submitted a letter to the bank, the bank will issue the statement upon payment of a fee, which adds to the costs incurred by the consumers. One of the participants indicated the need to use smaller shops that do not accept e-payment instruments:

'Bigger shops tend to be more stable when it comes to pricing, and they seem to conform to official rates, as required by the central bank. Small shops are highly sensitive to events in the money market and, therefore, change their prices and payment terms frequently. But sometimes you find that things that are not in bigger shops are there in the smaller shops.' (P4, male, university lecturer)

Another further explained why this was so important:

'Also, you must remember that most of them get their goods from outside the country. So they need the foreign exchange to pay for their stock. So, they sell their goods in cash and foreign exchange. For example, a bag of potatoes at the supermarket can be 14 Zimbabwean dollars and 9 Zimbabwean dollars at the vendors.' (P9, male, factory worker)

E-payments and welfare
The experiences of participants have indicated that e-payments in Zimbabwe have had both welfare-enhancing and welfare-reducing effects. The framework shown in Table 3 is used to summarise the findings. However, in line with the literature, the largest benefit from e-payments is from remittances. 13,21
E-payment instruments, especially mobile money, enable consumers to send money cheaply and securely to relatives. This indirectly affects the welfare of consumers through the welfare of their extended families. Moreover, mobile money is cited as being very secure relative to other forms of payment. These positive effects are countered by the high transaction costs. The lack of alternatives places a burden especially on low-income consumers. As previously indicated, both remittances and purchases can attract up to 20% in implicit and explicit surcharges on a single transaction. Chiroga et al. 48 and the World Bank Findex 49 indicate that the greatest benefit of mobile money to the poor is the reduction of payment costs. However, consumers in Zimbabwe are not benefitting from these reduced costs, even for remittances. In an efficient system, the immediacy and finality of e-payments provide significant benefit to consumers. In this case, we find that the infrastructure is highly congested and transactions frequently fail at the POS. This poor quality of infrastructure and services can also be attributed to the lack of competition within the market, which leads to congestion. The Zimbabwean financial sector and mobile network show significant concentration. For example, Econet's ecocash held about 95% of the mobile payment market in October 2019 and processed 99.7% of all mobile transactions in the last quarter of 2019. 50,51 Its dominance is reflected by the fact that mobile money is generally referred to as ecocash, despite the presence of two other mobile money providers. The networked nature of mobile money implies that switching costs are high, especially given the very few alternatives. The quality of service has further been eroded by electricity shortages. Mobile money is generally accepted, but not as widely by street vendors. For instance, Tacoli, 52 Skinner and Haysom 53 and Patel et al. 54 highlight the importance of street vending as a source of food security and livelihood for the poor. Therefore, any payment instrument that fails to be generally accepted amongst street vendors is likely to disadvantage the poor.

Conclusion
The study investigates whether the use of e-payments in Zimbabwe is welfare enhancing. Whilst the government highlighted the successful mass transition to e-payments to circumvent the persistent cash shortage, anecdotal evidence suggests that this shift may be out of obligation and may not be fully benefiting consumers. Consumer perspectives were evaluated by using a focus group setting, which showed that despite the infrastructure, national payment and ID systems being in place, the malfunctions of these systems often lead to high transaction costs for consumers. Moreover, the poor infrastructure also limited the acceptance of e-payment instruments and eroded the faith of consumers in the National Payment System. This resulted in a strong preference for foreign currencies, particularly the US dollar and the South African rand. We conclude that e-payments are welfare enhancing through remittances, but largely welfare reducing owing to the persistent infrastructure and system failures. It is therefore not sufficient to have the prerequisites of digital finance in place, as the quality of these prerequisites matters as well. In Zimbabwe, access to the network was overridden by its poor quality, resulting in negative welfare effects.
Despite the presence of the well-established National Payment System, the lack of competition in the e-payment market has resulted in poor quality services and limited options for consumers. Econet's ecocash holds more than 90% of the mobile network market, which means it has significant market power. Moreover, the dominance of Econet on both the mobile network and mobile money markets has led to significant congestion, eroding consumer experience with both services. The experiences of consumers suggest that the regulatory framework in the country has not sufficiently addressed the need for increased competition. Moreover, the participants complained significantly about the lack of transparency in terms of pricing and product options. This resulted in increased search costs for consumers, who often opted to stay with the same provider, further consolidating market concentration. Finally, the lack of faith in the National Payment System has broader macroeconomic implications. The preference for foreign currencies as a mode of payment has continued to fuel inflation. Merchants link their local prices to the exchange rate in real time. The continued increase in demand for foreign exchange vis-à-vis the supply is likely to maintain the high inflation rate. The government thus needs to put measures in place to restore faith in the payment system and curb inflationary expectations.

Author's contribution
M.S. is the sole author of this research article.

Ethical consideration
Ethics clearance was obtained through the University of Fort Hare (clearance number: SIM003). Ethical approval was sought and granted, and the purpose of the study was clearly explained to the participants. Each participant gave individual consent for participating in the focus groups. No names were recorded and participants were informed of their right to leave the focus groups at any time if they so wished.

Funding information
The data collection for this article was in part funded by the Nedbank Chair in Economics as well as the Seed Fund from the University of Fort Hare's Govan Mbeki Research and Development Centre.

Data availability
The authors confirm that the data supporting the findings of this study are available within the article.

Disclaimer
The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any affiliated agency of the author.

Table 1-A1: Sample of relevant phrases, categories and themes.

Phrase | Category | Theme
'Through the bank to get the US dollars, it takes a long procedure.' | Cash shortage | Cost
'The downside of the e-payment system is the very high charges that they charge us.' | High charges | Cost
'Ecocash is expensive too because you are charged twice. The government and ecocash take their share. As a result, you are left with nothing from your salary because of the charges.' | Ecocash charges; loss of real income | Cost; Welfare
'For you to get $10 cash you may be charged $13 on ecocash. The additional $3 is nothing but bank charges which is about 30% of the actual sent amount.' | Charges on ecocash | Cost
'Ecocash has also a disadvantage of poor network connectivity.' | Poor network | Infrastructure
'You sometimes get to the shop and fail to transact because the network is poor.' | Poor network | Convenience
'Ecocash agents are there available in the rural areas, but the problem with village people they now charge you to access your cash because of the cash unavailability in Zimbabwe.' | Double charge | Cost
'These charges are in addition to the normal service charges and the 2% statutory tax.' | Double charge | Cost
'When she goes for collection, they can't give her $10, they say we will give you $5 and then you have to buy ABC from his shop for the other $5.' | Double charge | Cost
'First and foremost, sending hard cash has its own risks.' | Risks of sending cash | Security
'That would be worse because the person you are sending the money will be charged several times. Every time they use the money, they will incur charges per transaction.' | Double charge of ecocash | Cost
'You fail to transact because they won't have network.' | Poor network | Infrastructure
'I don't think it is network; they are using our cash while it is in suspense.' | Distrust of PSP | Trust
'We end up losing trust in these electronic payments.' | Loss of trust | Trust
'One of the things that I am concerned with is about these 3 weeks of having your money hanging somewhere.' | Suspense account | Trust
'It is no longer safe to keep your money in the bank in Zimbabwe because you will go through countless bank charges.' | Not safe in banks | Trust
'Besides swiping, there are some things where I need to use cash. For example if I get cash, I need transport and I need to buy tomatoes. Where am I going to get the cash for that?' | Need for cash in some retailers | Acceptance
'I did and we went to ecocash wanting to get the statement showing that the money indeed gone out of the account. It is the same case again with swiping. If it happens with swiping, going to the bank to get the bank statement will take you a long period of time. It will take over 4 weeks for the money to be restored.' | Payments not going through | Infrastructure

Ecocash and card payments
'Yes, there are cases of theft. If your phone is stolen, you can retrieve your records, ecocash account and money in the account. If you are carrying cash and you get robbed, you cannot recover the cash. The disadvantage is that in sending money both the sender and the receiver are charged. Sometimes to get cash you have to add some money.' | Preference for vendors who don't accept cards | Acceptance
'Small shops are highly sensitive to events happening in the money market and therefore change their prices and payment terms frequently.' | Price discrimination | Welfare

PSP, payment service provider.
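The categorisation behind Table 1-A1 can be illustrated with a minimal sketch. The three coded fragments below are shortened stand-ins for rows of the table; the grouping logic (phrases under categories, categories under themes) mirrors the analysis described in the Data analysis section.

```python
from collections import defaultdict

# (phrase, category, theme) triples, abbreviated from Table 1-A1.
coded = [
    ("The downside of the e-payment system is the very high charges", "High charges", "Cost"),
    ("You sometimes get to the shop and fail to transact", "Poor network", "Convenience"),
    ("We end up losing trust in these electronic payments", "Loss of trust", "Trust"),
]

# Organise categories (with their supporting phrases) under each theme.
themes = defaultdict(lambda: defaultdict(list))
for phrase, category, theme in coded:
    themes[theme][category].append(phrase)

for theme, categories in themes.items():
    print(theme)
    for category, phrases in categories.items():
        print(f"  {category}: {len(phrases)} phrase(s)")
```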
8,898.4
2020-12-08T00:00:00.000
[ "Economics" ]
Evaluation of Amyloid Polypeptide Aggregation Inhibition and Disaggregation Activity of A-Type Procyanidins

The number of people worldwide suffering from Alzheimer's disease (AD) and type 2 diabetes (T2D) is on the rise. Amyloid polypeptides are thought to be associated with the onset of both diseases. Amyloid-β (Aβ), which aggregates in the brain, and human islet amyloid polypeptide (hIAPP), which aggregates in the pancreas, are considered cytotoxic and the respective causes of the development of AD and T2D. Thus, inhibiting amyloid polypeptide aggregation and disaggregating existing amyloid aggregates are promising approaches to the therapy and prevention of both diseases. Therefore, in this research, we evaluated the Aβ/hIAPP anti-aggregation and disaggregation activities of A-type procyanidins 1–7 and their substructures 8 and 9 by conducting structure–activity relationship studies, and identified the active site. The thioflavin-T (Th-T) assay, which quantifies the degree of aggregation of amyloid polypeptides based on fluorescence intensity, and transmission electron microscopy (TEM), employed to directly observe amyloid polypeptides, were used to evaluate the activity. The results showed that catechol-containing compounds 1–6 exhibited Aβ/hIAPP anti-aggregation and disaggregation activities, while compound 7, without catechol, showed no activity. This suggests that the presence of catechol is important for both activities. Daily intake of foods containing A-type procyanidins may be effective in the prevention and treatment of both diseases.

Introduction
The number of people worldwide suffering from Alzheimer's disease (AD) and type 2 diabetes (T2D) is on the rise, posing serious health problems in aging societies. Numerous studies have shown that a relationship exists between AD and T2D [1]. AD and T2D share many common pathophysiological features, including aggregation of amyloid polypeptides with an intermolecular β-sheet structure and increased oxidative stress [2][3][4]. Amyloid β (Aβ) and human islet amyloid polypeptide (hIAPP) are the amyloid polypeptides responsible for AD and T2D, respectively [5][6][7]. hIAPP consists of 37 amino acids and is secreted from pancreatic β cells, while Aβ consists of 36–43 amino acids and is produced from amyloid precursor protein in the brain [8]. Aβ and hIAPP show sequence identity (25%) and similarity (50%) [9]; both amyloid polypeptides aggregate through a similar structure called the cross-β-sheet structure via the nucleation–elongation phase [10]. However, the secondary structure distributions of Aβ and hIAPP are different [11]. These aggregates attack cells in various ways [12], causing atrophy of the cerebrum and hippocampus in the brain and insulin deficiency in the pancreas. Furthermore, recent studies have shown that hIAPP is mixed in senile plaques, aggregates of Aβ, present in the brains of AD patients [13]. On the other hand, Aβ has been found to aggregate in the pancreas of transgenic mice expressing both Aβ and hIAPP [14]. Therefore, compounds that can inhibit the aggregation of both amyloid proteins would be effective drugs for the prevention and treatment of both diseases. It has been reported that epigallocatechin gallate and resveratrol display Aβ/hIAPP aggregation inhibitory activities, and considerable attention has been devoted toward polyphenols, which are abundant in various foods [15][16][17][18].
It is well known that several natural compounds, including polyphenols, can control hIAPP aggregation, and many such polyphenols have antioxidant activity; while the hydrophobic and aromatic properties of polyphenols inhibit the formation and elongation of amyloid fibrils, their antioxidant capacity has been found to promote the destabilization of fibril aggregates [19]. Moreover, it has long been suggested that catechol is involved in the inhibition of Aβ aggregation, and a recent structure–activity relationship study using three tyrosol ligands also showed aggregation-inhibiting activity with catechol, which was attributed to the stabilization of the Aβ–ligand interaction by H-bonding of the hydroxyl group of catechol to Glu22 [20]. We have previously reported that caffeoylquinic acid, phenylethanoid glycoside, and hispidin derivatives inhibit Aβ42 aggregation [21][22][23][24]. We have also recently shown that kukoamines A and B, schizotenuin A, lycopic acids, rosmarinic acid, and clovamide exhibit inhibitory activity against Aβ/hIAPP aggregation [25][26][27][28][29]. These compounds, which inhibit Aβ42/hIAPP aggregation, all contain a catechol moiety, and catechol-type polyphenols can potentially inhibit amyloid protein aggregation. In this study, we focused on A-type procyanidins, which have two catechols, and investigated their effects on the aggregation of amyloid proteins. A-type procyanidins are found in peanut skin and consist of (+)-catechin or (−)-epicatechin units. In addition, to identify the active site, anti-aggregation activity tests against Aβ42/hIAPP were performed and structure–activity correlations were examined using A-type procyanidins 1–7 and their substructures 8 and 9 (Figure 1). Furthermore, the degradation of already-aggregated amyloid polypeptides (disaggregation activity) was also evaluated, as well as the antioxidant activity of these compounds.
Evaluation of Aβ42 Aggregation Inhibitory Activity of Compounds 1–9
To assess the ability of synthetic A-type procyanidins 1–7 [30] and their substructures 8 and 9 to inhibit Aβ42 aggregation, the thioflavin-T (Th-T) assay was conducted (Figures 2 and S1). In this study, (−)-epigallocatechin gallate (EGCG), the activity of which has been reported in previous studies, was used as the positive control [15,17,31]. The IC50 values for these compounds are shown in Table 1. Aβ42 aggregation was inhibited in a concentration-dependent manner by all the compounds except compound 7. The Aβ42 aggregation inhibitory activity of these compounds was as follows: 1, 2, 3, 4, 5, and 6 > 8 and 9 >> 7. These results suggest that compounds containing two catechols are more active than those with one (the activity is proportional to the number of catechols) and that the presence of catechol is important.

To confirm the results of the Th-T assay, Aβ42 fibrils were observed directly using TEM (Figures 3 and S2). In the case of the Aβ42-only reaction solution (no compound added), it was confirmed that many Aβ42 aggregates were spread out in a mesh-like pattern. Similar results were obtained for compound 7, which showed no activity in the Th-T assay. By contrast, compounds 1–6, 8, and 9, which showed activity in the Th-T assay, gave rise to reduced aggregation compared to Aβ42 alone. Furthermore, these results show that compounds with two catechols are more active than those with one. These results support the results of the Th-T assay.

Evaluation of hIAPP Aggregation Inhibitory Activity of Compounds 1–9
To assess the ability of synthetic A-type procyanidins 1–7 and their substructures 8 and 9 to inhibit hIAPP aggregation, the thioflavin-T (Th-T) assay was conducted (Figures 4 and S3). The IC50 values for these compounds are shown in Table 1. hIAPP aggregation was inhibited in a concentration-dependent manner by all the compounds except compound 7.
The hIAPP aggregation inhibitory activities of these compounds were as follows: 1, 2, 3, 4, 5, and 6 > 8 and 9 >> 7. These results suggest that the presence of catechol is important for activity and that the activity increases in proportion to the number of catechols. The hIAPP aggregation inhibitory activity of each compound was higher than its Aβ42 aggregation inhibitory activity, but the overall trend was similar to that of the Aβ42 aggregation inhibitory activity.

To confirm the results of the Th-T assay, the hIAPP fibrils were observed directly using TEM (Figures 5 and S4). In the case of the hIAPP-only reaction solution (no compound added), it was confirmed that numerous hIAPP aggregates were distributed in a mesh-like pattern. Similar results were obtained for compound 7, which showed no activity in the Th-T assay. In contrast, compounds 1–6, 8, and 9, which showed activity in the Th-T assay, gave rise to reduced aggregation compared to hIAPP alone. Furthermore, these results indicated that compounds with two catechols were more active than those with one. These results support the results of the Th-T assay.

Evaluation of Disaggregation Activity of Compounds 1, 3, 5, 7, and 8 on Pre-Existing Aβ42 Aggregates
To assess the disaggregation ability of compounds 1, 3, 5, 7, and 8 on Aβ42 aggregates, the thioflavin-T (Th-T) assay was conducted (Figures 6 and S5). These compounds were selected based on the number of catechols they contained, their steric structure, and their constituent units. The EC50 values for these compounds are shown in Table 2. For all the compounds except compound 7, Aβ42 aggregates were disaggregated concentration-dependently. The disaggregation activities of these compounds on Aβ42 aggregates were as follows: 1 and 5 > 3 and 8 >> 7. These results suggest that the presence of catechol is important for Aβ42 disaggregation activity. On the other hand, the Aβ42 disaggregation activities showed a different trend from the aggregation inhibition activities, as there was a difference in activity even when the number of catechols was the same. (Table 2, note a: EC50 values were calculated based on the disaggregation effective rate (%) of amyloid polypeptide aggregates by Th-T assay after 24 h at varied compound concentrations.)

To confirm the results of the Th-T assay, the Aβ42 fibrils were observed directly using TEM (Figures 7 and S6). In the case of the Aβ42-only reaction solution (no compound added), the presence of copious aggregates of Aβ42 distributed in a mesh-like pattern was confirmed. Similar results were obtained for compound 7, which showed no activity in the Th-T assay. In contrast, reduced aggregation was noted in the presence of compounds 1, 3, 5, and 8, which showed activity in the Th-T assay, compared to Aβ42 alone. These results support the results of the Th-T assay.

Evaluation of Disaggregation Activity of Compounds 1, 3, 5, 7, and 8 on Pre-Existing hIAPP Aggregates
To assess the disaggregation ability of compounds 1, 3, 5, 7, and 8 on hIAPP aggregates, the thioflavin-T (Th-T) assay was conducted (Figures 8 and S7).
The EC50 values for these compounds are shown in Table 2. For all the compounds except compound 7, hIAPP aggregates were disaggregated concentration-dependently. The disaggregation activities of these compounds on hIAPP aggregates were as follows: 1 and 3 > 5 and 8 >> 7. This suggests the importance of the presence of catechol for the disaggregation of hIAPP. Moreover, the hIAPP disaggregation activity showed a different trend from the aggregation inhibition activity, as there was a difference in activity even when the number of catechols was the same.

To confirm the results of the Th-T assay, the hIAPP fibrils were observed directly using TEM (Figures 9 and S8). In the case of the hIAPP-only reaction solution (no compound added), the presence of numerous aggregates of hIAPP distributed in a mesh-like pattern was observed. Similar results were obtained for compound 7, which showed no activity in the Th-T assay. In contrast, in the presence of compounds 1, 3, 5, and 8, which showed activity in the Th-T assay, aggregation was reduced compared to hIAPP alone. These results support the results of the Th-T assay.
Evaluation of Antioxidant Activity of A-Type Procyanidins and Their Related Compounds
To assess the antioxidant potential of compounds 1–9, the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical-scavenging assay was conducted. The IC50 values for these compounds are shown in Table 3. All of the compounds, except compound 7, exhibited radical-scavenging activity, which increased in a concentration-dependent manner, and showed a high antioxidant rate at a concentration of 50 µM. These results suggest that the presence of phenolic hydroxyl groups is important for radical-scavenging activity. In addition, the antioxidant activities among the A-type procyanidins were comparable.

Table 3. Efficacy of compounds 1–9 against DPPH free radical.

Discussion
In this research, we examined the effects of compounds 1–9 on the aggregation and disaggregation of Aβ42 and hIAPP, as well as their antioxidant properties. The results of structure–activity relationship studies of A-type procyanidin derivatives confirmed that catechol is important for the inhibition of Aβ42 aggregation. In addition, compounds bearing two catechols were more active than those with one catechol.
This trend is consistent with previous research showing that polyphenols with multiple catechol moieties exhibit higher Aβ42 aggregation inhibitory activities [21][22][23][24][25][26][27][28]. In addition, it was surmised that the steric structure and constituent units did not have a significant effect on the activity. Catechol was also important for hIAPP aggregation inhibitory activity, which followed a similar trend to that of Aβ42 aggregation inhibitory activity. This tendency is consistent with the results of previous studies [25][26][27][29]. However, the IC50 values for hIAPP aggregation were higher than those for Aβ42 aggregation. This may be due to the differences in the amino acid sequence and 3D structure between Aβ42 and hIAPP, which affect their affinity for the compounds. The catechol moiety readily auto-oxidizes to form o-benzoquinone, which may covalently bind to nucleophilic amino acid residues of amyloid proteins (Michael addition and Schiff base formation) and destabilize the β-sheet structure [32][33][34]. The fact that the activity increased in proportion to the number of catechols is thought to be due to this mechanism. On the other hand, compound 7 is suggested to have no amyloid polypeptide aggregation inhibitory activity because o-benzoquinone is not formed from compound 7. Previous research has indicated that π–π stacking interactions between amino acid residues of Aβ42 and the aromatic ring of the inhibitor compounds, as well as hydrogen bonds, are possible factors that govern the inhibition of Aβ42 β-sheet formation [35]. However, compound 7, which has four aromatic rings and no phenolic hydroxyl group, did not show amyloid polypeptide aggregation inhibitory activity, suggesting that π–π stacking interactions may not be involved in the amyloid polypeptide aggregation inhibition mechanism of A-type procyanidin derivatives. On the other hand, the bulkiness of the methyl groups of 7 may prevent it from entering the space between amino acid residues; we therefore plan to examine the inhibitory activity of A-type procyanidins on amyloid polypeptide aggregation under conditions where catechol is not oxidized, by adding a reducing agent. A more detailed analysis will be conducted in the future.

For the Aβ42/hIAPP disaggregation activity of A-type procyanidin derivatives, structure–activity relationship studies indicated that the presence of catechol is important for their activity. However, unlike the Aβ42/hIAPP aggregation inhibitory activity, the Aβ42/hIAPP disaggregation activity results suggest that the steric structure also contributes significantly to the activity. The reason for these differences is that, unlike monomers, amyloid polypeptides form aggregates and access to them is restricted. Moreover, several compounds showed different activities against each aggregate, which may be due to differences in accessibility resulting from differences in the steric structure and secondary structure distribution of Aβ aggregates and hIAPP aggregates. Catechin and epicatechin have been reported to destabilize Aβ fibrils, and several other aromatic compounds have been reported to degrade amyloid polypeptide fibrils. However, the disaggregation mechanism remains unclear; therefore, it is necessary to clarify this mechanism in the future.
The results of the DPPH radical-scavenging activity test confirmed the antioxidant activity of all the A-type procyanidin compounds except compound 7, suggesting the importance of phenolic hydroxyl groups. It has been reported that amyloid polypeptides generate radicals during the aggregation process, leading to further aggregation and cell death [36][37][38]. On the other hand, as the DPPH radical does not exist in the body, antioxidant activity must be further evaluated from additional perspectives, such as the superoxide dismutase (SOD) activity test. It has been reported that procyanidins with a low degree of polymerization, such as dimers, can penetrate the blood-brain barrier (BBB). All procyanidins used in this study were dimers. Therefore, in this research, we investigated A-type procyanidins for their Aβ42/hIAPP aggregation inhibitory, Aβ42/hIAPP disaggregation, and antioxidant activities, and showed that these active compounds have significant potential for use as preventive and therapeutic agents for both diseases. On the other hand, nobiletin, an O-methoxylated flavonoid, has been reported to have the potential to undergo demethylation in vivo [39] and to show efficacy in AD model mice [40]. Therefore, although compound 7 did not show any activity in the in vitro experimental system used in this study, the presence or absence of in vivo activity needs to be investigated in the future. The results of this research may contribute to the development of preventive and therapeutic agents for AD and T2D. In the future, we aim to elucidate the inhibitory and disaggregation mechanisms of A-type procyanidins in more detail. Furthermore, it is necessary to investigate the cytoprotective activity using cells, and the preventive effect on cognitive function in mice as an in vivo experiment. The results of this research suggest that dietary materials containing high amounts of A-type procyanidins can potentially contribute to the development of functional foods for the prevention of AD and T2D.

Thioflavin T (Th-T) Assay
The degree of aggregation of Aβ42/hIAPP was assessed using the Th-T method developed by Naiki et al. [41]. The procedure is described elsewhere [42]. Briefly, hIAPP (KareBay Biochem Inc., Monmouth Junction, NJ, USA) was dissolved in a mixture of HFIP and 1% aqueous acetic acid (1:1), and Aβ42 was dissolved in 0.1% NH4OH solution at 250 µM. The amyloid solution was diluted tenfold with 50 mM PBS (pH 7.4) and incubated with or without samples. The peptide solution (2.5 µL) was added to 250 µL of 1 mM Th-T in 50 mM Gly-NaOH (pH 8.5). Using a Wallac 1420 ARVO MX Multidetection Microplate Reader (PerkinElmer), the fluorescence intensity was measured at an excitation wavelength of 420 nm and an emission wavelength of 485 nm, and the IC50 value of each compound was calculated based on the percentage inhibition of amyloid polypeptide aggregation (%) after incubation at 37 °C for 24 h. In the disaggregation activity test, amyloid polypeptides were pre-incubated for 24 h to form aggregates beforehand, and then the compounds were added. EGCG, which is known to show aggregation inhibition and disaggregation activities against amyloid polypeptides, was used as the positive control in this assay [15,17,32].
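The IC50 determination just described can be sketched numerically: percentage inhibition is computed from Th-T fluorescence relative to the peptide-only control, and a dose-response curve is fitted. This is a generic illustration rather than the authors' analysis pipeline; the fluorescence readings and the Hill-type model are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

f_control = 1000.0                                # Th-T fluorescence, peptide only (arbitrary units)
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])    # compound concentration, uM (hypothetical)
f_sample = np.array([900.0, 700.0, 450.0, 200.0, 80.0])  # fluorescence with compound

# Percentage inhibition of aggregation relative to the untreated control.
inhibition = (f_control - f_sample) / f_control * 100.0

def hill(c, ic50, n):
    """Hill-type dose-response rising from 0% to 100% inhibition."""
    return 100.0 * c**n / (ic50**n + c**n)

(ic50, n), _ = curve_fit(hill, conc, inhibition, p0=[10.0, 1.0])
print(f"IC50 ~ {ic50:.1f} uM (Hill slope {n:.2f})")
```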
Transmission Electron Microscope (TEM) Observations
Aβ42 and hIAPP (25 µM each) were treated with compounds 1–9 and EGCG (10 µM for Aβ and 100 µM for hIAPP), dropped onto carbon-coated Formvar grids, incubated at room temperature for 2 min, washed twice with H2O, and air-dried for 5 min. After 24 h of incubation, the samples were observed using a JEOL JEM-1400 electron microscope.

Conclusions
Structure–activity relationship studies by Th-T assay and TEM observation were performed to investigate the Aβ/hIAPP anti-aggregation and disaggregation activities of A-type procyanidins 1–7 and their substructures 8 and 9. The results suggested that A-type procyanidins 1–6, with two catechol moieties, exhibited potent Aβ/hIAPP anti-aggregation and disaggregation activities, while compound 7, without catechol, showed no activity. This suggests that the presence of catechol is important for both activities. Therefore, this study suggests that dietary materials containing high amounts of A-type procyanidins may contribute to the development of functional foods for the prevention of AD and T2D.
6,011.6
2021-10-31T00:00:00.000
[ "Medicine", "Chemistry" ]
SARS-CoV-2 Infection and Associated Rates of Diabetic Ketoacidosis in a New York City Emergency Department

Introduction: In early March 2020, coronavirus 2019 (COVID-19) spread rapidly in New York City. Shortly thereafter, in response to the shelter-in-place orders and concern for infection, emergency department (ED) volumes decreased. While a connection between severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and hyperglycemia/insulin deficiency is well described, its direct relation to diabetic ketoacidosis (DKA) is not. In this study we describe trends in ED volume and admitted patient diagnoses of DKA among five of our health system's EDs, as they relate to peak SARS-CoV-2 activity in New York City.

Methods: For the five EDs in our hospital system, deidentified visit data extracted for routine quality review was made available for analysis. We looked at total visits and select visit diagnoses related to DKA across the months of March, April and May 2019, and compared those counts to the same period in 2020.

Results: A total of 93,218 visits were recorded across our five EDs from March 1–May 31, 2019. During that period there were 106 diagnoses of DKA made in the EDs (0.114% of visits). Across the same period in 2020 there were 59,009 visits, and 214 diagnoses of DKA (0.363% of visits).

Conclusion: Despite a decrease in ED volume of 26.9% across our system during this time period, net cases of DKA diagnoses rose drastically by 70.1% compared to the prior year.

INTRODUCTION
The coronavirus 2019 (COVID-19) pandemic began to impact visits to our system's New York City emergency departments (ED) in March 2020. The city's first case was detected at our New York City ED on March 1. Case rates rapidly rose across the city; on March 12 mass gatherings in NYC were restricted, and on March 20 a "shelter-in-place" model was ordered by the governor. 1 By April 6, COVID-19 cases peaked, and they have steadily decreased since. 2 As public health measures went into effect, ED visits at our system's EDs dropped significantly, and they have only recently started rising again. COVID-19 has many pathologic manifestations. One difficult-to-manage aspect of severe COVID-19 infections is uncontrolled hyperglycemia and diabetic ketoacidosis (DKA). 3 Patients with a history of diabetes mellitus (DM) are also at increased risk for mortality; 4 DM was shown to be the leading risk factor among chronic medical conditions, along with cerebrovascular disease, for COVID-19 mortality. 5 In several retrospective studies, uncontrolled hyperglycemia has been associated with worsening mortality, 3 and recent consensus guidelines support the importance of glycemic control. 6,7 An exact pathophysiology for this phenomenon has not been elucidated, although several theories exist. Elevated glucose levels in pulmonary secretions are thought to suppress the antiviral immune response. Furthermore, it is possible that exposure of pulmonary epithelial cells to elevated glucose concentrations increases viral replication, as it does for influenza. 4 In this study we present retrospective findings from our own institution's EDs that support the theory that COVID-19 infection is associated with a notable increase in concomitant DKA.

METHODS
The hospital system's EDs include academic and community-oriented facilities in a diverse urban environment and see over 500,000 visits a year across three boroughs in New York City.
Five of these EDs are on a shared electronic health record system (Epic Systems Corporation, Verona, WI). For these EDs, deidentified visit data extracted for routine quality review was made available for analysis. The data was initially part of a quality assurance/quality improvement project and did not require institutional review board approval. We looked at total visits, and select visit diagnoses related to DKA, across the months of March, April, and May 2019 and compared those counts to the same period in 2020.

RESULTS
A total of 93,218 visits were recorded across our five EDs from March 1–May 31, 2019. During that period there were 106 diagnoses of DKA made in the EDs (0.114% of visits). Across the same period in 2020 there were 59,009 visits, and 214 diagnoses of DKA (0.363% of visits). Figure 1 compares the timeline of the NYC COVID-19 pandemic, based on weekly hospitalizations as reported by the Department of Health, to the observed rise in DKA visits in that same time period, and compares this to DKA rates in 2019. Figure 2 displays the percent change in cumulative DKA visits (the change relative to total 2019 DKA visits observed in 2020) compared to the cumulative percent change in ED visit volume (the change relative to total 2019 ED visits observed in 2020). This is displayed against weekly DKA visit rates in 2019 and 2020.

DISCUSSION
Shortly after March 1, 2020, the number of ED visits with a diagnosis of DKA began to increase across our system's EDs compared to the year prior. Even as daily ED visits began to drop in late March, the rate of ED DKA visits rose. This increased rate was noted throughout the period reviewed. By mid-May 2020, although ED visits were approximately one-third of those in 2019, net diagnoses of DKA approximately doubled. Similar to these findings, other authors have pointed out a correlation that suggests COVID-19 can precipitate DKA in many patients. Several theories may explain the observed growth in DKA diagnoses during this period. Beyond physiologic mechanisms, this rise in DKA could simply represent the inability of diabetic patients to get insulin prescriptions during the public health emergency; many clinics were not able to meet regularly with patients. However, given the association between severe COVID-19 infections, hyperglycemia, and a history of DM, it is reasonable to suspect patients may present with concomitant DKA disproportionately to other disease states. Similar to other acute infections, COVID-19 infections are not only worsened by hyperglycemia, but associated with an increased incidence of hyperglycemia. 8 This may be a consequence of stress hyperglycemia from the release of counter-regulatory hormones. 9 COVID-19 is also associated with a relative insulin deficiency due to pancreatic islet cells' ACE2 receptor, which may allow viral entry to pancreatic parenchyma, leading to islet cell damage. 10 The combination of worsened serum glucose with relative insulin deficiency may lead to an increased incidence of DKA in COVID-19 infected patients. Given a likely shortage of intensive care unit (ICU) beds globally as a result of COVID-19, including in the United States, clinicians will need to use healthcare resources judiciously. 11 It may be necessary to find alternate treatment strategies for treating DKA to help preserve these resources, as management often necessitates ICU level of care.
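The per-visit rates quoted above follow directly from the visit counts; a quick arithmetic check (reproducing only the percentages reported in the Results) is shown below.

```python
visits_2019, dka_2019 = 93_218, 106
visits_2020, dka_2020 = 59_009, 214

rate_2019 = dka_2019 / visits_2019 * 100   # DKA diagnoses per 100 ED visits
rate_2020 = dka_2020 / visits_2020 * 100

print(f"2019: {rate_2019:.3f}% of visits")   # 0.114%
print(f"2020: {rate_2020:.3f}% of visits")   # 0.363%
print(f"Per-visit rate ratio: {rate_2020 / rate_2019:.2f}x")  # ~3.2-fold
```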
Further analysis of ED visits will involve correlating DKA visits with COVID-19 test results, as well as an assessment of DKA severity and inpatient course.

LIMITATIONS
This was a retrospective study that indicates a correlation between DKA visits and the COVID-19 pandemic; however, no causation can be established. Other factors during this period, such as limited patient access to clinics, may have impacted rates of DKA. The study is also geographically limited in nature, and further study will be needed to definitively state that the trend is applicable to other localities. Additionally, data prior to 2019 was not available for review, as additional EDs within our system were not using a shared electronic health record, leading to a temporally limited study. We were unable to assess direct rates of concomitance of COVID-19 infection and DKA.

CONCLUSION
Despite a decrease in ED volume of 26.9% across our health system during the COVID-19 pandemic in New York City, net cases of diabetic ketoacidosis diagnoses rose drastically by 70.1% compared to the prior year. Although further study is needed, these findings may indicate a direct relationship between COVID-19 infection and risk of developing DKA.
1,787.8
2021-05-01T00:00:00.000
[ "Medicine", "Biology" ]
A Different Angle on Quantum Uncertainty (Measure Angle)

The uncertainty associated with probing the quantum state is expressed as the effective abundance (measure) of possibilities for its collapse. New kinds of uncertainty limits entailed by the quantum description of the physical system arise in this manner.

Introduction
One could easily imagine the results of this presentation being reported at a physics conference long ago, well before Alice, Bob and Charlie were part of quantum discourse. In fact, it would have been natural for this to happen when the Copenhagen interpretation of quantum mechanics (QM) was only emerging [1]. One reason it did not occur may be that the needed association of probability and measure was not developed or appreciated at the time, although it very well could have been. Here we obviously do not mean the use of measure theory to formalize probability [2]. Rather, what we have in mind is a generalization of measure by means of probability: the extension of the measure map µ = µ(A) onto a larger domain µ = µ(A, π), where π is a probability measure on A. The role of π is to specify the "relevance" of various parts (measurable subsets) of A. The desired extension is then chiefly driven by two requirements. (i) µ(A, π) should decrease relative to µ(A) in response to π favoring certain parts of A, so that π involving more concentrated probability entails a larger reduction (monotonicity with respect to cumulation). (ii) µ(A, π) should remain strictly measure-like in that the original additivity relation involving sets A and B generalizes into one involving pairs (A, π_A) and (B, π_B), for all π_A and π_B. If (i) and (ii) can be accommodated simultaneously, together with a few other basic requirements, then µ(A, π) defines a meaningful effective measure of A with respect to π. Such a quantifier could then be used in correspondingly wider contexts, but with essentially the same meaning and significance as that of an ordinary measure. Being a framework for assigning probabilities to events, quantum mechanics would be among the prime natural settings for its use. Whether the outlined general approach materializes into a fruitful enrichment of QM depends on the existence and multitude of the above extensions for relevant measures. In Ref. [3], released during the ICNFP 2018 meeting, the extension program was completely carried out for the foundational case of the counting measure. It is the surprising results of this analysis that suggest, among other things, a qualitatively new outlook on quantum uncertainty [1,4] that we point out in this presentation. Since the ICNFP 2018 meeting, the ensuing concept of measure uncertainty (µ-uncertainty) has been fully developed in Ref. [5]. Over the course of that process, effective measures were also defined for subsets of D-dimensional Euclidean space R^D with Jordan content, i.e. whose "volume" is expressible as a Riemann integral. Since the discrete case involves a somewhat specific language, we summarize the correspondence with the general case before we start. A "measurable set A" becomes simply a "collection of N objects", and µ(A) corresponds to N. A probability measure π is specified by the probability vector P = (p_1, p_2, ..., p_N). The construction of the effective measure extension µ = µ(A, π) then turns into the construction of a function N = N[P], interpreted as the effective total (effective count).
The theory of these objects is referred to as effective number theory, since N retains certain algebraic features of integers [3]. In the same vein, the functions N are called effective number functions (ENFs). In the first part of the presentation, we will describe the key features and results of effective number theory, which is a crucial stepping stone for our arguments regarding quantum uncertainty. One natural approach to constructing the theory is to view it as a tool to solve a generic counting problem of quantum mechanics, which we refer to as the quantum identity problem [3]. In particular, consider the state |ψ⟩ and the basis {|i⟩} ≡ {|i⟩ : i = 1, 2, ..., N} in an N-dimensional Hilbert space. Is it meaningful to ask how many basis states |i⟩ ("identities" from {|i⟩}) are effectively contained in |ψ⟩? (1) This question can be phrased in many equivalent ways that directly relate to a particular application of interest. For example, in the context of describing localization, the inquiry would be about how many states from {|i⟩} |ψ⟩ is effectively spread over. On the other hand, in assessing the efficiency of a variational calculation involving eigenstate |ψ⟩ and basis {|i⟩}, we would be concerned with how many |i⟩ effectively describe |ψ⟩. Yet it is the same quantum identity question arising in all these situations, and it entails seeking maps that consistently assign the effective totals. Hence, in the context of the quantum identity problem, the "objects" in the discrete sets are the basis states |i⟩, and their probabilistic weight is encoded in |ψ⟩ by the quantum-mechanical rule p_i = |⟨i|ψ⟩|^2. The effective number function N is thus a map whose domain consists of probability vectors P so constructed.

In the second part of this presentation, we argue that effective number theory leads to a novel understanding of quantum uncertainty. In fact, the association of effective measure and uncertainty constitutes a conceptual thread connecting all of the mentioned quantum applications.

Effective Number Theory
Although we are concerned with describing an abstract theory, it is useful to keep a concrete physical situation in mind when doing so. We will use the elementary example of a spinless lattice Schrödinger particle, described by the wave function |ψ⟩ → (ψ(x_1), ..., ψ(x_N)), for this purpose. Thus, N is the number of lattice sites and p_i = ψ*(x_i) ψ(x_i) refers to the probability of detecting the particle at position x_i. In the language of the quantum identity problem (1), we ask: how many position basis states |i⟩ is |ψ⟩ effectively composed of? The situation is exemplified in Fig. 1, showing three probability distributions on a lattice with three sites. Clearly, all ENFs should assign N = 3 to the uniform distribution (left panel) and N = 1 to the δ-function distribution (middle panel). The problem in question boils down to establishing whether well-founded effective numbers can be assigned to generic distributions, such as the one shown in the right panel. Our approach to this issue is to develop an axiomatic definition of the set N containing all effective number functions N, and then analyze its properties [3]. Two of the axioms play a prominent role in shaping the possible ENFs, namely additivity and monotonicity with respect to "cumulation". The latter is closely related to Schur concavity. Since both need some care in their formulation, we will discuss them in somewhat more detail than the other requirements.
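For the three-site example in Fig. 1, the Born-rule weights can be computed directly. The sketch below uses made-up amplitudes for the generic (right-panel) case; only the uniform and δ-function cases carry fixed expected counts.

```python
import numpy as np

def born_probabilities(psi):
    """Born rule p_i = |psi(x_i)|^2 for a lattice wave function."""
    p = np.abs(np.asarray(psi, dtype=complex))**2
    return p / p.sum()   # normalise, so the probabilities sum to 1

uniform = np.ones(3)                  # left panel of Fig. 1: p = (1/3, 1/3, 1/3)
delta   = np.array([0.0, 1.0, 0.0])   # middle panel: p = (0, 1, 0)
generic = np.array([0.8, 0.5, 0.33])  # right panel: amplitudes are made up

for psi in (uniform, delta, generic):
    print(born_probabilities(psi))
# Any ENF must assign an effective count of 3 to the uniform case and
# 1 to the delta-function case; the generic case is the open question.
```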
It should be noted that additivity, while crucial for a measure-like concept, was not part of related considerations in the past.

Additivity

Consider the lattice Schrödinger particle on a lattice with N sites, in a state producing the probability vector P = (p_1, ..., p_N). Similarly, let the particle be restricted to a non-overlapping lattice of M sites, generating the probabilities Q = (q_1, ..., q_M). Then there exists a state of the particle on the combined lattice (see Fig. 2) that leads to the probability distribution (N P ⊞ M Q)/(N + M) on the position basis of the combined system. Here ⊞ denotes the concatenation operation, namely (a_1, ..., a_N) ⊞ (b_1, ..., b_M) ≡ (a_1, ..., a_N, b_1, ..., b_M). Note that this combined distribution doesn't change the weight ratios for position states inside the two parts of the system, thus preserving the individual distribution shapes. It also properly corrects the weight ratios of position pairs from distinct parts by their respective "measures" (lattice sizes N and M). While the nominal size of the combined system is trivially given by N + M (ordinary count), its effective size in the said state (effective count) also has to be additive, yielding the corresponding condition for ENFs, namely

N[(N P ⊞ M Q)/(N + M)] = N[P] + N[Q] .    (2)

The above equation has to be satisfied for all P and all Q. The measure conversion factors, like those appearing in Eq. (2), can be eliminated from all relevant expressions by simply working with counting vectors C rather than probability vectors P, namely C = N P, i.e. c_i = N p_i. Formally, the set of counting vectors is defined as

C = ∪_N C_N ,  C_N = { C = (c_1, ..., c_N) : c_i ≥ 0 , Σ_i c_i = N } .

Clearly, if C ∈ C_N and B ∈ C_M, then the counting vector associated with the combined system is simply C ⊞ B ∈ C_{N+M}, which is to be compared with Eq. (2). Thus, from now on, we treat ENFs as maps whose domain is C, namely N = N[C], C ∈ C. The additivity condition then reads [3]

(A)  N[C ⊞ B] = N[C] + N[B] .

Monotonicity

The purpose of the monotonicity property is to ensure that, given a pair of distributions C, B ∈ C_N, the one with more cumulated weights won't be assigned a larger effective number. To formulate the requirement, it is important to realize that not all C, B can be readily compared by the degree of their cumulation. In the left panel of Fig. 3 we show an example of a comparable pair. For this purpose, weight entries were ordered in a decreasing manner so that the cumulation center is on the left. Distribution C is more cumulated since it can be obtained from B by a transfer of weight (flow) directed toward the cumulation center at all times. On the other hand, the distributions shown in the right panel cannot be compared without introducing ad hoc assumptions, since flow in both directions is needed to perform the needed deformation. Thus, the universal notion of monotonicity is only concerned with properly treating the situation on the left. It is straightforward to check that, for any pair of such comparable discrete distributions, the deformation in question can be carried out as a finite sequence of pair-wise weight exchanges, each transferring weight toward the center of cumulation. Since performing such an elementary operation on C produces C′ such that (C, C′) is a comparable pair, the monotonicity requirement is entirely captured by the set of conditions concerning these elementary operations, namely

(M−)  N[C′] ≤ N[C] , for every C′ obtained from C by an elementary cumulating transfer.

(M−) monotonicity is designed to identify functions respecting cumulation. To place its meaning in a more conventional context, we point out that imposing it in conjunction with the symmetry requirement (S) results in the well-known property of Schur concavity [6].
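Returning to Eq. (2) above, the short sketch below checks numerically that the rescaled combined distribution leaves no conversion factors once one passes to counting vectors; the `combine` helper is ours, written directly from Eq. (2).

```python
import numpy as np

def combine(P, Q):
    # probability vector of the combined system: each part's weights are
    # rescaled by its lattice size, preserving both distribution shapes
    N, M = len(P), len(Q)
    return np.concatenate([N * np.asarray(P), M * np.asarray(Q)]) / (N + M)

P = np.array([0.5, 0.3, 0.2])   # N = 3 sites
Q = np.array([0.9, 0.1])        # M = 2 sites
PQ = combine(P, Q)              # distribution on the 5-site lattice

# counting vectors: C = N*P and B = M*Q; the combined counting vector
# is the plain concatenation C ⊞ B, with no conversion factors left over
C, B = 3 * P, 2 * Q
assert np.allclose(5 * PQ, np.concatenate([C, B]))
```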
Effective Number Functions

Apart from additivity (A) and monotonicity (M−), the axioms defining effective number functions N = N[C] incorporate intuitive and easily formulated features such as symmetry (S), continuity, and the previously mentioned "boundary values" associated with the uniform and δ-function distributions. The complete list of these additional requirements is formally specified in Ref. [3]. Each ENF contained in N provides a consistent scheme to assign effective totals (effective counting measures) to sets of objects endowed with counting/probability weights. It should be pointed out in that regard that none of the quantifiers currently used as substitutes for effective numbers, such as the participation number [7], the exponentiated Shannon entropy [8] or the exponentiated Rényi entropies [9], is (A)-additive. However, they do respect all the other axioms defining N.

The Minimal Amount

Interestingly, the effective number theory based on the above definition of N can be entirely solved [3]. In fact, all ENFs were explicitly found, and the structural properties of N were established. These results are summarized by Theorems 1 and 2 of Ref. [3]. To convey the aspects needed here, we first define the function N_+ on C, counting the number of objects with non-zero weights, namely

N_+[C] = #{ i : c_i > 0 } .

Note that N_+ is not an ENF due to its lack of continuity. In addition, the following function N⋆ on C is important in the context of effective number theory and the Theorem below:

N⋆[C] = Σ_i n⋆(c_i) ,  n⋆(c) = min(c, 1) .    (7)

Theorem. There are infinitely many elements in N, including N⋆. Moreover, for every fixed C ∈ C,

N[C] ≥ N⋆[C] for all N ∈ N .    (8)

We wish to emphasize the following points regarding this Theorem. (a) Since N is non-empty, the set of all possible effective number assignments for a given counting vector C (LHS of (8)) is also non-empty, and in fact forms an interval closed at its lower endpoint N⋆[C]. (b) Thus, the concept of effective counting measure necessitates the existence of a non-trivial intrinsic minimal amount (count, total), specified by N⋆. The universal (independent of N) function n⋆ entering the definition (7) of this minimal ENF is referred to as the minimal counting function. This result has non-trivial consequences, including those concerning quantum uncertainty discussed here. (c) In addition, the Theorem conveys that N⋆ is the only ENF with such a definite structural role in N. For example, since N_+ ∉ N, there is no largest element in N, i.e. no analog of N⋆ "at the top". More importantly, for given C, a function N can be adjusted to accommodate any intuitively possible total larger than N⋆[C]. Consequently, there are no "holes" in the bulk of N where other privileged ENFs could be identified. Given its absolute meaning, the minimal amount N⋆ provides the canonical solution of the quantum identity problem (1). In particular [3],

N⋆[ |ψ⟩, {|i⟩} ] = Σ_i min( N |⟨i|ψ⟩|^2 , 1 ) .    (10)
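Assuming the closed form of Eq. (7), N⋆[C] = Σ_i min(c_i, 1), the following sketch compares the minimal ENF with N_+ and with the participation number; the numbers and function names are illustrative only.

```python
import numpy as np

def n_star(C):
    # minimal ENF, Eq. (7): N*[C] = sum_i min(c_i, 1)
    return float(np.minimum(np.asarray(C), 1.0).sum())

def n_plus(C):
    # N_+: number of non-zero weights (not an ENF: it is discontinuous)
    return int(np.count_nonzero(np.asarray(C) > 0))

def participation_number(P):
    # common substitute 1 / sum_i p_i^2; respects all axioms except (A)
    P = np.asarray(P)
    return 1.0 / float(np.sum(P ** 2))

P = np.array([0.5, 0.25, 0.15, 0.10])
C = len(P) * P                        # counting vector C = N * P
print(n_star(C), participation_number(P), n_plus(C))
# every ENF assigns this C an effective total no smaller than n_star(C)
```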
The Measure Aspect of Quantum Uncertainty

Uncertainty in QM refers to the indeterminacy of outcomes obtained by probing a quantum state. More specifically, consider the canonical situation where state |ψ⟩ from an N-dimensional Hilbert space is probed by measuring the observable associated with a non-degenerate Hermitian operator Ô. In a standard manner, |ψ⟩ is repeatedly prepared and measured, generating a sequence of outcomes

(|i_1⟩, O_{i_1}), (|i_2⟩, O_{i_2}), ... ,    (11)

with {(|i⟩, O_i) : i = 1, ..., N} denoting the set of eigenstate-eigenvalue pairs; (|i_ℓ⟩, O_{i_ℓ}) specifies the outcome of the ℓ-th trial, namely the collapsed state and the measured value. Quantum uncertainty of |ψ⟩ with respect to its probing by Ô is intuitively associated with the "spread" of the outcomes (|i_ℓ⟩, O_{i_ℓ}) so generated. Clearly, the precise content of the notion so construed depends on how we choose to quantify the "spread". In the usual approach, the focus is on the sequence of eigenvalues O_{i_ℓ}, with spread characterized in terms of a distance (metric) on the spectrum of Ô. We will refer to this approach as metric uncertainty (ρ-uncertainty). A commonly used quantifier of this type is the standard deviation, which leads to a particularly simple form of quantum uncertainty relations [1]. However, it may be interesting to view quantum indeterminacy differently [5]. A possible approach is to characterize it by the effective number of distinct outcomes occurring in (11). This expresses the spread in terms of an "amount", and we refer to it as measure uncertainty (µ-uncertainty). Note that, in this case, it is immaterial whether we focus on the sequence O_{i_ℓ}, the sequence |i_ℓ⟩, or the sequence of corresponding pairs: the object of interest is the effective total of the outcomes. While such an approach might have seemed rather nebulous in the past, it is clear that effective number theory not only puts it on a firm ground, but also leads to rather unexpected revelations. First, by construction, the set N of ENFs is identical to the set of all possible µ-uncertainties. More explicitly, if µ = µ[|ψ⟩, {|i⟩}] formally denotes a valid µ-uncertainty map, then there exists N ∈ N such that µ[|ψ⟩, {|i⟩}] = N[|ψ⟩, {|i⟩}] for all |ψ⟩ and {|i⟩}, and vice versa. In other words, the valid µ-uncertainties are precisely the effective totals featured in the quantum identity problem (1). Secondly, the existence of the minimal effective number gives rise to a minimal µ-uncertainty. In particular, effective number theory allows us to deduce the following rigorous statement [Eqs. (7), (10)]:

(U0)  every valid µ-uncertainty µ satisfies µ[|ψ⟩, {|i⟩}] ≥ N⋆[|ψ⟩, {|i⟩}] for all |ψ⟩ and {|i⟩}.

Remarkably, U0 asserts that uncertainty is built into quantum mechanics as an absolute concept. In particular, by expressing it as a measure, we learn that there exists an intrinsic irremovable "amount" of uncertainty in state |ψ⟩ relative to the probing basis {|i⟩}, specified uniquely as N⋆[|ψ⟩, {|i⟩}] states. Note that U0 can be viewed as a quantum uncertainty principle of a very different kind than the one conveyed by the Heisenberg relations [1,4]. Indeed, while the latter is of a relative (comparative) nature, the µ-uncertainty principle is absolute. It allows us to express the fundamental difference between quantum and classical notions of state in a particularly direct and economical way. Thus, the properties of a classical state S can be measured with arbitrarily small error, meaning that its intrinsic µ-uncertainty is always one (S has a single identity). On the other hand, if the probing of a quantum state |ψ⟩ involves the collapse into elements of a basis {|i⟩}, its intrinsic µ-uncertainty is N⋆[|ψ⟩, {|i⟩}], which is generically much larger than one (|ψ⟩ has many identities). Finally, it is important to point out that the above considerations are by no means restricted to the discrete case or to finite-dimensional Hilbert spaces. The relevant extensions are worked out in complete generality in Ref. [5]. As an elementary example, consider the case of a spinless Schrödinger particle contained in region Ω ⊂ R^D with volume V. Using the above results, one can show that its minimal µ-uncertainty with respect to the position basis is given by

V⋆[ψ] = ∫_Ω min( V |ψ(x)|^2 , 1 ) d^D x ,

where ψ(x) is the particle's wave function. Thus, the measure uncertainty takes the form of an effective volume. The corresponding µ-uncertainty principle states that a particle described by ψ cannot be associated with an effective volume smaller than V⋆[ψ].
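As a numerical illustration of the continuum statement, the sketch below evaluates the effective-volume formula for a Gaussian packet by a Riemann sum; the grid, the state, and the function names are our own choices, not taken from Ref. [5].

```python
import numpy as np

def effective_volume(psi_sq, V, n):
    # V*[psi] = integral of min(V |psi(x)|^2, 1) dx, as a Riemann sum
    # on a uniform grid of n points covering a 1D region of size V
    dx = V / n
    return float(np.sum(np.minimum(V * psi_sq, 1.0)) * dx)

V, n = 10.0, 100_000
x = np.linspace(0.0, V, n, endpoint=False)
psi = np.exp(-((x - 5.0) ** 2) / (2 * 0.5 ** 2))   # Gaussian packet, sigma = 0.5
psi_sq = psi ** 2
psi_sq /= psi_sq.sum() * (V / n)                   # normalize: integral of |psi|^2 = 1

print(effective_volume(psi_sq, V, n))   # well below V = 10: the state is localized
```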
Conclusions

In the first part of this presentation, we outlined the construction of effective number theory and discussed how it solves the quantum identity problem [3]. The latter has a wide range of potential applications in quantum physics. Particularly close attention was given to the most consequential result of effective number theory, namely the existence of a minimal amount (total, count) consistently assignable to a collection of objects distinguished by probability weights. This finding offers a rather unexpected new insight into the nature of measure. In the second part of the presentation, we analyzed the consequences of effective number theory for the concept of uncertainty in quantum mechanics [5]. In particular, we argued that the ensuing measure approach reveals the existence of uniquely defined intrinsic µ-uncertainties of a quantum state, each associated with a particular way of probing it. We propose these intrinsic uncertainties as potentially useful characteristics of quantum states. It is interesting to note in this regard that, starting from essentially classical (measure) considerations, we arrived at describing aspects of the state that are truly quantum in their nature.
Enhanced Land Use and Land Cover Classification Through Human Group-based Particle Swarm Optimization-Ant Colony Optimization Integration with Convolutional Neural Network

— Reliable classification of Land Use and Land Cover (LULC) using satellite images is essential for disaster management, environmental monitoring, and urban planning. This paper introduces a unique method that combines a Convolutional Neural Network (CNN) with Human Group-based Particle Swarm Optimization (HPSO) and Ant Colony Optimization (ACO) algorithms to improve the accuracy of LULC classification. The suggested hybrid HPSO-ACO-CNN architecture effectively addresses the issues of feature selection and parameter optimization.

INTRODUCTION

The accurate categorization of land cover and land use from satellite images is a crucial challenge with significant implications across a range of regions. The task centres on the organized classification and labelling of the surface of the Earth, which serves as a fundamental perspective for understanding and managing the planet's changing landscapes. LULC categorization is crucial for urban planning: it enables the designation of urban regions, the identification of infrastructure requirements, and well-informed choices on land-use allocation [1], supporting the establishment of efficient and environmentally responsible cities. In environmental management, it is a vital instrument for determining how ecosystems are changing, detecting deforestation, and tracking the condition of natural ecosystems. In agriculture, LULC categorization provides farmers with knowledge about crop categories, production, and farming methods, permitting targeted practices and boosting food security [2]. Accurate LULC mapping can also help with susceptibility evaluation, planning for disaster-risk reduction, and rapid emergency response throughout disaster reconstruction and prevention operations. The capacity of satellite imaging to take wide-ranging images of the Earth's exterior from orbit is central to LULC categorization. These images provide an unusual perspective from which to observe the intricate and constantly shifting topography of the Earth [3]. Through the array of satellite sensors, investigators have access to a variety of data on the Earth's surface, including details about human activities, landscape characteristics, and the surrounding environment. One component of this categorization, land use, deals with the numerous ways individuals utilize and interact with the land, including metropolitan regions, agricultural areas, transportation systems, manufacturing regions, and more. In contrast, land cover describes the physical properties of the Earth's surface independent of human activity, including forests, marshes, lakes and rivers, deserts, and arid areas. Together, these two aspects provide an accurate representation of the Earth's surface and yield information about the complex interactions between human activities and the surrounding ecosystem [4]. The accuracy of assessments made in a variety of disciplines is strongly affected by how well LULC categorization is done. In urban planning, accurate regulatory control, infrastructure optimization, and support for ecologically friendly techniques are all aided
by the delineation of land-use classifications. In environmental management, the capacity to distinguish between diverse kinds of land cover enables investigators to observe wildlife migratory patterns, follow habitat changes, and determine the effects of warming temperatures on ecosystems. In agriculture, LULC categorization enables farmers to make informed decisions, allowing them to select crops more effectively, manage irrigation better, and lessen the impact of diseases and pests. Quick and precise LULC mapping is crucial for disaster response, in order to evaluate destruction, identify affected people, and efficiently arrange relief activities [5]. Proper LULC categorization is a key component for solving some of the most important issues confronting the global community, from development and environmental deterioration to food availability and disaster resilience.

Satellite imagery is now more widely available and of higher quality than ever, thanks to notable technological breakthroughs in remote sensing in recent years. A new phase of Earth observation has begun as a result of this growth in data gathering, providing an unusual viewpoint on the globe from orbit. The deployment of innovative Earth-observing satellites with modern sensors has allowed researchers to collect data about the planet's surface and its changing processes at a level of detail never before possible. These satellites continually gather enormous volumes of information covering a wide range of spectral data, temporal frequencies, and geographical resolutions [6]. As a result, the field of remote sensing today is distinguished by an extensive and varied collection of satellite imagery, which serves as a significant resource for a wide range of scientific, ecological, and social purposes. Even as the amount of available satellite imagery increases exponentially, it raises many difficult problems. For the information to be used effectively, advanced approaches are required due to its enormous volume and complexity. One of the main obstacles is that these images are multi-spectral and hyperspectral, meaning that they collect data across a broad variety of wavelengths, including those outside the visible spectrum [7]. This spectral variety adds a degree of complexity that necessitates sophisticated analytical methods capable of understanding the subtle differences in the information. Conventional LULC categorization methods struggle to handle this complexity because they are unable to capture the complicated patterns present in multi-spectral and hyperspectral data; these methods are frequently founded upon manual feature engineering and rule-based systems. Traditional LULC categorization techniques often depend on hand-crafted features and pre-established criteria, which may not capture the full range of variability inherent in satellite images. They can be laborious and frequently require domain expertise for feature extraction. Additionally, the limited capacity of rule-based systems to adapt to varied and changing environments restricts their usefulness. In contrast, the immense potential of deep learning methods, particularly CNNs, has become progressively better understood in the context of satellite imagery analysis [8]. CNNs are exceptionally effective at extracting pertinent characteristics from raw
information, which enables them to find complex spatial and spectral correlations that resist manual feature engineering. They therefore provide a promising way to improve the accuracy and efficiency of LULC categorization using the vast amount of available satellite information.

The present article introduces a novel method that exploits the interaction between optimization algorithms and deep learning approaches to address the significant issues posed by the complexity of satellite images and the rising need for precise land use and land cover categorization. In particular, this method combines Convolutional Neural Networks with two potent optimization algorithms, Human Group-based Particle Swarm Optimization and Ant Colony Optimization, to create a hybrid structure designed exclusively for the accurate and reliable categorization of LULC from satellite imagery [9]. The primary motivation of this integrative approach is to handle feature selection and hyperparameter optimization, two crucial aspects of LULC categorization. The correct interpretation of satellite images depends heavily on feature selection, which involves choosing the most significant spectral bands or channels. Not all channels are equally significant in the context of multi-spectral and hyperspectral imaging, and choosing a suitable combination is essential for lowering distortion and redundancy while enhancing the model's capacity to discriminate between distinct land cover classes. Human Group-based PSO intelligently selects the most relevant spectral bands to improve the quality of data fed into the CNN, using a collaborative optimization procedure motivated by the characteristics of social groups. The second crucial issue the hybrid system addresses is hyperparameter optimization. As deep learning models, CNNs involve a wide range of hyperparameters, including learning rates, batch sizes, and the number of convolutional layers. These hyperparameters significantly affect the effectiveness of the model, so determining the optimum setting is extremely important [10]. ACO is used to adjust these hyperparameters in order to ensure that the CNN performs at its highest level, aiming to minimize overfitting while optimizing categorization accuracy by balancing model complexity and generalization. This combined strategy transcends the constraints of conventional approaches that depend on manual feature engineering and rule-based systems, signalling an important change in LULC categorization. By integrating the effectiveness of optimization methods into deep learning, the framework aims to enhance the accuracy and resilience of satellite-based LULC categorization, permitting efficient usage of the extensive and complicated information contained within satellite data [11].
A crucial and fundamental stage in satellite imagery evaluation, especially in the broader context of classifying land use and land cover, is feature selection. Selecting the appropriate subset of spectral bands is crucial for a number of reasons, not least because not all spectral bands contribute equally to the categorization process. First, carefully choosing the spectral bands reduces data noise. Noise in satellite imagery can come from a variety of sources, such as atmospheric interference, sensor constraints, and changes in surface reflectance [12]. By selecting the most pertinent bands, feature selection eliminates or reduces the influence of noisy information, producing more accurate and precise categorization outcomes. This noise reduction improves the categorization model's overall robustness, making it less sensitive to misclassifications caused by external influences. The reduction in redundancy also makes the categorization process quicker and more resource-efficient. The present study uses Human Group-based Particle Swarm Optimization, a method informed by the collective intelligence of social groups, to carry out feature selection effectively [13]. PSO replicates the cooperative behaviour of members of a group, where each member represents a possible combination of spectral bands. These "particles" move around the search space of spectral band subsets, continuously modifying their positions in accordance with their individual and shared understanding. PSO's cooperative character lets particles successfully explore and exploit the search space, combining spectral bands in ways that improve categorization accuracy while reducing noise and redundancy.
Alongside feature selection, tuning the hyperparameters of the Convolutional Neural Network architecture is crucial for obtaining optimal results and strong generalization. Hyperparameters are the important factors that control how deep-learning models are built and trained, such as learning rates, batch sizes, and the number of convolutional layers. These hyperparameters have a substantial impact on how well a CNN can recognize and understand complicated patterns in the data being processed. For example, the learning rate determines the step size during optimization and affects the algorithm's convergence rate and quality. The batch size influences both computational efficiency and generalization by affecting how the model processes data and updates parameters during training [14]. Additionally, the number of convolutional layers directly determines the complexity and depth of the CNN, with a greater number potentially permitting the capture of more complicated features. It is therefore essential to optimize these hyperparameters to ensure that the CNN performs at its best while minimizing the danger of overfitting, which occurs when the model becomes excessively specific to the training data. This study presents a complete technique that integrates feature selection and hyperparameter optimization, two essential components of satellite image evaluation. The resulting hybrid method, which incorporates CNNs, ACO, and HPSO, has the potential to transform satellite image processing. The combination of PSO for feature selection and ACO for hyperparameter optimization results in an integrated structure that makes use of both the representational strength of deep learning systems and the collective knowledge of optimization algorithms. This method improves categorization accuracy while also strengthening resilience against complicated or noisy satellite imaging information. The hybrid HPSO-ACO-CNN strategy marks a substantial advancement in the effort to fully use satellite images for important applications in a variety of fields. Land cover and land use categorization capabilities, which are essential for environmental monitoring, urban planning, agriculture, and disaster management, are set to become more precise and dependable as a result of this technology. The study advances the state of the art in satellite image evaluation by demonstrating the efficacy of this framework via thorough investigations and findings. Its potential significance extends beyond research by providing real-world insights that can enable experts to reach better decisions about managing the planet's resources and dealing with difficult problems. In short, the study represents a crucial step toward releasing satellite imagery's hidden potential for tackling pressing problems that the planet currently faces and will confront in future generations.

The key contributions and organization of the paper are as follows: Section IV covers the proposed technique for the categorization of Land Use and Land Cover from satellite images. Section V illustrates the performance measures, summarises the findings, and compares the method's performance with previous techniques. Section VI presents the conclusion.
II. RELATED WORKS

The positive consequences of merging Sentinel-1 and Sentinel-2 imagery for land use land cover categorization with U-Net highlight an evolving understanding of the combinatorial benefits of multi-sensor information fusion. The benefits of using both Sentinel-1's radar data and Sentinel-2's optical information for improved LULC categorization have been studied in this field. Sentinel-1's radar information is useful for assessing land surfaces in a variety of environmental circumstances, since it operates in all weather conditions and can observe through cloud cover. Conversely, Sentinel-2's optical data offers high-resolution, multispectral information that excels at capturing specific spectral fingerprints, notably in differentiating between plant varieties and urban characteristics. U-Net, a deep learning architecture renowned for its capacity for semantic segmentation, has evolved as a promising method for combining these complementary information sources. In addition to increasing categorization accuracy, it also increases the resilience of LULC mapping by reducing the drawbacks of employing the sensors separately, such as the sensitivity of optical information to cloud cover and the sensitivity of radar information to certain varieties of land cover and surface roughness. Although this fusion strategy has a lot of potential, there are still difficulties in processing the volume of information, integrating multiple information modalities, and efficiently optimizing the deep learning algorithm's parameters [15].

The observation of land cover and land use modifications employing GIS and remote sensing methods in human-induced mangrove forest regions of Bangladesh has produced a significant body of research highlighting the essential function of these technologies in evaluating environmental changes in this important ecosystem. Investigations in previous years have demonstrated how well Geographic Information Systems paired with remote sensing information, especially from satellites like Landsat and Sentinel, can capture and analyze alterations in mangrove forest cover, extent, and health. These methods have provided benefits including extensive coverage, repeated data gathering, and the capacity to distinguish between different land cover classes, which are crucial for tracking human-induced changes in mangrove ecosystems. Indicators like the Normalized Difference Vegetation Index and spectral characteristics have been used by researchers to recognize and categorize modifications, facilitating the discovery of factors such as urbanization, aquaculture growth, and deforestation that impact these ecosystems. However, issues with information quality, image interpretation, and the requirement for fine-scale observation to detect minor modifications still exist. Even so, the combination of remote sensing and GIS offers a lot of potential for improving comprehension of the dynamics and preservation of Bangladesh's human-induced mangrove forests [16].
Understanding the link between land cover and urban heat dynamics via remote sensing is important, as demonstrated by land-cover categorization and its effects on Peshawar's land surface temperature [17]. Previous studies have emphasized the benefits of using satellite imagery, especially Landsat and MODIS data, to map different types of land cover and measure how much they affect LST. Studies have shown how important land cover is in controlling urban microclimates, with impervious surfaces like buildings and roads causing higher LSTs that are frequently linked to the urban heat island effect. The influence of land-cover modifications on LST variations in Peshawar has been examined using a variety of categorization approaches, including supervised and unsupervised techniques, together with GIS tools. However, issues with information quality, spatial resolution, and the requirement for high-temporal-resolution statistics to record cyclical temperature fluctuations still exist. Nevertheless, these studies contribute to the region's development strategy and climate adaptation initiatives by offering significant insight into the effects of urbanization-related land-cover modifications and their consequences for Peshawar's thermal environment.

A variety of research has employed remote sensing technologies to track and understand the dynamic character of urban settings, as is evident in the study of urban land cover and land use changes employing Random Forest categorization of Landsat time-series information. With its constant and wide-ranging coverage, Landsat satellite data has proven to be a useful tool for tracking modifications to urban land cover over time. Random Forest, a machine learning algorithm, has been used in several studies due to its efficacy in categorizing different types of land cover in metropolitan settings. These studies have demonstrated the benefits of Random Forest, including its capacity to manage complicated spectral and temporal structures, account for noisy input, and produce reliable and accurate categorization outcomes. The investigations covered a variety of urban applications, such as detecting land use changes, assessing urban expansion, and characterizing urban heat islands, demonstrating the adaptability of this method. This work also emphasizes the rising significance of monitoring modifications to urban land cover and land use, given their considerable effects on urban sustainability, resource management, and environmental quality. Urbanization, a global trend, has caused fast and occasionally uncontrolled expansion, changing the extent of impervious surfaces, deforestation, and other aspects of the land cover. These changes have wide-ranging effects, including higher energy use, altered microclimates, and ecological disturbances. To gain statistical insight into these urban transitions, academics have increasingly resorted to remote sensing and machine learning approaches like Random Forest. Although the approach has many benefits, some problems remain, such as the necessity for strong validation techniques, complicated information pre-processing, and the tuning of model variables. The substantial body of research in this area nevertheless emphasizes the crucial role that remote sensing and Random Forest categorization play in dealing with the changing dynamics of urban land cover and land use transformations [8].
The evaluation of deep learning approaches for the challenges of satellite imagery analysis highlights the rising interest in utilizing deep-learning techniques for land use and land cover categorization in Southern New Caledonia [18]. Convolutional neural networks, in particular, have shown potential in automating LULC categorization tasks. Among other benefits, they have the capacity to extract features from raw information, adjust to heterogeneous landscapes, and scale to multi-spectral and hyperspectral datasets. The complex and changing landscapes of Southern New Caledonia require effective methods for identifying spatial interdependence within images. The necessity for significant labeled training data, problems with algorithm interpretability, vulnerability to overfitting, computing resource requirements, and the need to balance the collection of local and contextual data remain problematic. Nevertheless, deep learning constitutes a substantial development in LULC categorization and has the potential to enhance knowledge of, and the ability to manage, the dynamics of land cover and land use in Southern New Caledonia.

Machine learning approaches have also been used to forecast land cover and land use from satellite photos, underscoring the increasing interest in utilizing cutting-edge technology for precise and efficient land categorization. Satellite imagery has evolved into a vital resource for tracking and comprehending land-cover modifications, and machine learning techniques have proven to be effective instruments in this field [19]. Several machine learning methods have been utilized in multiple studies to estimate the types of land cover and land use from satellite imagery. These methods have a number of benefits, including the capacity to handle big datasets, capture complicated spatial patterns, and adapt to various topographies. The breadth of research on machine-learning-based land-use forecasting has been demonstrated across a variety of applications, from urban planning and environmental monitoring to agriculture and disaster management. This work also emphasizes the significance of precise land-use land-cover forecasts in tackling current issues like urbanization, deforestation, and environmental degradation. The capacity to observe and simulate land-cover modifications is essential for informed decision-making and effective resource utilization as the global population continues to grow, urbanize, and transform landscapes. By simplifying the categorization procedure and supplying accurate and timely data, machine learning approaches have been important in expanding our knowledge of these shifts. The necessity for high-quality labeled information, model generalization across diverse locations, and the interpretability of complicated machine learning systems remain obstacles despite these benefits. Nevertheless, the body of research in this area highlights the potential of machine learning approaches to improve the ability to anticipate and efficiently react to modifications in land cover and land use.
An increasing number of researchers are interested in using advanced neural network topologies to improve the precision and effectiveness of land cover categorization, as shown by examinations of deep learning frameworks for patch-based land cover categorization. Due to their ability to extract pertinent characteristics from image patches, deep learning architectures, in particular Convolutional Neural Networks, have become increasingly popular in recent years, which makes them ideal for classifying land cover from satellite or aerial images [20]. The benefits of CNNs have been demonstrated in research, particularly their capacity to deal with complicated land cover patterns, to record spatial and spectral data concurrently, and to adapt to multi-spectral and high-resolution images. This research examined a variety of deep learning architectures, including model topologies, hyperparameter tuning, and transfer learning, and has shown how they may be used to achieve the highest possible accuracy in classifying land cover. It also highlights how important precise land cover categorization is for purposes in environmental evaluation, urban development, agriculture, and disaster prevention. For decision-making and policy creation, the capacity to autonomously and accurately classify different kinds of land cover at the patch level is crucial. In this setting, deep learning architectures, which can handle massive datasets and provide real-time data, are emerging as an innovative technology. The necessity for large amounts of labeled training information, the interpretability of models, and the computing resources required for deep network training are still issues. However, the large amount of research in this area shows the enormous potential of deep learning architectures in enhancing the ability to categorize land cover and in addressing important problems of land cover and land use assessment.

The comprehensive extraction of multiscale timing dependency used in land-cover categorization with time-series remote sensing images emphasizes the growing significance of using temporal data in land cover assessment. Conventional land-cover categorization frequently employs images from a single date, which cannot accurately represent how quickly land cover varies. In contrast, time-series satellite imagery, often collected over a long period, provides an extensive amount of information for comprehending land cover dynamics. Multiscale timing dependency, which takes into account not only the spectral data but also time-dependent trends and relationships among observations, has been identified by investigators as having potential. Recurrent neural networks and other machine learning methods have been investigated to obtain thorough temporal information that increases the reliability of land cover categorization. The results of these investigations show the benefits of using time-series information in land cover research, allowing more accurate monitoring of land-cover modifications, urbanization, agricultural methods, and environmental alterations. Issues with data pre-processing, handling cloud cover, and the computing demands of analyzing large time-series datasets remain, highlighting the need for further study in this field [21].
An integrated strategy that combines nature-inspired optimization approaches with modern deep learning methodologies to improve the accuracy of land cover categorization is reflected in work on the optimum guiding whale optimization algorithm and hybrid deep learning systems for land cover and use categorization. Due to the increasing accessibility of remote sensing information and computing resources, conventional land use land cover categorization methods have experienced substantial improvements. A promising optimization method for adjusting the hyperparameters of deep learning systems is the optimum guiding whale optimization algorithm, an extension of the whale optimization algorithm. This algorithm demonstrates improved convergence and optimization characteristics and is motivated by the social behaviour of humpback whales. The method uses spatial as well as temporal data from satellite images in combination with deep learning networks to accomplish accurate LULC categorization. This work also highlights the significance of precise LULC categorization in several applications, such as urban planning, environmental surveillance, disaster preparation, and agriculture. Deep learning networks and the optimum guiding whale optimization technique are used to solve the problem of improving complicated models while taking into account the special properties of remote sensing information. Issues with model interpretability, algorithm integration, and the requirement for a large amount of labeled training information remain. Nevertheless, this method opens an innovative research area with the possibility of substantially enhancing the accuracy and effectiveness of LULC categorization, which would benefit the many fields that depend on land cover data for decision-making and policy development [22].
The previously mentioned investigations into the classification of land use and cover highlight the growing need to use a wide range of remote sensing technologies and sophisticated methodologies to evaluate land cover dynamics precisely and efficiently. These approaches have demonstrated enormous potential in their ability to offer comprehensive data on alterations in land use and cover. Their effectiveness in documenting changes in land cover and land use through time has been shown, for example, by studies undertaken in places like Peshawar, Southern New Caledonia, and the human-induced mangrove forest areas of Bangladesh. Through remote sensing technology, researchers have been able to obtain broad coverage, track alterations in land cover classes, and evaluate the effects of deforestation, urbanization, and aquaculture growth on diverse ecosystems. These methods have great potential for handling modern issues like disaster preparedness, urban planning, environmental protection, and agricultural management, where precise land cover data is essential. However, these promising approaches have several difficulties and disadvantages. One of the main obstacles is the significant need for labeled training information, which can be labour-intensive and time-consuming to obtain, particularly for extensive land cover mapping projects. Furthermore, the reliability and interpretability of remote sensing information remain a cause for concern, because they can affect the precision and dependability of classification results. Managing and analyzing large time-series datasets can also present logistical and technological difficulties. To fully realize the potential of these techniques and ensure their successful implementation in real-world scenarios, where accurate and timely land cover information is critical for well-informed decision-making and efficient resource management, it is essential that these obstacles be addressed.

III. PROBLEM STATEMENT

From the above literature review, it is observed that among the most important tasks in environmental monitoring, urban planning, and natural resource management is classifying land cover and land use using satellite information. A critical component of decision-making processes is the correct categorization of land cover kinds, such as forests, urban areas, agricultural fields, and water bodies. The complexity and size of current satellite imaging information is frequently excessive for conventional strategies of land cover and land use categorization to manage [23]. This study suggests a unique method for addressing the issue by fusing the strength of hybrid HPSO and ACO with a CNN for land use and land cover categorization. The goal is to create a categorization system that is accurate and efficient, capable of autonomously analyzing satellite images and categorizing different types of land cover. By combining PSO and ACO with human input, the optimization procedure is regulated by human knowledge. By incorporating domain-specific expertise, this "human in the loop" method can produce superior outcomes.
IV. PROPOSED HPSO-ACO-CNN

The approach used in this study comprises data gathering, pre-processing, feature selection utilizing a human group-based PSO algorithm, and CNN hyperparameter optimization employing ACO. Data gathering involves building the EuroSAT dataset, which consists of 27,000 annotated Sentinel-2 satellite images representing ten distinct land use and land cover classifications over 13 spectral bands. After data collection, normalization and histogram equalization were used in image pre-processing to improve the quality of the images. PSO was used in feature selection to intelligently identify pertinent spectral bands, while ACO was used to optimize CNN hyperparameters, including batch size and learning rate. The CNN model, created for LULC categorization, was trained and optimized via ACO-DL, a modification of ACO that allows simultaneous optimization of several parameters. This hybrid approach integrates optimization methods with deep learning for efficient satellite image processing, with the goal of increasing LULC categorization accuracy. Fig. 1 shows the overall structure of the proposed framework.

A. Data Collection

The study uses an innovative set of satellite images for the categorization of land cover and land use. The EuroSAT dataset consists of 27,000 annotated Sentinel-2 image patches distributed over 10 distinct classes; the patches are 64 by 64 pixels in size. The satellite images were chosen from European Urban Atlas cities. The dataset, which includes thirteen spectral bands and a significant number of two to three thousand image patches per class, differs from earlier datasets in that it enables the investigation of multimodal fusion strategies across these bands. This is a particularly challenging problem when deep neural networks are to be used for categorization. The dataset additionally relies on publicly available Earth observation information, opening up a variety of novel real-world applications. In accordance with the coverage of the European Urban Atlas, the areas included in the dataset were collected from cities distributed over thirty different European nations. Additionally, each image patch's geo-information is made publicly accessible together with the labeled EuroSAT dataset. In order to capture as much variation of the covered land cover and land use classifications as feasible, the study also extracted images taken throughout the year [24].

B. Image Pre-processing using Min-Max Normalization

To improve the quality of the satellite images, normalization and histogram equalization techniques are applied after data collection. Image normalization, or contrast stretching, alters the range of pixel values to enhance the visual quality of the satellite image collection. Eq. (1) is the well-known min-max normalization formula that generates a new image with values spanning from 0 to 1:

I_norm = (I − I_min) / (I_max − I_min) × (new_max − new_min) + new_min    (1)

where the original satellite image is denoted as I, the minimum and maximum intensity values, which range from 0 to 255, are represented as I_min and I_max, respectively, the image after min-max normalization is denoted as I_norm, and the new minimum and maximum values are denoted as new_min and new_max. The histogram equalization approach is then applied to enhance image quality without eliminating any of the image's edges, patches, or points. It adjusts the mean brightness of the normalized images to the midpoint of the allowable range, while maintaining the original brightness prevents intrusive artifacts from appearing in the images.
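A minimal numpy sketch of these two pre-processing steps is given below; the exact histogram-equalization variant used by the authors is not specified, so a plain CDF-based equalization is assumed, and the random band stands in for real Sentinel-2 data.

```python
import numpy as np

def min_max_normalize(band, new_min=0.0, new_max=1.0):
    # Eq. (1): contrast stretching of one spectral band to [new_min, new_max]
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) * (new_max - new_min) + new_min

def histogram_equalize(band, bins=256):
    # equalize a normalized band through its empirical CDF
    hist, edges = np.histogram(band.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(band.ravel(), edges[:-1], cdf).reshape(band.shape)

# a hypothetical 64x64 patch band with raw intensities in [0, 255]
band = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)
band = histogram_equalize(min_max_normalize(band))
```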
C. Feature Selection using Human Group-based Particle Swarm Optimization

In this work, feature selection is done using Human Group-based Particle Swarm Optimization, which has the distinct benefit of simulating human cognitive capacities in optimization problems. By adding a human-guided component, HPSO improves upon the collective intelligence of particle swarm optimization, in which particles stand for potential subsets of characteristics. This human-in-the-loop technique supplies heuristics and important domain experience that guide the feature selection process, increasing its efficiency and context awareness. By fusing human understanding with the computational power of PSO, HPSO ensures the selection of the most important characteristics while minimizing computational overhead. The method is especially well-suited to difficult tasks like satellite image processing, where domain knowledge is essential for precise feature selection. It improves the quality of the selected characteristics, which in turn improves the efficiency of deep learning models such as Convolutional Neural Networks. After generating the characteristic vectors, characteristics are chosen employing the human group-based PSO method. PSO is a population-based search algorithm that simulates bird flocking behaviour. Eq. (2) is employed to modify each particle's velocity and position in order to produce new locations:

v_i(t+1) = w v_i(t) + c_1 r_1 (pbest_i(t) − x_i(t)) + c_2 r_2 (gbest(t) − x_i(t)) ,  x_i(t+1) = x_i(t) + v_i(t+1)    (2)

where t stands for the iteration number, r_1 and r_2 are random real numbers in [0, 1], w represents the acceleration (inertia) weight, x_i(t) is the particle's position, pbest_i(t) is its local best position, and gbest(t) is the global optimal position of the swarm. In PSO, an adaptive uniform mutation is used to increase convergence and simplify implementation after the HGO method has been used to initially affect the particles. A discrete multi-label is first converted into a continuous label using HGO. The employed approach locates the obtained feature vectors in accordance with the decision value computed from the particle's location vector. The feature selection algorithm's capacity for exploration is improved by the adaptive uniform mutation. The variety and choice of the mutation applied to each particle are controlled by a nonlinear function, updated at each cycle via Eq. (3), where m represents the iteration number and M is the maximum number of iterations; the function's value tends to fall as the number of iterations rises. If its value is greater than a random number drawn from [0, 1], the mutation selects s elements at random from the particle. The mutation value of the selected items in the search space is then reset, with s serving as an integer that limits the mutation range. Eq. (4) mathematically defines the value of s.
The human group-based PSO algorithm proceeds step by step as follows.

Step 1: Establish the particle swarm's initial parameters, including (a) the number of iterations M, the swarm size, and the archive size; (b) initialize the particle locations; (c) evaluate the objective of each particle. A non-dominated solution is then saved into the archive.

Step 2: The personal best position of each particle is updated using the Pareto dominance relationship: if the new position dominates the previous personal best position, the personal best is set to the new position; otherwise it remains unaltered.

Step 3: Choose the global best position from the archive according to the diversity of solutions. To choose the particle's global optimal position, a binary tournament is employed after first calculating the crowding distance values.

Step 4: The decision value is then initialized depending on the global best position. The decision of each feature vector c is a binary value. Each feature vector c is associated with a fitness value V(c), regarded as the weighted average of T stochastic contributions. The contributions depend on the corresponding decision as well as on K other decisions; the integer index K denotes the total number of variables that interact with a decision value. A parameter P ∈ [0, 1], representing the probability that a member has been informed of a contribution to the decision, determines the knowledge level of the member. Each member n determines an individual estimated fitness using Eq. (6), depending on its degree of knowledge, where the knowledge matrix is the matrix whose generic entry takes the value one with probability P and zero with probability 1 − P.

Step 5: Eq. (7) is employed to modify the particle's location and velocity in accordance with the decision value.

Step 7: Utilizing the crowding distance approach, update the external archive.

Step 8: Examine the termination condition: if the algorithm has completed the maximum number of iterations, the process terminates; otherwise, return to Step 2.

The HGO algorithm's fitness function is used to remove the most deficient particles.
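To make the mechanics concrete, the following simplified sketch applies the PSO update of Eq. (2) with a decaying uniform mutation to band selection. It is a single-objective caricature of the algorithm above: the Pareto archive, crowding distance, and the human knowledge matrix of Eq. (6) are deliberately omitted, and `evaluate_subset` is a hypothetical scoring callback.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BANDS, SWARM, ITERS = 13, 20, 50
W, C1, C2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

def fitness(mask, evaluate_subset):
    # hypothetical wrapper: score of a band subset (e.g. validation accuracy
    # of a cheap classifier) minus a small penalty on subset size
    if not mask.any():
        return -np.inf
    return evaluate_subset(mask) - 0.01 * mask.sum()

def hpso_select(evaluate_subset):
    X = rng.random((SWARM, N_BANDS))                 # continuous positions
    V = np.zeros_like(X)
    pbest, pbest_f = X.copy(), np.full(SWARM, -np.inf)
    gbest, gbest_f = X[0].copy(), -np.inf
    for m in range(ITERS):
        for i in range(SWARM):
            f = fitness(X[i] > 0.5, evaluate_subset)  # threshold -> binary decision
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, X[i].copy()
            if f > gbest_f:
                gbest_f, gbest = f, X[i].copy()
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = W * V + C1 * r1 * (pbest - X) + C2 * r2 * (gbest - X)   # Eq. (2)
        X = np.clip(X + V, 0.0, 1.0)
        lam = 1.0 - m / ITERS                         # decaying strength, cf. Eq. (3)
        mutate = rng.random(X.shape) < 0.05 * lam     # adaptive uniform mutation
        X[mutate] = rng.random(int(mutate.sum()))
    return gbest > 0.5

# usage with a toy evaluator that favours a fixed "informative" set of bands
target = rng.random(N_BANDS) > 0.5
print(hpso_select(lambda mask: float(np.mean(mask == target))))
```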
D. Optimizing CNN Hyperparameters using the Ant Colony Optimization Strategy

The study utilizes Ant Colony Optimization to optimize numerous parameters simultaneously, which makes it a suitable method for fine-tuning Convolutional Neural Network hyperparameters. ACO is a nature-inspired optimization method that is particularly efficient at navigating intricate search spaces, and it may therefore be used to determine the best possible combination of hyperparameters for deep learning models. Its benefit is that it can investigate a large variety of hyperparameter values and adaptively modify them during optimization. ACO's probabilistic method, which resembles how actual ants find the shortest path in their natural environment, is extremely beneficial when working with hyperparameters, as it facilitates effective exploration and exploitation of the parameter space. This work uses ACO to achieve optimal generalization, decreased overfitting risk, and enhanced CNN performance; as a result, it provides a reliable method for improving the model's accuracy in applications like satellite image categorization. ACO, developed by Marco Dorigo in 1992, is a standard heuristic swarm intelligence method that uses probabilistic calculations to identify the best planning path. ACO is a positive-feedback system in which the ants ultimately concentrate on the path with the highest pheromone concentration, thereby obtaining the best possible outcome under the regulatory mechanism.

The Convolutional Neural Network in this study's deep learning framework was optimized using ACO. The study also altered conventional ACO by using multi-type ants to improve different variables simultaneously. The total number of ant types in ACO-DL equals the number of parameters to be optimized. As a result, ACO-DL was able to optimize several model parameters simultaneously in order to produce the best possible value of the objective function. For the CNN framework, ACO optimized the batch size (A) of the network and the initial learning rate (L) of Adam. The objective function F(A, L) selected was the prediction accuracy rate. The admissible intervals of values for A and L are given in Eq. (9) and Eq. (10). The fundamental concept is to iteratively discover the shortest path to the best solution of the objective function. Meanwhile, the study established the following two termination criteria to ensure the efficiency of the optimization algorithm: 1) no apparent increase in accuracy; 2) reaching the maximum number of iterations.
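A compact sketch of this tuning loop is shown below. The candidate grids for A and L, the pheromone-deposit rule, and the toy objective are all assumptions for illustration; the accuracy-stagnation stopping criterion is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
BATCHES = [16, 32, 64, 128]                # candidate batch sizes A (assumed grid)
LRATES  = [1e-4, 3e-4, 1e-3, 3e-3]         # candidate Adam learning rates L
ANTS, ITERS, RHO = 8, 20, 0.1              # colony size, iterations, evaporation

def aco_tune(F):
    # one pheromone table per parameter type, mimicking the multi-type-ant
    # ACO-DL idea of optimizing several parameters simultaneously
    tau_a, tau_l = np.ones(len(BATCHES)), np.ones(len(LRATES))
    best, best_f = None, -np.inf
    for _ in range(ITERS):
        for _ in range(ANTS):
            ia = rng.choice(len(BATCHES), p=tau_a / tau_a.sum())
            il = rng.choice(len(LRATES), p=tau_l / tau_l.sum())
            f = F(BATCHES[ia], LRATES[il])     # objective: e.g. validation accuracy
            tau_a[ia] += f                     # deposit proportional to quality
            tau_l[il] += f
            if f > best_f:
                best_f, best = f, (BATCHES[ia], LRATES[il])
        tau_a *= 1.0 - RHO                     # pheromone evaporation
        tau_l *= 1.0 - RHO
    return best, best_f

# usage with a toy objective peaked at A = 64, L = 1e-3
toy_F = lambda A, L: 1.0 - abs(np.log2(A / 64)) / 8 - abs(np.log10(L / 1e-3)) / 4
print(aco_tune(toy_F))
```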
E. Classification Using a Convolutional Neural Network The CNN is among the most efficient and productive deep learning techniques. Because CNNs can categorize intricate contextual images, they are widely used to classify remote sensing data, and they typically require no hand-crafted intermediate steps to produce an output prediction. CNNs are feed-forward neural structures that exploit strongly local correlations by imposing a local connectivity pattern between neurons in neighbouring layers of the network. Their architecture consists of multiple convolutional layers, pooling layers (including max pooling), and fully connected layers. Every convolution stage computes a weighted sum of the prior feature map using a kernel and passes the result through an activation function to obtain the output. In this way, the kernel captures neighbourhood correlations while remaining consistent across each region of the data. The final feature map is produced at the lowest attainable unit level, and the stacked convolutional and pooling layers are finally interfaced into a coherent unit through a fully connected network of neurons. Eq. (11) and Eq. (12) give the convolution operation. Following feature extraction, a downsampling or pooling operation is employed to aggregate features so that they are resistant to moderate translations and deformations; it is given in Eq. (13), where the left-hand side denotes the pooling feature map of the Qth input feature map at the mth layer and the operator denotes the pooling function. Maximum, average, L2, overlapping, and spatial pyramid pooling formulae are all used in CNNs. An activation function is applied to speed up learning and provide a decision mechanism for a complicated feature map; these activation functions supply both the non-linearity of the features and an accelerated learning rate. ReLU, sigmoid, tanh, maxout, and SWISH activation functions all provide non-linearity and help resolve the vanishing-gradient issue. In Eq. (14), the symbols denote the activation function, the convolution output, and the transformed output, respectively. The two main design decisions for a CNN that deliver superior efficiency and avoid overfitting concern training and optimization. The difficulty of training generally grows with the volume of information, and the framework struggles when a novel or unfamiliar dataset is presented. The resulting overfitting can be addressed with dropout and batch normalization. The dropout mechanism disables a large number of nodes at the end of each training cycle. Batch normalization imposes a zero mean and unit standard deviation on every activation in the given layer for every mini-batch, which enhances overall accuracy, strengthens the system's resistance to overfitting, and speeds up the convergence of gradient descent. The fully connected layer, the last component of the CNN framework as depicted in Fig. 2, combines every component into an additional layer to perform categorization. It gathers data from the feature extraction phase and analyses the output of every preceding step; as a consequence, classification is accomplished by nonlinearly combining a set of chosen features.
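As an illustration of this layer stack, here is a minimal PyTorch sketch; the framework choice and the exact layer counts and filter sizes are our assumptions, not the paper's. It follows the pattern described above (convolution, batch normalization, ReLU, max pooling, then dropout and a fully connected classifier), sized for 13-band 64x64 EuroSAT patches.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Conv -> BatchNorm -> ReLU -> MaxPool blocks, then dropout and a
    fully connected classifier, mirroring the layer types described above."""
    def __init__(self, in_bands=13, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),                    # overfitting control
            nn.Linear(64 * 16 * 16, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of 13-band 64x64 patches:
logits = SmallCNN()(torch.randn(4, 13, 64, 64))  # -> shape (4, 10)
```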
V. RESULTS AND DISCUSSION The study first acquired the EuroSAT dataset, made up of twenty-seven thousand annotated Sentinel-2 satellite images covering thirteen spectral bands and ten land cover and land use classes. To improve the quality and usability of these images for subsequent evaluation, pre-processing techniques such as normalization and histogram equalization were applied. The main contribution of this research is the combination of CNNs with PSO and ACO optimization to enhance the accuracy of land cover and land use categorization. PSO was utilized to choose the most pertinent spectral bands, decreasing data noise and redundancy. ACO tuned crucial CNN hyperparameters, including batch size and learning rate, which improved the efficiency of the framework as a whole. The study assessed the hybrid PSO-ACO-CNN architecture on the EuroSAT dataset and contrasted its performance with that of conventional categorization techniques and standalone CNN models. A. Performance Evaluation Assessment metrics are crucial for evaluating classification success. The most frequently used measure is accuracy: the proportion of test instances that a classifier labels correctly. However, accuracy alone is not sufficient for judging a classifier, so other measures are additionally employed. Accuracy, recall, precision, and F1-score were used to evaluate the performance of the suggested technique. Each measure is defined as follows. True Positive (TP) is the number of positive instances correctly classified as positive. False Positive (FP) is the number of negative instances incorrectly classified as positive. False Negative (FN) is the number of positive instances incorrectly classified as negative. True Negative (TN) is the number of negative instances correctly classified as negative. Accuracy indicates how frequently the classifier makes the right prediction; it is the ratio of correct predictions to all predictions, as given by Eq. (15): Accuracy = (TP + TN) / (TP + TN + FP + FN). Precision measures how many of the instances predicted positive are truly positive; higher precision means fewer false positives, while lower precision means more of them. It is defined by Eq. (16): Precision = TP / (TP + FP). Recall measures the sensitivity of the classifier, i.e., how much of the relevant information it retrieves; the number of false negatives decreases as recall improves. Recall is the ratio of correctly classified positive instances to all actual positive instances, as given by Eq. (17): Recall = TP / (TP + FN). The F-measure, the harmonic mean of precision and recall, combines the two into a single score; it is given by Eq. (18): F1 = 2 x Precision x Recall / (Precision + Recall).
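Eqs. (15)-(18) translate directly into code; a minimal sketch from the confusion-matrix counts defined above:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 per Eqs. (15)-(18)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with placeholder counts (not the study's actual confusion matrix):
print(classification_metrics(tp=90, fp=5, fn=8, tn=97))
```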
The area under the ROC curve (AUC) is a well-known evaluation metric for binary classification problems in machine learning and deep learning. The receiver operating characteristic (ROC) curve is a graphic representation of a binary classifier's efficacy, and the AUC evaluates the area beneath it. In a binary classification problem, the classifier tries to decide whether the input belongs to the positive or the negative class, and the ROC curve plots the true positive rate against the false positive rate over a range of classification thresholds. AUC values range from 0 to 1, with higher values denoting greater efficiency: a completely random classifier has an AUC of 0.5, whereas an optimal classifier has an AUC of 1. Because it considers all possible detection thresholds, the AUC provides a single number with which to compare the performance of different classifiers. The progress of the Ant Colony Optimization algorithm over a number of iterations is shown in Fig. 5. The y-axis shows the fitness of the algorithm's generated solutions, while the x-axis shows the number of iterations (generations). In ACO, fitness refers to how well a solution addresses the problem at hand. The algorithm continually updates and improves its solutions to increase their fitness as the iterations proceed, so the graph depicts how solution fitness changes over time and, ideally, converges to an optimal or near-optimal solution. Any levelling-out in the graph's later iterations indicates that the algorithm has probably reached an optimal solution or a point of diminishing returns, while the sharp change in fitness over the early iterations signals rapid improvement. This illustration helps evaluate the algorithm's rate of convergence and its potency in locating superior solutions to the optimization challenge. The performance metrics of the HPSO-ACO-CNN hybrid deep learning model are summarized in Fig. 6, which offers the key assessment metrics for the classification task. The Accuracy statistic measures the model's overall correctness, and a high result of 99.3% shows that the model performs well. Recall (98.7%) assesses the model's ability to recognize every positive example, while Precision (99.2%) measures its capacity to correctly label predicted positives. Precision and recall are combined into the F1-Score (98.7%), which accounts for the trade-off between the two. These high values across all metrics show that the HPSO-ACO-CNN model is very accurate and dependable in its classification task, with an especially strong capacity to categorize positive instances correctly while retaining a high overall accuracy. The True Positive Rate and False Positive Rate values of the binary classification model at various threshold levels are shown in Fig. 7.
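Such TPR/FPR pairs and the resulting AUC can be obtained directly from scikit-learn; a minimal sketch with placeholder labels and scores (not the paper's actual outputs):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# y_true: ground-truth binary labels; y_score: classifier scores for the
# positive class (illustrative arrays only).
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # TPR vs. FPR per threshold
auc = roc_auc_score(y_true, y_score)               # 0.5 = random, 1.0 = ideal
print(f"AUC = {auc:.3f}")
```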
These values are frequently employed to build a ROC curve. The fraction of real positive cases that the model correctly classifies as positive is measured by the TPR, sometimes referred to as sensitivity or recall. The FPR, on the other hand, measures the percentage of real negative cases that the model misclassifies as positive. The graph displays how these rates change as the classification threshold varies from 0 to 0.6. The TPR typically rises as the threshold is relaxed, showing that the model gets better at recognizing positive cases, but frequently at the expense of a larger FPR. The ROC curve created from these results graphically illustrates the trade-off between TPR and FPR at various threshold levels, assisting in evaluating the model's classification effectiveness and in determining the best threshold for the demands of a particular application. The suggested HPSO-ACO-CNN is compared against a Deep Neural Network (DNN), a Multiclass Support Vector Machine (MSVM), and a Long Short-Term Memory (LSTM) network on the same task. Table I and Fig. 8 provide the key efficiency measures for each model, one per row: Precision measures the model's capacity to correctly label predicted positives, Recall measures its capacity to recognize all actual positives, F1-Score is a balanced metric combining precision and recall, and Accuracy denotes the overall proportion of correct predictions generated by the model. The outcomes show that the suggested HPSO-ACO-CNN model exceeds the competition with the highest values for accuracy (99.3%), precision (99.2%), and F1-Score (98.7%), demonstrating its superior performance on the task at hand. LSTM also performs well, whereas DNN and MSVM score somewhat lower on these criteria. Together, these measures offer an insightful comparison of the models' success on the classification task, with higher values representing better effectiveness. A comparison of datasets evaluated with the suggested approach against other current methodologies is shown in Table II and Fig. 9. Four criteria are used to assess effectiveness: F1-Score, Accuracy, Precision, and Recall. The first dataset, the 2014 Landsat 8 imagery, yields 89% accuracy, 87% recall, 87% precision, and an 88% F1-Score. The second dataset, Landsat 5 Thematic Mapper imagery, performs notably better, with an F1-Score of 94%, accuracy of 94.17%, precision of 95%, and recall of 96%. The suggested EuroSAT dataset performs outstandingly, achieving 98.7% recall, 99.2% precision, 99.3% accuracy, and a 98.7% F1-Score. These findings show that the suggested EuroSAT configuration outperforms the other datasets on all four criteria, indicating that it is the most effective alternative for the given objective, namely the categorization and analysis of satellite images. Variability in data characteristics, including resolution, spectral bands, and landscape variety, may explain the differences between datasets: the suggested methods may perform better on datasets whose properties, such as high resolution and diverse scenes, resemble those encountered during development on the EuroSAT dataset.
B. Discussion The findings show that the proposed HPSO-ACO-CNN model has a number of benefits over other machine learning techniques already in use for the categorization of land use and land cover from satellite images. With an accuracy of 99.3%, precision of 99.2%, recall of 98.7%, and an F1-Score of 98.7%, the HPSO-ACO-CNN model led in all assessment measures. These findings demonstrate that the hybrid technique, which combines a CNN with PSO and ACO, significantly improves the classification capability of the model. VI. CONCLUSION AND FUTURE WORKS The study concludes by presenting a novel technique that substantially enhances the precision of land use and land cover classification from satellite images. The merging of the ACO, CNN, and HPSO algorithms yields significant performance increases in the proposed HPSO-ACO-CNN model. Combining CNN hyperparameter optimization with spectral band selection produces remarkable accuracy, precision, recall, and F1-Score performance for this hybrid architecture. Results from experiments on the EuroSAT dataset demonstrate how well the HPSO-ACO-CNN model performs compared with other methods and standalone CNN models. In addition to addressing important problems in feature selection, parameter optimization, and model training, the work opens new opportunities for satellite image analysis. This novel method has great potential for a number of uses, such as sustainable land use, urban planning, environmental monitoring, and disaster management, and it highlights how deep learning techniques and optimization strategies may be combined to improve remote sensing applications. Regarding potential avenues for future research, there are several intriguing options. One is to extend the HPSO-ACO-CNN architecture to handle larger and more complex satellite image datasets, potentially incorporating additional spectral bands and land cover categories. Assessing the model's resilience and scalability across different environmental conditions and geographical areas may also yield new findings. Finally, investigating transfer learning and customizing the model for additional Earth observation tasks, such as change detection and crop monitoring, could further the broader goals of environmental conservation and sustainable land management. The training and testing accuracy of the deep learning model over a number of training epochs is summarized in Fig. 3. Each row displays the training and testing accuracy for an epoch count ranging from 10 to 100. Testing accuracy assesses the model's effectiveness on new or validation data, whereas training accuracy shows how well the model fits the training data it was shown. Both training and testing accuracy increase as the number of training epochs rises, suggesting that the framework is learning from the data and becoming more proficient at prediction. By the end of 100 epochs the model attains exceptionally high accuracy on both the training and testing datasets, indicating that it has learned to generalize successfully to new, unseen data; the graph shows the growth of the model's effectiveness as training progresses. TABLE I. COMPARISON OF PERFORMANCE METRICS OF PROPOSED METHOD WITH OTHER EXISTING APPROACHES.
TABLE II. COMPARISON OF DATASETS OF PROPOSED METHOD WITH OTHER EXISTING APPROACHES. Fig. 8. Comparison of performance metrics of the proposed method with other existing approaches. Fig. 9. Comparison of datasets of the proposed method with other existing approaches. The model excels at accurately detecting positive instances while reducing false positives and false negatives, as seen from its excellent precision and recall scores. Precise categorization of land cover and land use is essential in applications like environmental monitoring and disaster management. While DNN and MSVM are reasonable models in comparison, they fall short of HPSO-ACO-CNN's performance. Although the LSTM model exhibits comparable performance, HPSO-ACO-CNN stands out because of its greater accuracy and precision. These results illustrate the effectiveness of combining deep learning methods with optimization algorithms, emphasizing the potential for more precise and reliable mapping of land use and land cover in the context of sustainable land management and environmental protection.
12,261.2
2023-01-01T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering" ]
An SOM-Like Approach to Inverse Kinematics Modeling Robot kinematics modeling has been one of the main research issues in robotics research. For real-time control of robotic manipulators with a high degree of freedom, a computationally efficient solution to the inverse kinematics modeling problem is required. In this paper, an SOM-like inverse kinematics modeling method is proposed. The principal idea behind the proposed modeling method is the use of a first-order Taylor series expansion to build the inverse kinematics model from a set of training data. The workspace of a robot arm is discretized into a cubic lattice consisting of Nx x Ny x Nz sampling points. Each sampling point corresponds to a reciprocal zone and is assigned to one neural node, which stores four data items (the coordinate position vector, the template position vector, the joint angle vector, and the Jacobian matrix) describing the first-order Taylor series expansion of the inverse kinematics function at that sampling point. The proposed inverse kinematics modeling method was tested on a 3-D printed robot arm with 5 degrees of freedom (DOF), and its performance was evaluated on two simulated examples. The average approximation error could be decreased to 0.283 mm in the 200.0 mm x 200.0 mm x 72.0 mm workspace and 0.25 mm in the 200.0 mm x 200.0 mm workspace. Publication History: Received: January 14, 2017; Accepted: February 16, 2017; Published: February 18, 2017. Introduction Robot kinematics modeling has been one of the main research issues in robotics research. It can be divided into forward kinematics and inverse kinematics. Forward kinematics refers to the calculation of the position and orientation of an end effector in terms of the joint angles: the Cartesian position x of the end effector is computed from the joint angle vector θ, assuming there are n joints in the joint configuration. Inverse kinematics refers to finding the transformation from the position of the end effector in the external Cartesian position space to the joint angles in the internal joint space. While there is always a straightforward solution to forward kinematics, the solution to inverse kinematics is usually more difficult, complex, and computationally expensive. For real-time control of robotic manipulators with a high degree of freedom, a computationally efficient solution to the inverse kinematics modeling problem is one of the main requirements.
Approaches to the inverse kinematics problem can be roughly categorized into four classes: the analytical approach (e.g., [1]-[5]), the numerical approach (e.g., [6]-[11]), the computational intelligence-based approach (e.g., [12]-[20]), and the lookup table-based approach (e.g., [21]-[23]). While the analytical approach solves the joint variables analytically according to given configuration data to provide closed-form solutions, the numerical approach provides a numerical solution (e.g., using the Jacobian matrix of the forward kinematics function to approximate the optimal joint angles [24]). Real-time applications usually prefer closed-form solutions over numerical solutions, because the latter either require heavy computation or fail to converge when a singularity exists. The computational intelligence-based approach provides an alternative solution to the inverse kinematics problem [12]-[20]; many computational intelligence-based methods are based on the self-organizing feature map (SOM) [12]-[14], [16]-[20]. Recently, the lookup table-based approach has been introduced to solve the inverse kinematics problem due to its simplicity [21]-[23]. Basically, the lookup table-based approach consists of two phases: the off-line construction of the lookup table and the on-line interpolation phase. The lookup table-based approach may encounter the following problems. First of all, the amount of memory required for constructing an effective table increases as the number of joints and the resolution of the table increase. In addition, a further approximation procedure may be needed to search for a better solution once an initial table entry has been located. Without any doubt, each of the aforementioned four approaches has its advantages and limitations.
The goal of this paper is to endow a 3-D printed humanoid robot arm with the ability to position its fingertip at a target position in real time. To achieve this goal, the robot system needs a highly efficient solution to inverse kinematics modeling. In this paper we propose an SOM-like approach to solving the inverse kinematics problem. The proposed approach integrates the SOM-based approach and the lookup table-based approach. Our approach uses a Taylor series expansion to build, from a set of training data, the transformation from the position of the end effector in the external Cartesian position space to the joint angles in the internal joint space. The principal idea behind the proposed modeling method is to discretize the workspace of a robot arm into a cubic lattice consisting of Nx x Ny x Nz sampling points. Each sampling point corresponds to a reciprocal zone and is assigned to one neural node. Each neural node stores four weight vectors or data items: the coordinate position weight vector w_c, the template position weight vector w_t, the joint angle weight vector w_θ, and the Jacobian matrix W_J. All four data items can be quickly learned by the proposed modeling method from a collected training data set, which can be constructed by either the uniform discretization scheme or the real-life data generation scheme. Computing the joint angles corresponding to a target position in the workspace involves two steps. First, we search for the reciprocal zone that is closest to the target position. Second, the joint angles are approximated by the first-order Taylor series expansion of the transformation via the target position vector x_target, the joint angle vector w_θ, and the Jacobian matrix W_J within the reciprocal zone. The performance of the proposed SOM-like inverse kinematics modeling method was tested on a 3-D printed robot arm with 5 degrees of freedom (DOF). Two simulated examples were designed to test whether the robot arm could successfully position its fingertip at target positions in the workspace. This paper is organized as follows. Following this introduction is a brief review of the Taylor series expansion and the SOM algorithm. Section III gives a detailed description of the proposed SOM-like inverse kinematics modeling method. Simulation results are given in Section IV. The final section contains the discussion and conclusions. Brief Review of the Taylor Series Expansion and the SOM Algorithm The Taylor Series Expansion In mathematics, a vector-valued function f can be approximated via the first-order Taylor expansion as f(x) ≈ f(x0) + J(x0)(x - x0), (1) where x is a data point, x0 is a template point, f is the vector-valued function, and J(x0) is the Jacobian matrix at the template point x0. The Jacobian matrix is the matrix of all first-order partial derivatives of the vector-valued function, with entries J_ij = ∂f_i/∂x_j. (2) An immediate problem to be solved is the estimation of the Jacobian matrix at the template point x0. One popular method of estimating it from N+1 data pairs is the use of the Moore-Penrose generalized inverse operator. Assume we have N+1 data pairs (x_h, f(x_h)), h = 0, 1, ..., N. Rewriting Eq. (1) for each pair gives f(x_h) - f(x0) ≈ J(x0)(x_h - x0). (3)
Since we have N+1 data pairs, Eq. (3) can be stacked over all pairs into a single matrix equation: writing ΔF = [f(x1) - f(x0), ..., f(xN) - f(x0)] and ΔX = [x1 - x0, ..., xN - x0], the Jacobian is computed as J(x0) = ΔF ΔX+, where ΔX+ is the Moore-Penrose generalized inverse of the matrix ΔX. The SOM The training algorithm proposed by Kohonen for forming a self-organizing feature map (SOM) is summarized as follows [25]-[26]: Step 1. Initialization: Consider a network on a rectangular grid with M rows and N columns. Each neuron in the network is associated with an n-dimensional weight vector w_j; randomly choose values for the initial weight vectors. Step 2. Winner finding: Present an input pattern to the network and search for the winning neuron. The winning neuron at time k is found by the minimum-distance Euclidean criterion, j*(k) = argmin_j ||x(k) - w_j||, (8) where x(k) represents the kth input pattern and ||.|| indicates the Euclidean norm. Step 3. Weight updating: Adjust the weights of the winner and its neighbors using the updating rule w_j(k+1) = w_j(k) + η h_{j,j*}(k) (x(k) - w_j(k)), (9) where η is a positive constant and h_{j,j*}(k) is the topological neighborhood function of the winner neuron at time k. Step 4. Iterating: Go to Step 2 until some pre-specified termination criterion is satisfied. The Proposed SOM-Like Approach to Inverse Kinematics Modeling The goal of the proposed SOM-like approach to inverse kinematics modeling is to derive the corresponding joint angles θ for any fingertip position x. It involves two phases: (1) the off-line training phase and (2) the real-time manipulating phase. While the off-line training phase derives the inverse kinematics model for each sampling point over the discretized workspace of the robot arm from a collected training data set, the real-time manipulating phase computes the corresponding angles for a particular fingertip position in real time, based on the trained inverse kinematics model. The off-line training phase The proposed off-line training phase integrates the merits of the SOM-based approach and the lookup table-based approach. It fully utilizes the topology-preserving property of the SOM algorithm to generalize the modeling capability from collected data to unknown regions of the data space. In addition, like the table-based approach, it is able to calculate the information needed to derive the inverse kinematics without a complicated learning procedure. The principal idea behind the proposed modeling method is to discretize the workspace of a robot arm into a cubic lattice consisting of Nx x Ny x Nz sampling points. Each sampling point corresponds to a non-overlapping reciprocal zone in the workspace. For each sampling point, we store four weight vectors or data items (i.e., w_c, w_t, w_θ, and W_J) in order to quickly derive the corresponding joint angles θ for any fingertip position located inside the corresponding reciprocal zone of the sampling point via the first-order Taylor expansion. All four data items can be quickly learned in the proposed off-line training phase, which is described as follows: Step 1: Workspace specification-First, we need to specify where the workspace of the robot arm is. Assume that the workspace of the robot arm is located in the region [m_x, M_x] x [m_y, M_y] x [m_z, M_z], where m_x, M_x, m_y, M_y, m_z, and M_z are the lower and upper bounds of the workspace with respect to the X, Y, and Z axes, respectively.
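Before continuing with the training steps, note that the Moore-Penrose construction reviewed above (the stacked form of Eq. (3), which reappears as Eq. (7) in Step 5 below) reduces to a few lines of NumPy. The names and array shapes in this sketch are illustrative:

```python
import numpy as np

def estimate_jacobian(f0, x0, fs, xs):
    """Estimate J in f(x) - f(x0) ~= J (x - x0) from N data pairs via the
    Moore-Penrose generalized inverse.
    fs: (N, m) function values at the samples; xs: (N, n) sample points."""
    dF = fs - f0                        # (N, m) stacked output differences
    dX = xs - x0                        # (N, n) stacked input differences
    # Least-squares solution of dF = dX @ J.T:
    return (np.linalg.pinv(dX) @ dF).T  # J has shape (m, n)
```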
Step 2: Lattice determination-Discretize the workspace into an equidistant cubic lattice consisting of Nx x Ny x Nz template points. The more template points the workspace has, the smaller the approximation error the training phase will achieve. Accordingly, we set the SOM network structure to be a 3-dimensional lattice with network size Nx x Ny x Nz. Each neural node is then assigned to its corresponding template point and to the non-overlapping reciprocal zone around it. Each neural node then needs to store its corresponding four weight vectors: the coordinate vector of the sampling point w_c, the template position vector w_t, the joint angle vector w_θ, and the Jacobian matrix W_(k,l,m). These four weight vectors will be computed in the following steps from a set of training data. Step 3: Collecting training data-Assume the forward kinematics model of the robot arm has been developed via some forward kinematics modeling method. Based on the generated forward kinematics model, we can easily derive the position of the fingertip of the robot arm for a given combination of joint angles. The training data can be constructed by either the uniform discretization scheme or the real-life data generation scheme; if the forward kinematics model is available, we suggest the uniform discretization scheme. Let RA_i and ∆θ_i represent the physical range limit and the sampling step for the ith joint, respectively, where N_act represents the number of actuated joints; the training data are then collected by sweeping each joint over its range. The more training data we collect, the higher the performance our method will achieve. The prices paid for high performance are (1) the need for larger memory to store the corresponding four weight vectors and (2) longer training time. Step 4: Training of the SOM network-It involves the following four sub-steps. (4.1) Initialization: for each neural node, its corresponding four weight vectors are initialized. (4.2) Winner finding: present an input pattern to the network and search for the winning neuron; the winning neuron (k*, l*, m*) corresponding to the input pattern is found by the minimum-distance Euclidean criterion. (4.3) Weight updating: adjust the weights of the winning neuron using the updating rule. A flag is attached to each neural node to indicate whether the node has already won a competition: if the node has won at least one competition, its flag is set to one; otherwise, zero. After all Ns training data have been presented to the SOM network, the neural nodes with a non-zero flag value are regarded as "template nodes"; neural nodes with a zero flag value are deemed "novice nodes". All novice nodes enter the next sub-step to update their weight vectors. One thing to point out is that several neural nodes may have won more than one competition; in that case, the training datum with the smallest distance to the coordinate position vector w_c is adopted to update that node's template position vector via (15). (4.4) Updating the novice nodes' weights: in this sub-step, we fully utilize the topology-preserving property of the SOM, assuming that neural nodes have responses similar to those of their neighboring nodes. Based on this assumption, the weight vectors of a novice node can be computed from its neighboring nodes. For each novice node (k, l, m), we determine how many of its neighbors are already template nodes, considering only its 3x3x3 neighborhood. The novice nodes are then sorted in decreasing order of the number of their neighboring template nodes.
According to the sorted order, the joint angle vector w_θ of a novice node is updated as the average of the corresponding weight vectors of the template nodes in its neighborhood (and similarly for its template position vector): w_θ(k,l,m) = (1/|NS_(k,l,m)|) Σ w_θ over the template neighbors, where NS_(k,l,m) represents the set of template nodes within the 3x3x3 region. After the update, a novice node becomes a template node. This process is repeated until all novice nodes have been updated and become template nodes. Step 5: Computation of the Jacobian matrix-According to Eq. (1), if the template point is very close to the data point, the joint angles θ corresponding to a particular location x can be linearly approximated as θ ≈ w_θ + J(w_t)(x - w_t), where w_t is a template point with Jacobian matrix J(w_t). An immediate problem is how to compute the Jacobian matrix for each template point. After Step 4, the neural node located at each sampling point has been assigned a template position weight vector w_t and a joint angle weight vector w_θ. Assume each node has N_nb neighboring nodes; for example, nodes located at the eight corners have only 3 neighboring nodes, but most nodes inside the lattice have 26. We can then construct a set of N_nb data pairs for the neural node, denoted (w_t^(h), w_θ^(h)) for h = 1, ..., N_nb, and use them to estimate the corresponding Jacobian matrix via (7). These N_nb data pairs should satisfy w_θ^(h) - w_θ ≈ J (w_t^(h) - w_t), where the Jacobian matrix J is an N_act x 3 matrix, since there are N_act joints and the workspace is 3-dimensional. Via (7), the Jacobian matrix is then computed with the Moore-Penrose generalized inverse of the stacked position differences. The real-time manipulating phase After the off-line training procedure, an inverse kinematics model has been approximated via a trained SOM network with an Nx x Ny x Nz lattice structure. Each neural node stores the information needed for inference with the first-order Taylor expansion (the template position weight vector w_t, the joint angle weight vector w_θ, and the Jacobian matrix W_J within the reciprocal zone). The trained inverse kinematics model can be used to predict a set of appropriate joint angles for a specified target position vector x_target, as follows. Step 1: Initialization: Set the iteration parameter k = 0 and set the initial real position vector to the target position vector (x(0) = x_target). Step 2: Winner finding: Present the current position x to the network and search for the winning neuron. The winning neuron (k*, l*, m*) at time k is found by the minimum-distance Euclidean criterion: we search for the sampling point whose reciprocal zone is closest to the current real position vector. If k > 0, the current real position is the one calculated at the previous actuation step. Step 3: Calculating the joint angles: The joint angles are approximated by the first-order Taylor series expansion via the joint angle vector w_θ and the Jacobian matrix W_J of the winning node. Step 4: Actuating the joint angles: Let the robot arm move to the new real position according to the joint angles θ(k+1) computed in the previous step. Step 5: Termination criteria: If the new real position is close enough to the target position (i.e., the approximation error ||x_target - x_real|| < ϵ, where ϵ is a pre-specified threshold), terminate the procedure; otherwise, set k = k + 1 and go to Step 2, until a pre-specified iteration number is reached.
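The real-time phase can be sketched as below. The node dictionary layout, the `forward_kin` callback, and the exact correction step are our illustrative reading of Steps 1-5, not the authors' code:

```python
import numpy as np

def manipulate(x_target, nodes, forward_kin, eps=0.5, max_iter=200):
    """Sketch of the real-time phase: find the nearest template node,
    take a first-order Taylor step, actuate, and iterate until the error
    drops below eps. `nodes` is a list of dicts with keys 'w_c', 'w_t',
    'w_theta', 'J'; `forward_kin` maps joint angles to fingertip position."""
    # Step 1: start the winner search at the target position itself.
    win = min(nodes, key=lambda n: np.linalg.norm(x_target - n['w_c']))
    theta = win['w_theta'] + win['J'] @ (x_target - win['w_t'])
    err = np.inf
    for _ in range(max_iter):
        x_real = forward_kin(theta)               # Step 4: actuate the arm
        err = np.linalg.norm(x_target - x_real)
        if err < eps:                             # Step 5: terminate
            break
        # Steps 2-3: re-find the winner near the current real position and
        # take another first-order Taylor step toward the target.
        win = min(nodes, key=lambda n: np.linalg.norm(x_real - n['w_c']))
        theta = theta + win['J'] @ (x_target - x_real)
    return theta, err
```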
Simulation Results To test the performance of the proposed SOM-like inverse kinematics modeling method, a humanoid robot arm with 5 degrees of freedom was used as the test platform. The design of the robot arm is from the InMoov project [27]; InMoov was the first open-source 3D-printed life-size robot. Based on the open source files and a 3D printer, we implemented a humanoid robot arm with 5 degrees of freedom, as shown in Figure 1. The Denavit-Hartenberg (DH) method is the most common method for constructing the forward kinematics of a robot platform based on four parameters (the link length, link twist, link offset or distance, and joint angle) [28]. A coordinate frame is attached to each joint to determine the DH parameters; Figure 2 shows the coordinate frame assignment for the robot arm. Based on the forward kinematics, 20,800 training data points and 9,072 testing data points were generated. Figure 3 shows the workspace of these data points. The physical boundaries of the data points and the way the data points were generated are tabulated in Table 1. We then used five different network sizes to examine whether the network size influences the approximation performance. Table 2 tabulates the average approximation error, the maximum approximation error, and the computational time. The simulation was done on a PC with 4.0 GB memory, an Intel Core i7-2600 CPU @ 3.40 GHz, and the Windows 7 operating system. Two observations can be drawn: first, the larger the network size, the smaller the average approximation error; second, the larger the network size, the longer the computational time required. We then took a tradeoff between approximation error and computational efficiency to choose an appropriate network size for the following two simulated examples, which were designed to further test the performance of the proposed SOM-like inverse kinematics modeling method after the inverse kinematics model of the robot arm had been constructed. Example one: Tracking a circular helix The first simulation was designed to track a circular helix. The circular helix was sampled every ∆θ = 1°, giving 720 sampling points in total. Figure 4 shows the circular helix tracked in the 200.0 mm x 200.0 mm x 72.0 mm workspace, with an average error of 5.73 mm if the maximum iteration number is only 1. To further decrease the approximation error, the termination criterion parameter ϵ was set to 0.5 mm; the real-time manipulating phase then took about 0.0306 milliseconds to repeat the iterative procedure for 200 iterations. Finally, the average approximation error decreased to 0.283 mm. Example two: Writing the letter "B" The second simulation was designed to write the letter "B", consisting of Line 1: x = -207.7 mm, z = 638.8 mm, -493.3 mm ≤ y ≤ -293.3 mm, plus two ellipses each spanning 90° ≥ θ ≥ -90°. The ellipses were sampled every ∆θ = 1°, giving 360 sampling points in total; the line was sampled every 1 mm, giving 200 samples. Figure 5 shows the letter "B" tracked in the 200.0 mm x 200.0 mm workspace, with an average error of 6.58 mm if the maximum iteration number is only 1. To further decrease the approximation error, the termination criterion parameter ϵ was set to 0.5 mm; the real-time manipulating phase then took about 0.0175 milliseconds to repeat the iterative procedure. Finally, the average approximation error decreased to 0.247 mm.
Figure 1: The humanoid robot arm with 5 degrees of freedom used as the test platform. Figure 2: The coordinate frame assignment for the robot arm. Figure 3: The workspace of the training data and the testing data. Table 1: The physical boundaries of the robot arm. Table 2: The approximation performance of the proposed SOM-like inverse kinematics modeling method with five different network sizes.
4,884.6
2017-02-18T00:00:00.000
[ "Engineering", "Computer Science" ]
Discovering Phonesthemes with Sparse Regularization We introduce a simple method for extracting non-arbitrary form-meaning representations from a collection of semantic vectors. We treat the problem as one of feature selection for a model trained to predict word vectors from subword features. We apply this model to the problem of automatically discovering phonesthemes, which are submorphemic sound clusters that appear in words with similar meaning. Many of our model-predicted phonesthemes overlap with those proposed in the linguistics literature, and we validate our approach with human judgments. Introduction Linguists have long held that language is arbitrary, or that a word's phonetic and orthographic forms have no relation to its meaning (de Saussure, 1916). For example, there is nothing about an apple that suggests that apple is the proper word for it-this link between meaning and the representation in language is arbitrary. Arbitrariness is a defining feature of human language, and it is a key component of the design features of language proposed by Hockett (1960). Despite this, work over the last decades has revealed several exceptions to the arbitrariness of language. One such exception is iconicity, where the form of a word directly resembles its meaning. For example, Ohala (1984) showed that speakers tend to associate vowels with high acoustic frequency with smaller objects, while vowels with low acoustic frequency are associated with larger objects. In this case, speakers make a link between the phonetic form of a word and its perceived meaning because of an innate belief that smaller entities emit higher-frequency vowels while larger entities tend to emit low-frequency vowels. Similarly, Köhler (1929) and Ramachandran and Hubbard (2001) observed a non-arbitrary connection between the shapes of objects and speech sounds. American college undergraduates and Tamil speakers were presented with a jagged shape and a rounded shape and asked which is "kiki" and which is "bouba". In both groups, 95% to 98% selected the jagged shape as "kiki" and the rounded shape as "bouba", demonstrating that the human brain connects sounds to shapes in a consistent way. D'Onofrio (2014) posits that the rounded shape is commonly named "bouba" since the mouth forms a rounded shape in producing the word, whereas pronouncing "kiki" requires a tighter, more angular mouth shape that seems more apt for the jagged object. In this case, there is a strong, non-arbitrary link between the articulatory properties of the sound and their perceived meaning. Phonesthemes are another exception to the arbitrariness of language. Phonesthemes are non-compositional, submorphemic phonetic units that consistently occur in words with similar meanings. For example, the word-initial gl- occurs at the beginning of many English words relating to light or vision, like glint, glitter, gleam, glamour, etc. (Hutchins, 1998; Bergen, 2004). The work of Hutchins (1998) includes a compilation of 46 phonesthemes proposed by linguists. There is a body of previous work suggesting that phonesthemes are units in the mental lexicon of native speakers. For example, the work of Hutchins (1998), Magnus (2000), and Bergen (2004) uses priming experiments and other methods from psycholinguistics to demonstrate that phonesthemes significantly affect native speaker reaction times in a range of language processing tasks.
In another line of work, Otis and Sagi (2008) and Abramova and Fernández (2016) verify phonesthemes by analyzing whether the words containing a given phonestheme are more semantically similar than expected by chance, where semantic similarity is derived from a distributional semantic model. While there has been much work in verifying previously proposed phonesthemes, there has been little work on automatically discovering new ones. In this work, our goal is to identify the likely phonesthemes of a language from a collection of semantic vectors. We do this by identifying the character or phoneme sequences that are predictive of word meaning by training a model to predict word vectors from subword features. Then, we use standard feature selection techniques to find a subset of features that best predict the vectors; this subset of features contains the model-predicted phonesthemes. Lastly, we validate the model-predicted English phonesthemes with human judgments and also find that many of our predicted phonesthemes overlap with those documented in previous work. Method To extract phonesthemes from a set of vectors, we want to find submorphemic units (e.g., character or phoneme n-grams) that are highly predictive of word meaning. We approach this problem through the lens of feature subset selection: given a model capable of predicting semantic vectors from submorpheme information, our goal is to select the subset of submorphemes (model features) that are most predictive. Intuitively, if a submorpheme is especially predictive of the word vectors, then it may be a meaning-bearing phonestheme. We use linear regression to predict word vectors from binary feature vectors that encode the submorphemes occurring in a surface form. We use sparse regularization to select relevant features from this model, which enables it to automatically choose a subset of the submorpheme features that predict the vectors (our predicted phonesthemes). Specifically, we regularize our linear regression model with the elastic net (Zou and Hastie, 2005). We used scikit-learn (Pedregosa et al., 2011) to train our models, and we tune the L1 and L2 regularization strengths on held-out error in 5-fold cross-validation. Mitigating the Effect of Morphemes A principal concern is that the model will detect morphemes rather than phonesthemes. Many past studies on the relationship between form and meaning in language (Shillcock et al., 2001; Monaghan et al., 2014; Gutiérrez et al., 2016; Dautriche et al., 2017) mitigated this concern by only considering monomorphemic words, discarding a large fraction of the lexicon in the process. We take a different approach to this problem by proposing a two-step model designed to mitigate the effect of morphemes. We begin by training an unregularized linear regression model to predict semantic vectors from morpheme-level features. Then, we use the residuals of this first-stage morpheme-level model as the new target vectors for the sparsely regularized phonestheme extraction model. This removes the components of the word vector that are predictable from morpheme-level information, leaving only the aspects of word meaning not covered by morphology. We use the morphological analyses in the CELEX lexical database (Baayen et al., 1996) to compile a list of morphemes, which is used to create the morpheme-level feature vectors. We also use this list to remove any morphemes that may appear in the final model output.
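A minimal sketch of the two-stage procedure with scikit-learn, using random stand-in data (shapes are reduced for speed); the multi-task elastic net is one way to select features jointly across all vector dimensions, which may differ from the authors' exact setup:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, MultiTaskElasticNetCV

rng = np.random.default_rng(0)
n_words, n_morph, n_sub, dim = 500, 50, 80, 20  # illustrative, reduced shapes
X_morph = rng.integers(0, 2, (n_words, n_morph)).astype(float)  # morpheme features
X_sub = rng.integers(0, 2, (n_words, n_sub)).astype(float)      # candidate phonesthemes
Y = rng.normal(size=(n_words, dim))                             # word vectors

# Stage 1: unregularized morpheme-level model; keep its residuals.
residuals = Y - LinearRegression().fit(X_morph, Y).predict(X_morph)

# Stage 2: elastic net on the residuals; features with nonzero coefficients
# are the model-predicted phonesthemes (regularization tuned by 5-fold CV).
enet = MultiTaskElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X_sub, residuals)
selected = np.flatnonzero(np.linalg.norm(enet.coef_, axis=0) > 0)
print(f"{len(selected)} candidate phonesthemes selected")
```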
Data For our experiments, we use 300-dimensional GloVe (Pennington et al., 2014) English word embeddings trained on the cased Common Crawl. Many of the terms in the set of pretrained vectors are not English words. As a first attempt toward removing non-English words and named entities, we discard types that are not alphabetical or not completely lowercased. In addition, it's unlikely that rare words or very common words will contribute to the formation of sound-meaning associations (Hutchins, 1998). To further filter these rare or common words (and remove additional non-English types), we remove types that either occur less than 1000 times in the Gigaword corpus or in more than half of all Gigaword documents. Lastly, we remove types that share the same lemma if the lemma is also in the set of filtered word vectors. After this process, we are left with 7889 types out of the original 2.2 million. We phonemicize our vectors by associating each word's vector with the word's ARPAbet symbol sequence, as provided in the CMU Pronouncing Dictionary (Carnegie Mellon University, 2014). If multiple types have the same ARPAbet symbol sequence (and are thus homophones), we discard them all. We also do not use types that are not in the CMU Pronouncing Dictionary. Phonemicizing the filtered set of vectors results in a set of 6633 vectors. Note that our model can be applied using either orthographic or phonemicized vectors. Phonesthemes are an inherently phonetic phenomenon, which suggests that it is ideal to model the features at the phoneme level. However, using character-level features will in some cases be a reasonable approximation, especially since many of our extracted phonesthemes have a consistent orthographic representation. We release code for preprocessing data and training the models at http://nelsonliu.me/papers/phonesthemes/. Experiments and Results The candidate phonesthemes considered by the model are the word-initial phoneme bigram sequences that occur more than five times in our set of phonemicized vectors; we set a frequency threshold for feature inclusion since rare prefixes are unlikely to carry meaning. Each word's feature vector is a one-hot encoding of its bigram phoneme prefix. We choose to focus on word-initial bigrams since the bulk of prior work in linguistics has also focused on phonesthemes in this position. However, our method easily extends to larger subword units (e.g., trigrams), candidate phonesthemes within or at the end of a word, and even other languages; we leave analysis of phonesthemes of other sizes, in different positions, and of different languages for future work. We train our two-stage model on the phonemicized vectors; the features that are assigned a nonzero weight are our model-predicted phonesthemes. The features of our morpheme-level model are binary indicator features corresponding to 181 different morphemes extracted from the CELEX2 database. In total, our phonestheme extraction model considers 307 candidate phonesthemes; tuning the regularization strength on held-out error in 5-fold cross-validation results in a model that selects 123 candidate phonesthemes as predictive. The phoneme bigrams corresponding to the 30 features with the highest absolute model weight are in Table 1. Qualitatively, the words with the lowest error under the model containing each selected phonestheme candidate seem semantically coherent. Many of the phonesthemes identified by our model have been proposed and validated by past work.
13 of the top 15 model-predicted phonesthemes were in Hutchins' set of 17 proposed word-initial phoneme bigram phonesthemes. This is an improvement over past work; Otis and Sagi (2008) identified 8 as statistically significant, with a hypothesis space restricted to 50 pre-specified word beginnings and endings. Gutiérrez et al. (2016) also identified 8, but with a much larger hypothesis space of 225 candidates. Our model considers an even larger hypothesis space of 307 candidate phonesthemes, which are all automatically extracted from the set of word vectors. Validating Phonesthemes with Human Judgments Following the method of Hutchins (1998) and Gutiérrez et al. (2016), we empirically evaluate our phonesthemes by soliciting naïve human judgments about how well-suited a word's form is to its meaning. We randomly selected 5 words containing each of the top 15 model-selected phonesthemes and 5 words containing each of 15 random phonestheme candidates that were not selected by the model, for a total of 150 words. We recruited native English-speaking participants through Mechanical Turk and asked them to judge how well each word fits its meaning on a Likert scale from 1 to 5. 150 words is too many judgments for a single HIT (annotators would become fatigued and words might start to lose meaning), so we randomly divided the task into 10 different HITs, each with 15 of the words to be tested. We required Amazon Mechanical Turk Masters status for the crowdworkers and compensated them $0.20 per HIT; each word received 30 ratings. Following Hutchins (1998), we compute ratings for each candidate phonestheme by averaging the ratings of the words that contain it. On average, model-predicted phonesthemes were rated 0.58 points higher than unselected phonestheme candidates (3.66 versus 3.08, respectively). To assess whether this difference is statistically significant, we use the one-tailed Mann-Whitney U test (Mann and Whitney, 1947), since the data is ordinal and unpaired. Based on the results of the test, we reject the null hypothesis that the average rating of words containing model-selected phonesthemes is not greater than the average rating of words that contain phonesthemes not selected by the model (p < 10^-9). Figure 1 plots the human ratings of the top 15 model-selected phonesthemes against their absolute weight under the model; there is a weak positive correlation (r = 0.081). Two of the 15 model-predicted phonesthemes with the highest absolute weight were not previously proposed by Hutchins (1998): br- and wi-. Both of these sound clusters seem like plausible phonesthemes. To the authors, the br- cluster evokes the idea of a raw, almost uncultured force, with words like "brags," "brutish," and "brusque" appearing among the words with the lowest error under the model. The types containing the word-initial wi- cluster with the lowest error under the model seem to convey fragility: "wimpy," "wince," and "weak." From Figure 1, we can see that the br- phonestheme candidate received a very high model weight, but received lower ratings on average from human annotators. On the other hand, the average human rating of the wi- phonestheme candidate seems in line with its assigned model weight. Future work could further explore whether br- and wi- have psychological reality to native speakers.
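The significance test is a one-liner in SciPy; the ratings below are placeholders, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Average human ratings per phonestheme candidate (illustrative values):
selected_ratings = [3.9, 3.7, 3.5, 3.8, 3.6]
unselected_ratings = [3.1, 2.9, 3.2, 3.0, 3.3]

# One-tailed test: are selected candidates rated higher than unselected ones?
stat, p = mannwhitneyu(selected_ratings, unselected_ratings,
                       alternative='greater')
print(f"U = {stat}, p = {p:.4f}")
```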
Related Work Several psycholinguistic studies have shown that native speakers associate certain sounds with a particular meaning, and phonesthemes have been identified in languages from English (Wallis, 1699; Firth, 1930) to Swedish (Abelin, 1999) and Japanese (Hamano, 1998). Bergen (2004) additionally demonstrates that phonesthemes affect online implicit language processing, and Parault and Schwanenflugel (2006) suggest that they play a role in language acquisition. In recent years, the work of Otis and Sagi (2008) and Abramova and Fernández (2016) used computational methods to automatically detect and validate phonesthemes by examining whether words that contain a candidate phonestheme are more semantically similar than predicted by chance, according to a distributional semantic model. Dautriche et al. (2017) analyze lexicons of Dutch, English, German, and French and find that the space of monomorphemic word forms is clumpier than what would be expected by chance, according to lexical, phonological, and network measures. Most similar to our work is that of Gutiérrez et al. (2016), who introduce an algorithm for learning weighted string edit distances that minimize kernel regression error and use it to detect systematic form-meaning relationships within language. Our model uses linear regression between candidate phonestheme features and semantic vectors. In addition, our model directly selects the predicted phonesthemes with sparse regularization; their model instead provides a systematicity score for each type, and they extract phonesthemes by taking the word-beginnings with mean errors lower than predicted by a random distribution of errors across the lexicon. Conclusion In this work, we present a simple model for extracting non-arbitrary form-meaning relationships from a collection of word vectors. Our model is a sparsely regularized linear regression model that seeks to predict a word's semantic vector from a feature vector that encodes information about the candidate phonesthemes it contains; the sparse solutions of the regression problem have the effect of automatically selecting the features that are most predictive of word meaning, which we take as predicted phonesthemes. We also develop a simple and effective two-stage approach for mitigating the effect of morphemes in the model. We initially train a model to map from morpheme-level features to word vectors, and then use the residuals of the morpheme-level model as the targets for the downstream phonestheme extraction model. We compare our model's predicted phonesthemes against the literature and find that many were previously proposed by linguists. We verified our results with human judgments of proposed and unselected phonesthemes, and annotators believe that words with a model-selected phonestheme "fit their meaning" more than words that contain a candidate phonestheme that was not selected by the model.
3,536.4
2018-06-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Multiples in Onshore Niger Delta from 3D prestack seismic data The presence of multiples has been investigated in the onshore Niger Delta using 3D seismic data. The aim of the study was to investigate the characteristics of reflection events beyond 3 s two-way time on seismic data behind the boundary faults associated with the shadow zone. This involves detailed velocity analysis on a semblance plot panel and accounting for moveouts due to reflections away from and within the shadow zone. Interval velocity-depth models were generated from the velocity analysis and analyzed for the shadow effect in the data. Results of the study revealed the presence of two velocity scenarios in the onshore Niger Delta: the primary velocities away from the shadow zone and the lower than normal velocities within it. The interval velocity-depth models and their overlays on the seismic show a constant increase of velocity with depth for the primary model, which seems normal, but this is contrary to the lower than normal velocity model, where low seismic velocities predominate beyond 3 s two-way time (3.8 km), especially at the footwall of the boundary fault. These variations are likely due to the fact that sediments at the footwall of the boundary fault are thicker and more compacted and thus yield stronger reflectors than the corresponding sediments away from the faults. The lower than normal velocity reflections, in the absence of overpressure and anisotropy (which are also causes of low velocity reflections), are attributed to interbed multiple reflections in the data. INTRODUCTION The presence of interbed multiples in the onshore Niger Delta has not until recently received attention, despite its impact on the quality and resolution of seismic reflection data. Multiples are seismic energies that have been reflected more than once before being recorded by receivers. They are known to have shorter periods and lower velocities and amplitudes than the desired primary reflection signals. Because of these characteristics, they are not readily distinguishable from primaries, since they have almost the same arrival time and exhibit a dispersed character that creates a curtain of noise often stronger than the primary events (Retailleau et al., 2012). The study area is located in an onshore field in the southeastern Niger Delta (Figure 1). The field lies between longitudes 4° and 5°E and latitudes 4° and 5°N. Seismic data from the field are often characterized by chaotic and distorted reflections beyond 3 s two-way time, even after detailed conditioning and processing workflows have been implemented. These distorted zones, often referred to as fault shadow zones on seismic, are situated at the footwall of the main boundary faults. Due to insufficient information about their character, efforts made to remove them to enhance interpretation of the data lead to loss of the desired seismic reflection signals. Away from the boundary faults, reflections are observed to be more continuous, and stratigraphic definition becomes more meaningful. Some authors have researched the possible causes of seismic reflection distortions (fault shadow) beyond 3 s in the onshore Niger Delta. Aikulola et al. (2010) and Opara (2012) investigated overpressure as the possible cause of reflection distortions beyond 3 s in the onshore Niger Delta. In a similar study, Oni et al. (2011) and Kanu et al.
(2014) investigated seismic anisotropy as the possible cause of reflection distortions beyond 3 s in Onshore Niger Delta. The authors in both studies noted that reflection distortions around the shadow zone exhibit lower than normal seismic velocities, which may likely be due to overpressure or anisotropy. However, accounting for these in subsequent processing workflows did not significantly improve reflections in the shadow zone.

According to Dutta (2002), secondary low velocity semblance plots represent optimum stacking velocities for multiples, although it has to be established that they are not the result of lithological changes or abnormal pore pressure. Weiglein et al. (2011) also proposed that interbed multiples can be generated by stronger subsurface reflectors, regarded as multiple generators, at any depth, especially where geologic contacts of differing compaction occur on the footwall of main boundary faults.

Interbed multiple reflections Onshore Niger Delta have remained an exploration problem, and no processing approach has so far considered in detail the secondary reflections in the fault shadow zone. To the knowledge of the authors, this attempt is therefore the first of its kind in Onshore Niger Delta. In the present study, we investigated the presence of interbed multiples in onshore seismic data through detailed velocity analysis of a 3D seismic data set on a semblance plot panel. The study accounted for moveouts due to reflections by detailed velocity picking on the section and in the neighborhood of the shadow zone. Depth models were subsequently generated from these velocities and used to analyze the shadow zone.

GEOLOGY OF THE STUDY AREA

Generally, the geology of southwestern Cameroon and southeastern Nigeria delineates the onshore portion of the Niger Delta province (Figure 2). The Niger Delta sedimentary basin has been the scene of three depositional cycles. The first began with a marine incursion in the middle Cretaceous and was terminated by a mild folding phase in Santonian time. The second included the growth of a proto-Niger Delta during the Late Cretaceous and ended in a major Paleocene marine transgression. The third cycle, from Eocene to Recent, marked the continuous growth of the main Niger Delta (Doust and Omatsola, 1990). These cycles (depobelts) are 30 to 60 km wide, prograde southwestward 250 km over oceanic crust into the Gulf of Guinea (Stacher, 1995), and are defined by syn-sedimentary faulting that occurred in response to variable rates of subsidence and sediment supply (Doust and Omatsola, 1990).

The interplay of subsidence and supply rates resulted in deposition of discrete depobelts. When further crustal subsidence of the basin could no longer be accommodated, the focus of sediment deposition shifted seaward, forming a new depobelt (Doust and Omatsola, 1990). Each depobelt is a separate unit that corresponds to a break in regional dip of the delta and is bounded landward by growth faults and seaward by large counter-regional faults or the growth fault of the next seaward belt (Evamy et al., 1978; Doust and Omatsola, 1990).
Regionally, extensive anticlines and faults on the downthrown part of regional faults dip southward. These regional faults, which controlled deposition (Haack et al., 2000), are of interest in this study because the reflection distortions observed on seismic data exist at the footwall of these faults. The footwall of these main boundary faults has a thick and compacted overburden with stronger reflectors than the upthrown side of the fault. This configuration is known to create adequate acoustic impedance contrast, a condition necessary for the occurrence of interbed multiples in Onshore Niger Delta (Weiglein et al., 2011).

The Akata, Agbada and Benin formations are the major stratigraphic units of the Tertiary Niger Delta (Doust and Omatsola, 1990; Reijers et al., 1997). Hydrocarbon accumulation occurs in the sandstone reservoirs of the Agbada formation, within the anticlinal structures in front of growth faults (Stauble and Short, 1967; Michele et al., 1999). The typically overpressured Akata formation at the base of the delta is of marine origin and is composed of thick shale sequences (potential source rock), turbidite sand (potential reservoirs in deep water), and minor amounts of clay and silt (Doust and Omatsola, 1990). The Benin formation is a deposit of alluvial and upper coastal plain sands that are up to 2,000 m thick and is the main water-bearing formation in the Niger Delta (Avbovbo, 1978).

MATERIALS AND METHODS

A 3D seismic data set was used in this study. The data was acquired recently using novel acquisition parameters to enhance resolution and signal-to-noise ratio (S/N). Figure 3 is a typical section showing the fault shadow zone (red circle), the main boundary fault and a line indicating the 3 s two-way time beyond which the distortions are observed in the study.

The semblance velocity analysis tool is sensitive to the variation of velocity with depth. In this tool, as the maximum offset increases, the semblance power decreases, since the best-fit hyperbolic moveout does not simulate the actual non-hyperbolic moveout (Alkhalifah, 1997). This tool flattens primaries within the gathers and overcorrects the gathers for low velocity reflection events.

Prior to the deployment of the velocity semblance tool, standard data preparation and enhancement procedures were carried out to further enhance the signal-to-noise ratio. Subsequently, velocity semblance plots were generated from common image point (CIP) gathers for detailed velocity analysis. By considering the gradual increase of effective velocity with depth, velocities were picked on the semblance plot window, which comprises two panels, A and B, showing effective velocity versus time and offset versus time, respectively. Primary and lower than normal velocities were picked separately on panel A, while observing the moveout of the gathers on panel B away from and within the shadow zone. These velocities were picked manually by steering the pickings away from the clusters corresponding to the shadow zone and observing the effect of the picking on panel B. This process was then repeated for the shadow zone, again observing the effect of the picking on panel B.
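To make the picking procedure concrete, the sketch below (in C; an illustration only, not the processing vendor's tool) shows the two quantities a semblance panel is built from: the hyperbolic two-term moveout t(x) = sqrt(t0^2 + x^2/v^2) applied to a CIP gather, and the semblance coefficient for one trial (t0, v) pair. The gather layout, trace and sample counts are assumed for illustration.

/* Sketch: hyperbolic moveout and single-sample semblance for one trial
   (t0, v) pick on a CIP gather. Layout and sizes are assumptions. */
#include <math.h>

#define NTRACES  48     /* traces in one CIP gather (assumed) */
#define NSAMPLES 1500   /* samples per trace (assumed)        */

double semblance(const double gather[NTRACES][NSAMPLES],
                 const double offset[NTRACES], /* offsets, m          */
                 double t0,                    /* zero-offset time, s */
                 double v,                     /* trial velocity, m/s */
                 double dt)                    /* sample interval, s  */
{
    double stack = 0.0, energy = 0.0;
    int live = 0;
    for (int i = 0; i < NTRACES; ++i) {
        /* two-term hyperbolic moveout: t(x) = sqrt(t0^2 + x^2/v^2) */
        double t = sqrt(t0 * t0 + (offset[i] * offset[i]) / (v * v));
        int k = (int)(t / dt + 0.5);           /* nearest sample    */
        if (k >= NSAMPLES) continue;           /* beyond the trace  */
        double a = gather[i][k];
        stack  += a;                           /* stacked amplitude */
        energy += a * a;                       /* trace energy      */
        ++live;
    }
    if (live == 0 || energy == 0.0) return 0.0;
    /* semblance in [0, 1]; a coherent event yields a high value */
    return (stack * stack) / (live * energy);
}

A trial velocity that matches a reflection flattens it and produces a semblance cluster, whereas a primary velocity applied to a slower (multiple) event overcorrects it; this is exactly the behaviour used in this study to separate the two families of picks.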
These pickings were validated by tying them to the corresponding locations on the seismic and a time slice extracted at 3 s from the data to ensure geological plausibility of the picked velocities. The picked velocities were further converted from effective to interval velocities and, subsequently, interval velocity-depth models were separately generated for the primary and lower than normal velocity events. These velocity models were then overlaid on the seismic section for actual mapping of anomalously low seismic velocities in the study.

PRESENTATION OF RESULTS

Results show that picking primary velocities (Figure 4a) flattened the offset gathers (Figure 4b) on the velocity semblance analysis window. Areas where primary reflections are predominant on the semblance plot panel correspond to areas of continuous seismic reflection events on seismic (red rectangle), which can be tied to the time slice (red circle). Moving away from the main boundary fault, reflection events become more continuous and less chaotic and distorted.

Secondary reflection events were identified as semblance clusters (or plots) corresponding to low effective velocities on the velocity semblance analysis window (Figure 5a). Observe the upward curving "events" on the offset gathers (Figure 5b). These events travel with lower than normal velocities, which overcorrect the primary reflection events on the gathers. The locations at which these lower than normal velocities were observed and picked correlate with the shadow zone on the seismic data.

Interval velocity models built from the velocity functions for primary and lower than normal velocity events are shown in Figures 6 and 7, respectively. Figure 6 shows the normal increase of velocity with depth for the primary model. The prevalence of slower than normal velocities beyond 3.8 km (3 s) is evident in the lower than normal velocity event model (Figure 7). Note the localized nature of these anomalous velocities, which are confined to the area corresponding to the footwall of the main boundary fault on the seismic. A constant increase of velocity with depth is observed in the overlay of the primary model on seismic (Figure 8a).

In the overlay of the lower than normal velocity model on seismic (Figure 8b), we also observed a constant increase of velocity with depth from 0 s to about 3 s. Beyond this two-way time, and in the area corresponding to the footwall of the main boundary fault, anomalously low seismic velocities were observed. This is shown by the dip in colorations towards the footwall of the main boundary fault on the figure. This correlates with the region of the seismic section with distorted and non-continuous reflections. The red arrow in Figure 8b indicates the onset of the lower than normal velocity events.

DISCUSSION

Multiples in Onshore Niger Delta have been investigated through semblance velocity analysis of a 3D seismic data set. This involved picking of reflection events in the area of interest (AOI) on the seismic data. Results revealed that picking the right primary velocities during velocity analysis flattened the gathers, and these are more predominant in locations on the seismic away from the footwall of the main boundary fault; velocities picked around the footwall of the boundary fault, however, overcorrect the reflection gathers. This suggests that these reflection events are associated with anomalously low interval velocities compared to the primary events.
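The interval velocities behind these models come from the effective-velocity picks described under Materials and Methods. The paper does not name the conversion algorithm; the customary choice is a Dix-type relation, sketched below under that assumption (function and variable names are illustrative).

/* Sketch: Dix-type conversion of picked effective (RMS-like) velocities
   to interval velocities. Assumed method; the text does not name one. */
#include <math.h>

/* t[]    : two-way times of the n picks, s, strictly increasing
   veff[] : effective velocities at those times, m/s
   vint[] : output interval velocity of each layer, m/s          */
void dix_interval(const double t[], const double veff[],
                  double vint[], int n)
{
    vint[0] = veff[0];                               /* first layer */
    for (int i = 1; i < n; ++i) {
        double num = veff[i]   * veff[i]   * t[i]
                   - veff[i-1] * veff[i-1] * t[i-1];
        /* inconsistent picks (e.g. multiples picked as primaries) can
           drive the numerator negative; production code must guard this */
        vint[i] = sqrt(num / (t[i] - t[i-1]));
    }
}

Applied to the two families of picks separately, the same conversion yields one model per family, consistent with the separately generated primary and lower than normal velocity models of Figures 6 and 7.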
The interval velocity-depth models and their overlays on the seismic further validate the occurrence of these lower than normal velocity reflection events on the seismic. The constant increase of velocity with depth observed on the primary model and overlay seems normal, but this is contrary to the velocity variation with depth delineated beyond 3 s two-way time (3.8 km) on the lower than normal velocity model, especially at the footwall of the boundary fault with chaotic and distorted reflections on the seismic.

These chaotic and distorted reflections around the shadow zone are attributed to the fact that, firstly, sediments at the footwall of the boundary fault are thicker, more compacted and stronger reflectors than the corresponding sediments at the hanging wall of the fault. These stronger reflectors, referred to as multiple generators (Weiglein et al., 2011), are identified as significant sources of short-period interbed multiples. Secondly, velocity estimations within the shadow zone did not properly account for these lower than normal velocities during data processing, and this is likely responsible for the curtain of noise observed in the shadow zone (Retailleau et al., 2012). Aikulola et al. (2010) noted that the chaotic and distorted reflections observed beyond 3 s at the footwall of regional faults in the delta exhibit lower than normal seismic velocities, and associated these with the onset of overpressure regimes. Oni et al. (2011), in an onshore study in the Niger Delta, submitted that if anisotropy is taken into consideration and corrected during data preparation and enhancement, seismic imaging could be improved behind the fault. Kanu et al. (2014) reviewed velocity anisotropy considerations using different eta values. However, subsequent data processing after these considerations did not significantly improve imaging behind the fault. Although the works of these researchers generally impacted seismic imaging, we still had poor imaging of seismic reflections beyond 3 s. Thus, having considered and eliminated overpressure and anisotropy as the likely causes of the shadow zone, interbed multiples, which are low velocity events, are speculated to be the possible cause of shadow effects in this study.

Furthermore, all analysis so far has tacitly assumed that multiples are nonexistent Onshore Niger Delta and, as such, no study has fully explored the possibility of multiples being responsible for the poor imaging within the shadow zone. Based on the foregoing discussion, the lower than normal velocity events in this study could therefore be attributed to interbed multiple reflections.

Conclusions

Detailed velocity analysis of 3D seismic data on a semblance plot revealed the presence of two velocity scenarios Onshore Niger Delta. These are the primary and lower than normal velocities, predominantly found away from and within the shadow zone, respectively. The lower than normal velocity reflections beyond 3 s two-way time, in the absence of overpressure and anisotropy, which are also causes of low velocity reflections, are attributed to interbed multiple reflections in the study area. We therefore recommend carrying out depth migration with these lower than normal velocities, preferably in the prestack domain, to account for reflections at the footwall of the fault in the shadow zone. This will aid attempts to attenuate the multiples and enhance stratigraphy and structure within the fault shadow zone.
However, the challenge lies in the fact that the semblance velocity analysis, as employed in this study, involved detailed velocity picking as opposed to automatic picking, which assumes already established regional parameters. This approach has the advantage of adequately accounting for the velocities of the chaotic reflections beyond 3 s in the study area.

Figure 1. Location map of the study area (white star); the study area lies within the given coordinates (redrawn from SPDC Geosolutions Department).

Figure 2. Tectonic and geologic section of the Niger Delta (www.intechopen.com).

Figure 3. Typical 3D seismic section for the study showing the main boundary fault, the fault shadow zone and the 3 second two-way time.

Figure 6. Velocity/depth model showing the increase of velocity with depth for primary events.

Figure 7. Velocity/depth model showing anomalously low velocity with depth for secondary events.

Figure 8. Overlay of interval velocity-depth models for primaries (a) and lower than normal velocity events (b) on seismic.
3,833.2
2017-01-16T00:00:00.000
[ "Geology" ]
Accelerated parallel computation of field quantities for the boundary element method applied to stress analysis using multi-core CPUs, GPUs and FPGAs

Abstract: Computation in engineering and science can often benefit from acceleration due to lengthy calculation times for certain classes of numerical models. This paper, using a practical example drawn from computational mechanics, formulates an accelerated boundary element algorithm that can be run in parallel on multi-core CPUs, GPUs and FPGAs. Although the computation of field quantities, such as displacements and stresses, using boundary elements is specific to mechanics, it can be used to highlight the strengths and weaknesses of using hardware acceleration. After the necessary equations were developed and the algorithmic implementation was summarized, each hardware platform was used to run a set of test cases. Both time-to-solution and relative speedup were used to quantify performance as compared to a serial implementation and to a multi-core implementation as well. Parameters, such as the number of threads in a workgroup

ABOUT THE AUTHORS

Junjie Gu completed her Master of Applied Science graduate degree in 2016 under the supervision of Dr Zsaki. Her research interests are computer applications in geomechanics and parallel processing.

Attila Michael Zsaki is an associate professor in the Department of Building, Civil and Environmental Engineering. He obtained his BE degree from Ryerson University and his MSc and PhD degrees in civil engineering from the University of Toronto. Dr Zsaki's research is focused on modelling and computational aspects of geosciences with particular interest in multiphysics modelling of continuum and discontinuum. His other areas of interest are scientific computing, parallel computing, computer graphics and mesh generation. In addition to academia, Dr Zsaki has worked in the industry as software developer and consultant for a geomechanics analysis software company, and lately on high-performance scientific computing applications for modelling continuum behaviour. His interests are performance optimization and parallel computing on scalable, shared-memory multiprocessor systems, graphics processing units (GPU) and FPGAs.

PUBLIC INTEREST STATEMENT

Many problems in science and engineering require the use of computers to create and analyse models to increase our understanding of the world around us. Most often the computation requires hours if not days to accomplish; thus, any means to expedite the process is of interest.
This paper presents a novel formulation of a numerical method used in engineering mechanics, developed such that it harnesses the power of additional computer hardware, such as graphics cards, already found in a computer, to achieve a considerable reduction in time while maintaining the accuracy of computation. In addition to accelerated computing capabilities, the energy consumption was considered as well when ranking each computer hardware, catering to our energy-consciousness. The paper concludes with recommendations concerning the merits of each hardware accelerator.

Introduction

The boundary element method (BEM) is one of the established numerical methods for solving partial differential equations often of interest in the fields of engineering and science. The BEM formulation has been applied to solve for stresses and displacements in solid mechanics (Crouch, Starfield, & Rizzo, 1983; Kythe, 1995), flow of fluids in fluid mechanics (Brebbia & Partridge, 1992) and has also seen use in the field of electrical engineering and electromagnetism (Poljak & Brebbia, 2005), in the theory of solvation (Molavi Tabrizi et al., 2017) and biophysics (Cooper, Bardhan, & Barba, 2014). Its characteristic approach to solving the differential equations is to cast them as integral equations and, using an appropriate Green's function, the discretized solution is developed as a system of linear equations. Perhaps the greatest benefit of using a BEM formulation, as opposed to finite elements (FEM) or finite differences, is the inherent reduction in the dimension of a problem domain. For physically two-dimensional domains, a BEM discretization is only required on the contour of a domain and, analogously, for three-dimensional physical domains, a BEM solution is set up for the surface of a domain only. Yet, the benefit of reduction in size of the system of linear equations can be potentially offset by the nature of the coefficient matrix; it is densely populated, unlike the ones arising from most FEM formulations. This has an implication on matrix storage requirements and the solution time of the system. Other potential concerns with the BEM are its inherent difficulty dealing with material heterogeneity and non-linearity (Crouch et al., 1983; Gu, 2015). In addition, fundamental to the reduction of problem dimension is that a solution of the linear system yields results only on a boundary. If quantities are wanted inside (or outside, depending on whether it is an interior or an exterior problem; Crouch et al., 1983), then further computation is required to obtain them. Although BEM has widespread application, and there have been initiatives to use GPUs in BEM (Haase, Schanz, & Vafai, 2012; Torky & Rashed, 2017), their focus was on solving the linear system of equations and not on solving for displacements and stresses in the domain (these quantities are often called "field quantities"). Thus, this paper focuses on BEM's use in solid mechanics, with particular application to the computation of stresses and displacements in geologic media. In this field, the foremost interest lies in the response of a geologic medium, as measured by the developed displacements and stresses in the domain, which sets this research apart from others (Haase et al., 2012; Torky & Rashed, 2017). In geomechanics, the ratio of computational effort between solving the linear system of equations and the field quantities is often from 1:100 up to 1:1000 (in 3D).
The computation of field quantities using BEM can be formulated such that it is possible to carry it out on a grid of locations (either in two or three dimensions). Once a solution is found by solving the dense linear system of equations, the computation of field quantities at any given point can be performed independently from any other point. This independence is the key, so that the computation of field quantities can be accomplished in a massively parallel manner, using an appropriate hardware accelerator (such as a multi-core CPU, GPU, FPGA, or similar). This paper presents a formulation of BEM for stress analysis of underground excavations, such as tunnels, often of interest to practicing engineers. Although the second author has investigated the possibility of using GPUs in solving for displacements and stresses at field points (Zsaki, 2011), at the time NVIDIA's CUDA (NVIDIA, 2014) platform was used and no comparison was made regarding its performance against other hardware platforms. In this study, the BEM algorithm was implemented to run in parallel with the help of OpenCL (Khronos Group, 2014) for execution on single and multi-core CPUs, GPUs and FPGAs. Performance aspects of each hardware platform will be discussed as compared to a serial implementation, in which the field quantities are sequentially computed. Metrics such as speedup, speedup-per-watt and workgroup sizes will be evaluated and examined in detail, along with the effect of single- and double-precision computation on performance and accuracy. In the current climate of competing acceleration frameworks, such as NVIDIA's CUDA (NVIDIA, 2014), the choice was made to use OpenCL since the code, with minor modification, can be compiled on all platforms considered, which is not the case for CUDA, which currently only works on certain GPUs. Thus, the use of a common source code enables a comparison of performance across a wide range of platforms, perhaps giving valuable insight as to which hardware platforms present the most appropriate option for acceleration. Even though the BEM formulation presented in this paper is specific to a domain of application, the authors feel that there is no loss of generality. The conclusions drawn can be applied to accelerating not only other BEM formulations, but also other numerical computation using hardware accelerators in general.

Accelerated parallel computation of field quantities

The main advantage of a BEM formulation over an equivalent FEM one is to reduce the dimensionality of a problem, as discussed in the preceding section. The general formulation of BEM for solid mechanics (Kythe, 1995), in which the solution for displacements (u_j) and/or surface tractions (p_j) on a boundary C is sought, subjected to body forces (B), can be expressed as follows:

H_{ij} u_j = G_{ij} p_j + B_i   (1)

Equation (1) is a discrete form of the general integral equation, since it considers a domain discretized into boundary elements, such as the one shown in Figure 1. Generally, Equation (1) results in a linear system of equations with a dense coefficient matrix. As mentioned above, the solution of this matrix equation can pose computational challenges. However, the main focus of this paper is not on the solution of linear systems, since that topic is well covered in the literature (Haase et al., 2012; Torky & Rashed, 2017). In contrast, the emphasis is on the subsequent solution of field quantities in a domain, because unlike FEM formulations, Equation (1) only gives the surface displacements and tractions.
Thus, the BEM's reduction in problem dimension comes at a cost; the response of a solid material needs to be computed after a solution is found on the boundaries. With the unknown quantities in Equation (1) solved for, the displacements (u_i) in an exterior domain can be computed as follows:

u_i = \int_C G_{ij} p_j \, dC - \int_C H_{ij} u_j \, dC   (2)

where the coefficients H_{ij} and G_{ij} are matrices in the form of the two-dimensional kernel functions of elastostatics. Thus, H_{ij} and G_{ij} can be evaluated as follows (Kythe, 1995):

G_{ij} = \frac{1}{8\pi\mu(1-v)} \left[ (3-4v)\,\ln\frac{1}{r}\,\delta_{ij} + r_{,i}\,r_{,j} \right]   (3)

H_{ij} = \frac{-1}{4\pi(1-v)\,r} \left\{ \frac{\partial r}{\partial n}\left[ (1-2v)\,\delta_{ij} + 2\,r_{,i}\,r_{,j} \right] - (1-2v)\left( r_{,i}\,n_j - r_{,j}\,n_i \right) \right\}   (4)

with, for example, the off-diagonal integrands

\int_C \frac{1}{r}\left[ \frac{\partial r}{\partial n}\,r_{,1}\,r_{,2} - (1-2v)\left( r_{,1}\,n_2 - r_{,2}\,n_1 \right) \right] dC   (5)

\int_C \frac{1}{r}\left[ \frac{\partial r}{\partial n}\,r_{,2}\,r_{,1} - (1-2v)\left( r_{,2}\,n_1 - r_{,1}\,n_2 \right) \right] dC   (6)

where μ is the shear modulus, v is the Poisson's ratio, and n is the normal-to-boundary vector. After the displacements are found, the stresses can be computed from

\sigma_{ij} = \int_C D_{kij}\,p_k \, dC - \int_C S_{kij}\,u_k \, dC   (7)

where, in two dimensions, with k = 1, 2, the kernels reduce to

D_{kij} = \frac{1}{4\pi(1-v)\,r}\left[ (1-2v)\left( \delta_{ki}\,r_{,j} + \delta_{kj}\,r_{,i} - \delta_{ij}\,r_{,k} \right) + 2\,r_{,i}\,r_{,j}\,r_{,k} \right]   (8)

S_{kij} = \frac{\mu}{2\pi(1-v)\,r^2}\left[ 2\,\frac{\partial r}{\partial n}\left\{ (1-2v)\,\delta_{ij}\,r_{,k} + v\left( \delta_{ik}\,r_{,j} + \delta_{jk}\,r_{,i} \right) - 4\,r_{,i}\,r_{,j}\,r_{,k} \right\} + 2v\left( n_i\,r_{,j}\,r_{,k} + n_j\,r_{,i}\,r_{,k} \right) + (1-2v)\left( 2\,n_k\,r_{,i}\,r_{,j} + n_j\,\delta_{ik} + n_i\,\delta_{jk} \right) - (1-4v)\,n_k\,\delta_{ij} \right]   (9)

For the mathematical derivation of Equations (2) through (9), the reader is referred to Kythe (1995). To solve for field quantities, Equations (2) through (9) need to be evaluated at every field point. In a serial or single-core implementation, a loop is created over all field points and displacements and stresses are computed in a sequential manner. However, since there is no inter-dependence amongst Equations (2)-(9) between any pair of field points, they can be computed in parallel, which will be exploited in this paper. The BEM solution on a boundary and the subsequent sequential computation of field quantities can be summarized in Algorithms 1 and 2 using pseudo-code, as follows:

for each field point
  Compute displacements (Equations 2-5)
  Compute stresses (Equations 6-9)
end for

By separating the computation of field quantities from the solution on a boundary, it is reasonably simple to isolate the part of the code that needs to be parallelized, and Algorithms 1 and 2 can be rewritten. To clarify terminology, the computation of field quantities will be executed on an accelerator (or "device"), while the file input/output and the solution of the linear system of equations will be run on a "host". Thus, Algorithms 1 and 2, as implemented using OpenCL, were defined using a three-step approach. In addition to the existing steps in Algorithm 1 (pertaining to the solution of a linear system), the first step set up and initialized the OpenCL environment, defined and allocated buffers for data transfer, compiled kernels and, finally, transferred all the data needed to the accelerator. The second step ran the kernel on an accelerator, while the third step wrote the data back to the host. The pseudo-code, as shown in Algorithms 3 and 4, based on the actual C code, is as follows:

Algorithm 3: BEM solution on a boundary and at field points - accelerated, host side
Input: Discretization of boundary geometry into elements, material properties, grid dimensions for field points
Output: Displacements and stresses on the boundary and at field points
  Read in input file
  Allocate memory for data structures (coefficient matrix and H_ii and G_ii entries)
  Evaluate H_ii and G_ii coefficients for the boundary solution using Gaussian quadrature
  Assemble coefficient matrix
  Assemble right-hand side (forcing) vector
  Compute stresses (Equations 6-9)
  Store results in buffers

Note that in Algorithm 2, the computation is carried out over all grid points within each grid, in sequence. There could be multiple grids, such that in 2D each grid is a set of points distributed in a rectangle.
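Since the paper does not list the device-side code of Algorithm 4, the kernel below is a minimal OpenCL C sketch of what its displacement part (Equations 2 through 5) computes: one work-item per field point, a loop over boundary elements with one-point quadrature, and no coupling between field points. All names, the argument layout and the quadrature choice are illustrative assumptions, not the authors' implementation; the stress kernels (Equations 6 through 9) would be accumulated in the same loop.

/* Sketch of an Algorithm 4-style kernel: one work-item per field point.
   Names, layout and one-point quadrature are assumptions for illustration. */
__kernel void field_displacements(
    __global const float2 *elem_mid,   /* element midpoints            */
    __global const float2 *elem_norm,  /* element unit normals n       */
    __global const float  *elem_len,   /* element lengths              */
    __global const float2 *u_b,        /* boundary displacements       */
    __global const float2 *p_b,        /* boundary tractions           */
    const int n_elem,
    const float mu, const float nu,    /* shear modulus, Poisson ratio */
    __global const float2 *pts,        /* field-point coordinates      */
    __global float2 *u)                /* out: displacements (Eq. 2)   */
{
    const int gid = get_global_id(0);          /* this field point */
    const float2 x  = pts[gid];
    const float  cg = 1.0f / (8.0f * M_PI_F * mu * (1.0f - nu));
    const float  ch = -1.0f / (4.0f * M_PI_F * (1.0f - nu));
    float ui[2] = {0.0f, 0.0f};

    for (int e = 0; e < n_elem; ++e) {         /* independent of other points */
        float2 d     = x - elem_mid[e];
        float  r     = length(d);
        float  rd[2] = {d.x / r, d.y / r};     /* r_,1 and r_,2 */
        float  nn[2] = {elem_norm[e].x, elem_norm[e].y};
        float  drdn  = rd[0]*nn[0] + rd[1]*nn[1];
        float  lnr   = log(1.0f / r);
        float  pb[2] = {p_b[e].x, p_b[e].y};
        float  ub[2] = {u_b[e].x, u_b[e].y};
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j) {
                float dij = (i == j) ? 1.0f : 0.0f;
                /* Kelvin kernels, Equations (3) and (4) */
                float G = cg * ((3.0f - 4.0f*nu) * lnr * dij + rd[i]*rd[j]);
                float H = (ch / r) * (drdn * ((1.0f - 2.0f*nu)*dij
                                              + 2.0f*rd[i]*rd[j])
                          - (1.0f - 2.0f*nu) * (rd[i]*nn[j] - rd[j]*nn[i]));
                /* one-point quadrature over the element, Equation (2) */
                ui[i] += (G * pb[j] - H * ub[j]) * elem_len[e];
            }
    }
    u[gid] = (float2)(ui[0], ui[1]);
}

Because each work-item writes only its own output slot, no synchronization is needed, which is what lets the same kernel run unchanged on the CPU, GPU and FPGA back-ends.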
In 3D, multiple grids can be defined as sets of points enclosed in a volume. Typically, a 3D grid is defined as a sequence of 2D grids that are stacked on top of each other along the third dimension. This definition of 3D grids will become advantageous for certain accelerators, so in Algorithm 3, for each grid, the points are processed in subsets (generally in sheets of 2D grids). The application of this will be discussed in the next paragraph.

The development environment was Microsoft Visual Studio Ultimate 2012 (Microsoft, 2012), and the programs were developed in C/C++. The computer was outfitted with 32 GB RAM, running Microsoft Windows 7 Professional. For the FPGA, the OpenCL kernels were compiled by Altera's Quartus II 12.0 Suite (Altera Inc, 2014), while for the CPU, Intel's implementation of OpenCL was used (Intel, 2015a). Similarly, on the GPUs, the OpenCL compiler supplied by NVIDIA was used (Khronos Group, 2014).

Focusing on the OpenCL implementation of the BEM method in Algorithm 3, the OpenCL environment was set up by querying available accelerator platforms and resources. The appropriate accelerator was selected by specifying CL_DEVICE_TYPE_CPU, CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_ACCELERATOR (for FPGAs), as appropriate. Definition of a context and queue was done next, followed by the creation of buffers. These included buffers for grid parameters, material properties, already computed displacements and tractions from the boundary solution, and return buffers for the yet-to-be-computed displacements and stresses at field points. Common constants, such as material properties, were stored in shared memory on the accelerator, since threads often use them during a computation. The kernel and program were compiled next using the global and local workgroup sizes, and the buffers were enqueued.

As mentioned in the preceding paragraph, 3D grids were processed in subsets of 2D sheets. The reason is that desktop and laptop GPUs used for display are not allowed to be continuously tied up with computation. A watchdog timer, part of the operating system (Microsoft Windows 7), monitors processes that execute for a long time. Processes that run "too long" trigger a Timeout Detection and Recovery response from the OS, and the OS terminates the offending process. On the tested desktop GPU platform, which will be summarized in Section 2.2, this timeout limit was approximately 2.8 s. The literature reports three methods to address the time limit (Khronos Group, 2014; NVIDIA, 2014):

• run the simulation on a GPU that is not participating in displaying graphics
• disable the OS' watchdog timer responsible for Timeout Detection and Recovery
• reduce OpenCL (and equally valid for CUDA) kernel run times

Although the first option seems attractive, it is only feasible if the system is equipped with an extra GPU; for most systems, it is not a viable option. The second choice was not considered, since it can interfere with the operation of a computer system and can lead to instability of the system. Consequently, the last option was adopted, resulting in the subdivision of a 3D grid into subsets, at a potential expense in computation time since multiple kernel invocations and data transfers will be required, as sketched below. To estimate the effect of this, a set of experiments was devised: (a) the whole 3D grid was uploaded (and the results downloaded) and (b) a set of sub-grids was uploaded (and the results downloaded) to/from the accelerator. In both cases, the kernel did no actual computing work.
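A minimal host-side sketch of this subdivision is shown below, assuming a sheet size already tuned to stay under the watchdog limit; the buffer and variable names mirror Algorithm 3 but are otherwise illustrative.

/* Sketch: launching the field-quantity kernel one 2D sheet at a time so
   each launch finishes before the OS watchdog fires. Names are assumed. */
#include <CL/cl.h>

static void run_in_sheets(cl_command_queue queue, cl_kernel kernel,
                          cl_mem u_buf, cl_float2 *host_u,
                          size_t n_sheets, size_t pts_per_sheet, size_t local)
{
    for (size_t s = 0; s < n_sheets; ++s) {
        size_t offset = s * pts_per_sheet;     /* first point of this sheet */
        size_t global = pts_per_sheet;         /* points in this launch     */
        clEnqueueNDRangeKernel(queue, kernel, 1, &offset,
                               &global, &local, 0, NULL, NULL);
        clFinish(queue);                       /* bound the GPU busy time   */
        clEnqueueReadBuffer(queue, u_buf, CL_TRUE,
                            offset * sizeof(cl_float2),
                            pts_per_sheet * sizeof(cl_float2),
                            host_u + offset, 0, NULL, NULL);
    }
}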
The running time associated with this operation was measured. It was found that, on average, the extra kernel invocations and data transfers increase the total computation time by less than 3%. Note that only a GPU used for display requires multiple invocations of a kernel; CPUs, non-display GPUs and FPGAs can run the computation continuously.

Test model

The test model chosen was a two-dimensional horseshoe-shaped excavation, representing a typical tunnel cross-section, characteristic of ones used in railway and road transportation. Geometry and coordinates of the tunnel boundary are shown in Figure 2, where the units are in meters. Rock mass properties used were a Young's modulus (E) of 15 GPa and a Poisson's ratio (v) of 0.25, representing a typical rock mass, such as sandstone. The rock mass was subjected to a stress field of 10 MPa in both principal directions, inclined 30° from the vertical in the counter-clockwise direction. The tunnel boundary was discretized using 37 elements, as shown in Figure 2. The discretization was arrived at after performing a mesh convergence study. A number of elements, from 15 to 41, were used to generate models of the same tunnel geometry. Four locations (crown, invert, left and right extremities of the tunnel) were used to monitor the resulting magnitude of displacement. As seen from Figure 3, the values of displacements do not change beyond 37 elements, and thus the corresponding discretization was adopted for the subsequent study. The pattern of displacements around the tunnel excavation, computed on a 500²-point grid, is shown in Figure 4. As shown in the figure, the largest displacements occur around the excavation (orange/red area). As an example, a hardware-accelerated computation to obtain these results required 0.0333 s, while the serial computing time was as long as 4.53 s for the same 500²-point grid.

The same example model will be used to compare the accelerated implementations of our BEM algorithm to the serial one. The performance of the accelerated implementation will be evaluated as a function of increasing grid sizes, from 100² to 1000² in increments of 100² and from 1000² to 1600² in increments of 200². Thus, the results will be computed for 14 grid sizes, from 10,000 points to 2.56 million points in total. For each scenario, the numbers presented in the subsequent sections are an average of 10 runs, in order to smooth out any performance hits due to external factors, such as operating system tasks.

Hardware platforms

The work summarized in this paper considered a set of accelerators: a single core of a CPU, a multi-core CPU, a desktop GPU, a GPU-based accelerator and an FPGA. When the research was conducted, these accelerators represented a realistic selection of available hardware. The features of each accelerator are summarized in Table 1. Where possible, a reference and a link to the manufacturer's documentation were given. Since there is communication between a host and an accelerator to move data back and forth, it was anticipated that the high memory bandwidth available to the CPU would positively affect (i.e. reduce) the computation time as compared to both the GPUs and the FPGA, since the latter were limited by the PCI-E bus's relatively modest bandwidth. As seen from Table 1, each accelerator device has unique characteristics when it comes to the number of cores, available memory or power usage. Some of these parameters affect the maximum number of concurrent threads that can be run on an accelerator.
In OpenCL, two constants define the number of threads requested to be run: the global worksize and the local worksize. Generally, the global worksize is problem-dependent; for the computation of field quantities, it was taken as the number of field points in a 2D grid, in the range from 100² to 1600², as discussed in Section 2.1. However, the choice of local worksize can significantly affect the efficiency of an accelerated implementation. Each hardware platform has an upper limit on the local worksize, as summarized in Table 1. However, the actual number used can influence performance. In order to investigate this, a set of local worksizes was used, from 1 to 64 (as 2^i, i = 0 to 6). For certain model sizes, not all local workgroup sizes were used, since the OpenCL specification requires the global worksize to be evenly divisible by the local worksize (Khronos Group, 2014). Although the accelerators can be used with local worksizes greater than 64, it was found that beyond 64 there is no appreciable gain in performance for any of them. The OpenCL implementation permits omitting the specification of the local worksize, allowing the OpenCL SDK to select the most appropriate number (Khronos Group, 2014).

Single core CPU base case

The computation of field quantities using BEM Algorithms 1 and 2 was implemented and used to run the test model. This sequential (or serial, non-OpenCL) implementation will be used as a base case; all subsequent accelerated implementations will be compared against it. Care was taken that most reasonable optimizations were performed on the code, such as loop unrolling, pre-computation of constants outside loops, and multiplication with an inverse instead of division. Although the length of computation was expected to be the same if the code was run multiple times, the run times shown in Figure 5 are an average of 10 runs. As expected, the correlation for both single- and double-precision computation bears an O(n) relationship as the number of field points is increased. The figure shows computing times for both single- and double-precision runs on a single core of a CPU (see Section 3.2). The longest run time for single precision was 37.44 s for the model with 1600² field points, while the longest run time for double precision was 46.60 s for the same model.

Multi-core CPU accelerated BEM

The multi-core CPU (MCPU) implementation of BEM Algorithms 3 and 4 using OpenCL was executed using all four cores of a CPU. In accordance with the testing parameters set out in Sections 2.1 and 2.2, Figure 6 summarizes the solution times obtained. Time to solution versus the number of field points is plotted for all combinations of local workgroup sizes considered, for both single- and double-precision computations. The serial run time is plotted as well, for reference. Figure 6 reveals that the performance curves are stratified for both single and double precision. For small local worksizes (1 through 4), the performance curves are essentially the same. For local worksizes beyond 4, the performance considerably increases; however, the plots still overlap. It is speculated that for the range of 1-4, each core of a CPU gets a single thread scheduled, and above 4 each core gets two threads. The overhead of using OpenCL is evident for small problem sizes (up to about 300² field points), where the serial implementation is actually faster.
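For reference, the sketch below shows how the global and local worksizes discussed above enter a kernel launch. The padding helper is an illustrative alternative (requiring an if (gid >= n_points) return; guard in the kernel) to the approach taken in this study, which simply restricted the tested combinations so that the global worksize divides evenly.

/* Sketch: global/local worksize selection for the field-point kernel.
   Padding is an assumed variant; the study restricted combinations. */
#include <CL/cl.h>

static cl_int launch_field_kernel(cl_command_queue queue, cl_kernel kernel,
                                  size_t n_points, size_t local)
{
    /* OpenCL requires global % local == 0; round global up to the next
       multiple of local. Passing NULL instead of &local would let the
       runtime choose the local worksize, as the specification permits. */
    size_t global = ((n_points + local - 1) / local) * local;
    return clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                  &global, &local, 0, NULL, NULL);
}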
The ratio of reduction in solution time for local worksizes above 4 to those below 4 is about 3.1 for single precision and 1.96 for double precision. Further discussion of the effect of local worksize is given in Section 3.1, while the difference between single- and double-precision computations is shown in Section 3.2.

GPU accelerated BEM

The work presented in the paper considered two GPU-based accelerators: a desktop graphics card (GTX 760) and a purpose-built accelerator (NVIDIA Tesla K40c). Both of these were based on NVIDIA's Kepler microarchitecture (NVIDIA, 2013) and represented two common cards available at the time the research was conducted. The OpenCL-based acceleration of BEM Algorithms 3 and 4 was used in the testing, according to the test parameters and conditions set out in Sections 2.1 and 2.2.

Desktop GPU

For the Desktop GPU, Figure 7 summarizes run times for all local worksizes considered, for both single and double precision. Considering the operating system-imposed time limit on tying up the GPU for computation (as discussed in Section 2.0), the kernel required multiple invocations. The maximum number of field points that can be computed before triggering the watchdog timer was determined by trial and error. Run times reported in Figure 7 incorporate the additional time associated with multiple kernel invocations and extra data transfer and, as discussed before, this amounts to an estimated increase in total computation time of slightly less than 3%. Unlike for the MCPU, the curves for the Desktop GPU are distinct for each worksize, and the run times decrease with increasing local worksize. As expected, there is an O(n) relationship between the number of field points and solution times. Although not a surprise for desktop GPUs, the double-precision performance for small local workgroup sizes (up to 8) is actually worse than the serial base case, owing to the card's inherently low double-precision capability. A more detailed discussion of this will be given in Section 3.2. The ratio of the best-performing run (with 64 for the local worksize) to the single local worksize case was 32.78 for single-precision and 26.83 for double-precision computations.

Tesla GPU

Although based on the same microarchitecture, the Tesla GPU card was designed for high-end scientific computation. It is not affected by the watchdog timer timeout, unlike the Desktop GPU. The same set of local worksizes was used for executing the example problem, and the run times are plotted in Figure 8. Similar to the Desktop GPU, for each increase in local worksize the solution time decreased, while maintaining an O(n) relationship. Both the single-precision and double-precision results are better (lower solution time) than the serial implementation. However, the cost of OpenCL setup and data transfer overhead is observable for small problem sizes. The double-precision performance of the Tesla GPU is considerably better than that of the Desktop GPU. For small local worksizes, it is marginally better than the serial implementation, further underlining that GPUs were meant to run many concurrent threads of execution to achieve good performance. The ratio of the best-performing run (with a local worksize of 64) to the single local worksize case was 32.69 for single-precision and 26.90 for double-precision computations, almost identical to the Desktop GPU. Relative speedups will be discussed in detail in Section 3.0.
FPGA accelerated BEM

Due to its nature, the FPGA hardware accelerator is different from an MCPU or a GPU. The FPGA is mainly defined by its number of available gates or logic elements, which can accommodate a hardware description of the algorithms to be run. The hardware synthesis of BEM Algorithm 4 can be influenced by the number of cores requested, which is limited by the number of logic elements. Other factors, such as the available memory, can influence the design as well. Depending on the size of an algorithm (e.g. number of instructions, type of instruction, quantity and type of data to be operated on, etc.) the FPGA might not have a sufficient number of gates, as was found for the case of the double-precision algorithm. Thus, only single-precision results are available in this paper. It is expected that, given a larger FPGA, the double-precision implementation could be achieved as well. The BEM Algorithm 4 was synthesized using Altera's Quartus II suite, and the example problem was run according to the test parameters and conditions set out in Sections 2.1 and 2.2. Run times for the range of field points and local worksizes are shown in Figure 9. The solution times generally decrease with an increasing local worksize, while more or less maintaining an O(n) relationship. For local worksizes of 1 and 2, the FPGA's performance is lower than the base serial implementation; a worksize of 1 takes over twice as long as the serial implementation. For worksizes greater than 2, the performance gradually improves, but there is little difference between 32 and 64, signalling an upper limit of performance gains. The ratio of the best-performing run (with a local worksize of 64) to the single local worksize case was 30.96 for single precision.

Performance comparisons

Although the results presented in Sections 2.4 through 2.6 show solution times for an example problem for each accelerator, along with the unaccelerated serial computation times, it is hard to evaluate the relative performance gains offered by each accelerator from run times alone. Run times can help to compare the effectiveness of an algorithm, its implementation and the benefits offered by a specific hardware platform. More commonly though, relative speedup (Pacheco, 1997) can be used to compare different implementations. However, in our energy-conscious world, a growing emphasis is placed on the actual energy used in performing a task. Thus, for our computations, an additional metric is introduced: the speedup-per-watt (Bischof, 2008). While this section focuses on the comparison of relative speedups, the speedup-per-watt will be discussed in Section 3.3. The purpose of any ranking based on relative speedup is to guide our choice when it comes to acquiring new hardware, either to replace or to supplement what is currently available. Therefore, it is a logical choice to base our comparisons on an unaccelerated, serial implementation running on a single core of a CPU before hardware-accelerated versions are implemented. However, the majority of currently available CPUs have more than one core; therefore, at no extra investment, we have a hardware accelerator at our disposal. In this case, it is sensible to modify our performance metric (the relative speedup) to compare other accelerators to the multi-core CPU (MCPU). This growing sentiment of what serves as a good base case was voiced in the literature as well (Lee et al., 2010). Figure 10 summarizes relative speedups achieved by various accelerators using a single-core, serial implementation as a base case.
Both single- and double-precision results are plotted, and both sets of curves exhibit the same characteristics. Thus, without loss of generality, this section considers the single-precision results only, while Section 3.2 will examine the relative performance based on single- or double-precision computations. For each accelerator discussed in Sections 2.4-2.6, the best-performing case was selected (the one with the largest local worksize, as mentioned before). All speedup curves display a similar trend; there is a relatively sharp rise in speedup for small problem sizes (as measured by the number of field points), which levels off as the problem size increases. The MCPU offers speedups from below one for small problem sizes (100² to 400²) to as high as 5 for larger problems (1400² and greater). Even though the theoretical maximum would be 8 if all cores were fully utilized, in practice it is almost unattainable. Thus, the relative speedup of 5 for an MCPU is a reasonable one, considering that no additional hardware was required to attain it. For the GPU-based accelerators considered, the Tesla GPU achieves the highest speedups, followed by the Desktop GPU. For larger problem sizes (1400² and greater) the speedup reached above 134, with a slight drop in speedup as the problem size further increased. The Desktop GPU shows an almost identical trend, albeit with a maximum speedup of about 110 for larger problem sizes. The last hardware accelerator considered, the FPGA, unfortunately shows much more modest performance gains; it only achieves a speedup of about 12.5 for larger problems, positioning it above the MCPU in performance. It appears that GPU-based accelerators offer the greatest achievable speedups among the hardware considered, based on the comparison with the serial implementation.

If the relative speedup comparison is based on the MCPU as a base case, Figure 11 shows that both GPU-based accelerators reach a relative speedup of about 20 (a maximum of 22 for the Tesla GPU and 18.3 for the Desktop GPU). The GPUs' relative speedup seems to steadily drop after reaching its peak. This is attributable to the combined effect of a drop in GPU performance and an increase in MCPU performance for large problem sizes, as already discussed in relation to Figure 10. The FPGA reaches a peak relative speedup of about 2 and drops slightly to 1.7 towards the right of the figure. As expected, if the MCPU is used as the base case, the relative speedups are not nearly as high as for the serial case. Yet, the GPU-based accelerators achieve a speedup as high as 22 over the MCPU.

Effect of local workgroup size

It is evident from Figures 6 through 9 that the local worksize plays an important role in the resulting computation time. Since all accelerators are based on the principle that multiple concurrent threads of execution are running, if only a single thread is run, the performance will be far from optimum. As an upper limit, the maximum number of concurrent threads in a workgroup is summarized in Table 1. Ideally, performance should increase with an increasing number of concurrent threads in a workgroup. However, in that case more threads are accessing common constants and variables in the shared memory, perhaps impacting performance. Another way to look at the data summarized in Figures 6 through 9 is to plot the solution time vs. local worksize for each problem size.

Figure 11. Speedups achieved by various implementations as compared to the MCPU case.
Figure 12. Solution times as a function of local worksize - MCPU.

For the sake of brevity, these plots are only included in this paper for the MCPU and Tesla GPU (as shown in Figures 12 and 13), for the single-precision computation only; similar trends were observed for the Desktop GPU and FPGA as well. The double-precision results exhibited a very similar trend also. For the MCPU, a local worksize between 1 and 4 has little effect on performance. For a worksize of eight, there is a sharp increase in performance (drop in solution time), beyond which, with increasing worksize, there is little or no further improvement. The worksizes considered in this paper, as presented in Section 2.2, were all integer powers of two. But if, for example, a worksize of 10 was used, the performance suddenly drops by as much as 40% (compared to a worksize of 8), as seen in Figure 12. This reiterates that, although it is tempting to use local worksizes that we are more accustomed to (powers of 10), there can be a performance drop. However, by using powers of two for the local worksize, either the global worksize has to be adjusted or some combinations of local and global worksizes are no longer possible.

Figure 13 summarizes performance for the Tesla GPU; in contrast to Figure 12, increasing the local worksize starting from one translates into increased performance. Unlike the MCPU, for the Tesla GPU there is no performance drop if non-power-of-two local worksizes are used; Figure 13 contains data for 10 and 50 as the local worksize, without any performance penalty. Similar to the MCPU, worksizes beyond 50 or 64 offer no appreciable speed increase. Although not shown, the same conclusions can be drawn for both the Desktop GPU and FPGA.

Figure 13. Solution times as a function of local worksize - Tesla GPU.

Single precision vs. double precision - speedup and accuracy

Accuracy is of importance for numerical computation in engineering and science. Many numerical models and methods, like the solution of a system of equations, are sensitive to round-off errors or the number of significant digits in input parameters. Thus, most of these methods generally employ double-precision computation. Even though accuracy in computation is important, one has to consider the quality of the input parameters. For example, in geomechanics, most input parameters, like rock mass properties, are seldom known to within 20-30% of their true mean (Starfield & Cundall, 1988), presenting an opportunity for accepting "less-than-accurate" computation. To investigate the potential loss of accuracy and possible speed gains, Algorithms 1-4 were modified to use double-precision constants and variables. Also, appropriate arithmetic functions (e.g. going from fabsf() to fabs()) were used to avoid unnecessary casts resulting in speed reduction. All test cases were re-run using double precision for both the serial algorithm and the hardware-accelerated ones, where double-precision computation was possible. Figure 5 shows that for the serial (single-core) case there is an approximately 20% performance drop if double-precision computation is used for larger problems, while the greatest difference in solution, measured using an L1 norm, was 0.00093% across all modelled cases (grid points from 100² to 1600²) for the computed displacements. While Figures 6 through 9 show the single- and double-precision performance for each accelerator, Figure 10 contains the composite results expressed as relative speedup.
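The accuracy figures quoted here and in the next paragraph rest on an L1-norm comparison of the computed displacement fields; a small sketch of one such measure, with assumed names, is:

/* Sketch: relative L1-norm difference (in percent) between single- and
   double-precision displacement results. Names are illustrative. */
#include <math.h>
#include <stddef.h>

double l1_diff_percent(const float *u_sp, const double *u_dp, size_t n)
{
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; ++i) {
        num += fabs((double)u_sp[i] - u_dp[i]);
        den += fabs(u_dp[i]);
    }
    return 100.0 * num / den;  /* e.g. 0.00093 for the serial runs */
}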
Double-precision computations on the OpenCL-accelerated MCPU, for larger problem sizes, take on average 9% longer than in single precision, while the difference in solution across all modelled cases, using an L1 norm, was 0.00125%. The mere 9% increase in computation time is not surprising, since the MCPU is a general-purpose chip with good floating-point performance for both single and double precision. Actually, compared to the serial (single-core) case, where the cost of double precision was 20% extra time, the performance of the MCPU was very good. For the GPU-based accelerators, it was found that the double-precision performance of the Desktop GPU was on average 26 times slower than its single-precision computation. In interpreting this low performance, one has to consider that most computer graphics operations on a GPU are optimized for single precision. The literature reports that the double-precision performance of Kepler microarchitecture desktop GPUs is 1/24 of their single-precision performance (Arrayfire, 2015), which confirms our findings. The difference in the solution of displacements across all modelled cases using an L1 norm was 0.00147% on the Desktop GPU. However, the same source reports that the Kepler microarchitecture-based Tesla GPU has a substantially better double-precision floating point performance (1/3 of single precision). Our results indicate that double-precision computations on the Tesla GPU were on average 3.8 times slower than single-precision ones, which is similar to what the literature reports. For this platform, the maximum difference in results, as measured by an L1 norm, was 0.00132% between the single- and double-precision results. As mentioned before, the FPGA-based accelerator was only able to synthesize the single-precision algorithms; thus, we cannot compare its double-precision performance. In summary, most hardware accelerators are capable of carrying out computations in both single and double precision without considerable degradation in accuracy. However, based on the design philosophy behind each accelerator, its double-precision performance can vary significantly. Even though the Tesla GPU's double-precision performance is about one-quarter of its single-precision one, it is still almost 40 times faster than the serial (single-core) implementation on a CPU. If the basis of comparison is the MCPU, the Tesla GPU is on average five times faster than the MCPU for double-precision calculations, as seen in Figure 11. Unfortunately, the Desktop GPU's double-precision performance is on par with the MCPU; thus, it does not offer a performance improvement in double-precision computations.

Speedup-per-watt

Although most accelerators are preferred in order to expedite the completion of a computing task, in the current energy-conscious climate the use of energy is becoming important. Table 1 summarizes the power consumption of the hardware accelerators. However, these are individual components only; many of them cannot function alone, since a computer system requires power for the CPU, motherboard, memory, GPU and other accessories. Thus, to measure the minimum power required to run the system, different scenarios were devised based on which accelerator was used, as summarized in Table 2. For each scenario, the largest model (with 1600² grid points) was run 10 times. For each run, the power was sampled at 1-ms intervals using an in-line energy usage monitor. The power consumptions in Table 2 are an average within each run and are also averaged over the 10 runs.
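The two pieces of this metric reduce to a few lines; the sketch below (names assumed) averages the 1-ms power samples and forms the speedup-per-watt figure used below.

/* Sketch: speedup-per-watt from sampled system power. Names assumed. */
#include <stddef.h>

double mean_power_watts(const double *samples, size_t n)  /* 1-ms samples */
{
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) sum += samples[i];
    return sum / (double)n;
}

double speedup_per_watt(double t_serial, double t_accel, double watts)
{
    return (t_serial / t_accel) / watts;   /* relative speedup per watt */
}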
Interestingly, the actual power consumption of the system is somewhat different from what the sum of active components suggests. For all cases, the system power usage was lower than what the components indicate. For example, Table 1 indicates that the CPU's design power was 84 W, while the single-core scenario consumed on average 53.7 W and the multi-core one needed 72.1 W, both of which were below the values indicated in Table 1, considering that the motherboard and memory consumed some energy as well. Similarly, for the Tesla GPU, Table 1 lists 235 W, while the system power consumption (including CPU, memory, etc.) required 178.5 W in total. Only for the FPGA was the system power consumption in line with Table 1, where 23.7 W was predicted for the FPGA; adding this on top of the single-core CPU scenario resulted in approximately the measured power usage.

Note: a The system was run headless, accessed remotely via VNC.

Having determined the actual power consumption of the system for each scenario, the data in Figure 10 can be updated by dividing the relative speedup by the power usage, expressed as a speedup-per-watt metric. This new metric is shown in Figure 14. For single-precision computations, the lower relative power consumption of the Desktop GPU (in comparison to the Tesla GPU) results in the best performance, in contrast to Figure 10. The FPGA ranks third, slightly above the MCPU; thus, the FPGA presents a viable alternative to CPUs where low-power acceleration is required. However, for double precision, the Tesla GPU still achieves the highest performance, followed by the MCPU and Desktop GPU, reversing the ranking for the last two. In closing, the Tesla GPU overall appears to be the highest performer even when power consumption is considered.

Figure 14. Speedup-per-watt achieved by various implementations as compared to the serial case.

Conclusions

Computation in engineering and science can be challenging. Both the formulation of a mathematical model and the time its evaluation takes can be detrimental to the model's widespread use. A method such as the BEM, often used in geomechanics, was selected as an example of a numerical method which can benefit from acceleration. The computation of field quantities, such as stresses and displacements, in a BEM model can be performed in parallel. This paper considered the acceleration of computation on a wide variety of hardware accelerators: from multiple cores of a CPU, through the use of GPUs, to FPGAs. Each hardware accelerator presented different challenges, because some of them were designed for parallel execution using a large number of concurrent threads (GPUs), while others were designed to run a relatively small set of simultaneous threads (e.g. the number of cores on a multi-core CPU). FPGAs, relative newcomers amongst the OpenCL-based accelerators, were considered in this study for their low power consumption, representing an energy-conscious alternative. Upon performing computation on the hardware accelerators and evaluating their relative performance using various metrics, such as relative speedup, the following conclusions can be drawn:

• If maximum performance is needed, currently Tesla GPUs present the best option. Even though their power requirements are the highest, their performance ranks highest in both single- and double-precision computation.
• For no additional investment, the multi-core CPU OpenCL version of the BEM algorithm offers a modest (five-fold) speedup.
• Although Desktop GPUs were not meant for double-precision computation, if a single-precision formulation of a numerical model can be used, their performance is very high. • Even though FPGAs offer an energy-conscious alternative, their comparatively low speedup and long kernel compilation times currently detract from their potential to be used as accelerators, at least for this class of BEM computation. • The cost of the hardware and software was intentionally excluded from the study; however, if cost is a consideration and single-precision computation is adequate, commonly available Desktop GPUs offer the best value.
Exclusive Cutaneous and Subcutaneous Sarcoidal Granulomatous Inflammation due to Immune Checkpoint Inhibitors: Report of Two Cases with Unusual Manifestations and Review of the Literature
The recent emergence of immune checkpoint inhibitors (ICIs) has revolutionized the treatment of cancers and produced prolonged responses by boosting the immune system against tumor cells. The primary target antigens are cytotoxic T-lymphocyte-associated antigen-4 (CTLA-4), a downregulator of T-cell activation, and the programmed cell death-1 receptor (PD-1), a regulator of T-cell proliferation. This enhanced immune response can induce autoimmune adverse effects in many organs. Although skin toxicities are the most common, sarcoidal inflammation with exclusive cutaneous involvement is a rare occurrence, with only 6 cases reported to date. We report 2 cases with unusual features. One patient is a female who was treated for metastatic renal cell carcinoma with a combination of ipilimumab (anti-CTLA-4) and nivolumab (anti-PD-1). She developed deep nodules showing sarcoidal dermatitis and panniculitis on histopathologic examination. The second patient is a male with melanoma of the eyelid conjunctiva who was treated prophylactically with ipilimumab. He presented with papules/plaques confined to black tattoos, where biopsy revealed sarcoidal dermatitis. With a comprehensive literature review, we intend to raise awareness of this potential skin side effect in the growing number of patients receiving targeted immunotherapies. It is crucial to have a high index of suspicion and perform timely biopsies to implement appropriate management strategies.
Introduction
Despite their tremendous success in the treatment of cancer, ICIs are capable of inducing a variety of immune-related adverse events in many organ systems. The skin is reported to be the most commonly affected organ, ahead of the gastrointestinal tract, liver, endocrine system, and kidneys [1]. The incidence of dermatologic toxicities from ipilimumab (anti-CTLA-4) in metastatic melanoma patients ranges from 49% to 68%, compared to a 24% risk of toxicity when it is used for other cancers such as urothelial carcinoma, pancreatic adenocarcinoma, renal cell carcinoma, and non-small cell lung carcinoma [2]. The most common cutaneous side effects related to ipilimumab are pruritus, morbilliform rash, maculopapular eruptions resembling a dermal hypersensitivity reaction, vitiligo, and lichenoid reactions [3]. With anti-PD-1 drugs, the chance of such adverse cutaneous reactions is 34-39% [1,4]. Other, less common cutaneous toxicities collectively include lichenoid mucositis (tongue, buccal mucosa, gingiva, and lips), exacerbation of psoriasis, immunobullous lesions, erythema multiforme, exfoliative dermatitis, prurigo nodularis, pyoderma gangrenosum-like ulceration, Sweet syndrome, DRESS syndrome, and toxic epidermal necrolysis [5-7]. Sarcoidal-type granulomatous dermatitis, a rare occurrence, was first described by Eckert et al. in 2009 as an adverse effect of ipilimumab for metastatic melanoma [8]. In addition to ICIs, it is noteworthy that sarcoidal lesions can also appear during treatment with kinase inhibitors such as BRAF/MEK inhibitors [6]. ICI-induced cutaneous sarcoidal reactions have been reported in only six patients in the literature to date [2,9-12]. We present two new cases of such reactions with unique and exclusively skin manifestations following immune checkpoint inhibitors.
Case # 1.
A 49-year-old female was referred by her oncologist for evaluation of deep nodules on the left elbow and left forearm of 2 months' duration. She had a history of renal cell carcinoma, clear cell type, and had been treated by radical nephrectomy one year prior to her visit. The tumor was reported to be limited to the kidney cortex with no lymphovascular invasion or regional lymph node metastasis (TNM: T2b, NX). Seven months later, the patient developed metastatic lung lesions. She was then treated with nivolumab (Opdivo) and ipilimumab (Yervoy). The patient started to develop slowly enlarging subcutaneous lesions on her left forearm and elbow one month after the first round of therapy. The patient has a family history of Fragile X syndrome in two of her three sisters and in two brothers, one of whom is also blind. Her parents and children are healthy. On physical examination, there were large, nontender, firm subcutaneous nodules and plaques on her left forearm and elbow, which were more palpable than visible. A skin biopsy was performed that revealed sarcoidal-type granulomatous inflammation in the dermis and subcutaneous tissue (Figures 1(a), 1(b), and 1(c)). Examination with polarized light failed to reveal foreign material. Special stains for fungi (PAS/periodic acid-Schiff) and atypical mycobacteria (AFB and Fite) were negative. In addition, due to the patient's immunocompromised state, appropriate cultures from the affected skin were also obtained, which yielded negative results. The sarcoidal dermatitis and panniculitis were therefore believed to be secondary to the combination therapy with Opdivo and Yervoy. Upon consultation with the treating oncologist, it was decided to discontinue the checkpoint inhibitor therapy after the third round. Systemic workup failed to reveal sarcoidal lesions elsewhere in her body. At the subsequent follow-up visit three weeks later, the patient reported that the lesions were decreasing in size and firmness. She started a new regimen at this time.
Case # 2.
A 58-year-old male presented with lesions occurring only within his black tattooed skin on the chest, shoulders, back, left forearm, and right thigh for the past 3 months. The lesions were tender (only upon pressure), with no itching or pain. All tattoos had been present for more than 5 years. The patient has a medical history of hypertension and eczema, with a family history of colon cancer in both parents. He had been diagnosed with malignant melanoma of the left eyelid conjunctiva 8 months earlier, measuring 1.8 mm with ulceration (TNM stage: pTN2b). A sentinel lymph node from the left preauricular region was negative. The melanoma was treated with Mohs surgery and wide local excision. Metastatic workup was negative. He was later started on four rounds of adjuvant ipilimumab prophylaxis. The rash appeared after the first month of treatment. On physical examination, there were erythematous, scaly, tender papules, plaques, and nodules confined to the black tattooed areas on his chest, shoulders, upper back, left forearm, and right thigh. The red, yellow, and green tattoos were completely uninvolved (Figures 2(a) and 2(b)). With a clinical diagnosis of a possible allergic reaction, he was initially treated with oral prednisone 10 mg/day and 0.1% triamcinolone cream for two weeks with some improvement; however, the rash was persistent. Treatment was switched to topical clobetasol cream, and he was given an intralesional triamcinolone acetonide (Kenalog) injection to an area on the right upper arm.
At his 4-week follow-up visit, due to the lack of significant clinical improvement, a punch biopsy from the left upper arm was performed that revealed sarcoidal-type granulomatous inflammation associated with only the black tattoo areas (Figures 3(a) and 3(b)). Since the tattoos had been present for many years prior to this occurrence with no such reactions, we concluded that the sarcoid reaction was secondary to his ICI therapy. The results were communicated to his oncologist, and it was decided to stop the ICI treatment. A systemic workup failed to reveal lesions elsewhere in his body. In subsequent follow-up visits, the lesions started to improve without further treatment. He is currently being seen at regular intervals by his oncologist, who will continue to monitor him for internal disease.
Discussion
Immune-related adverse events are well-recognized consequences of immunotherapies. Sarcoidal lesions can appear during treatment both with kinase inhibitors, such as BRAF/MEK inhibitors, and with immune checkpoint inhibitors [6,12]. During ICI therapies, sarcoidal reactions most commonly involve the hilar, mediastinal, or thoracic lymph nodes, as well as the pulmonary parenchyma. It is not certain whether the development of sarcoidal lesions carries a better prognosis in patients receiving ICI treatment. In 71% of patients with sarcoidal reactions due to ICIs, the malignancy showed a partial clinical response, remained stable, or went into remission; in 29% of reports, the malignancy progressed. More than 90% of sarcoidal lesions resolved or improved, irrespective of the medical intervention [13]. In 38-49% of the patients, immunotherapy was discontinued; 44-57% of patients were given systemic steroids for their lesions; and local steroid treatment was used in 8 to 24% of reported cases [4,13]. Both of our patients showed only cutaneous and/or subcutaneous involvement with no systemic involvement, and there was no prior history of sarcoidosis in either one. Of note, the therapies were given for metastatic renal cell carcinoma (patient 1) and as adjuvant, prophylactic therapy for a conjunctival melanoma (patient 2). Sarcoidal lesions are mostly reported in the setting of treatment for metastatic melanomas. To our knowledge, we report the first case of sarcoidal granulomatous inflammation following ICI therapy that remained confined within black tattoos on the skin. Only one case of tattoo sarcoid has previously been reported, and in that case additional skin areas and the hilar lymph nodes were also involved [14]. One interesting note is that papulonodular reactions in black tattoos are strong markers of sarcoidosis. The "rush phenomenon" begins with a recent tattoo triggering a local papulonodular reaction; it is characterized by a concomitant reaction in many other black tattoos on the same individual and has proven to be a sarcoidal reaction in the majority of cases. The prevalence of sarcoidosis in papulonodular tattoo reactions is estimated to be increased 500-fold compared to the general population [15]. In our patient, the tattoos had been present for more than 5 years with no history of any reactions. Therefore, we deduce that the ICI therapy must be the main culprit in producing this manifestation. Table 1 summarizes all the previously reported cases of sarcoid/sarcoid-like reactions from ICI therapy that clinically involved the skin, with or without other organ involvement [13,14,16-32].
In summary, of the 36 total cases (including our current two cases) reported to date, 24/36 (67%) were female and 12/36 (33%) were male. Exclusive cutaneous/subcutaneous involvement was reported in 8/36 (22%), including our present cases. The most common sites of skin involvement were the upper and lower extremities. Other locations included the face, scalp, chest, and trunk. Two cases showed tattoo involvement, and in one of them the sarcoid reaction was confined solely to black tattoos (the current report). In addition, localization to dermal scars was seen in two patients. Lymph nodes were the most common extracutaneous organ involved, in 15/36 (42%) of cases, followed by the pulmonary parenchyma in 11/36 (30%). Ipilimumab was the culprit in 11/36 (31%), nivolumab and pembrolizumab each in 8/36 (22%), and combination therapy was reported in 9/36 (25%). The most common type of underlying cancer was melanoma, in 30/36 (83%), which is consistent with previously published research on ICI-induced sarcoidal reactions, reported to occur in more than 75% of patients under melanoma treatment [13]. Melanomas are highly immunogenic, and the neoantigen environment in these cells has a tremendous impact on the antitumor activity of cytotoxic T cells and the response to ICIs. The enhanced destruction of melanoma cells induced by ICI therapy exposes additional neoantigens, which are presented by antigen-presenting cells; this promotes a Th1 response and the release of cytokines that drive the development of granulomatous/sarcoidal lesions during ICI therapy. The pathogenesis of sarcoidal granulomas is complex and involves the interaction of monocytes/macrophages and CD4+ Th1 cells. In response to antigens, and possibly to neoantigens secondary to the destruction of melanoma in ICI therapy, macrophages produce TNF-alpha and interleukins that recruit CD4+ Th cells [33]. Cytokines that enhance Th1 differentiation are upregulated in sarcoidosis; the Th1 cells secrete IFN-gamma and IL-17 and organize the granulomatous structure by promoting the maturation of epithelioid histiocytes and multinucleated giant cells. Sarcoidosis seen during ICI therapy thus reflects a hyperactive immune response. Recent reports also highlight the possible role of Th17 cells in the pathogenesis of sarcoidosis, specifically a subset of CD4+ T cells that produce IFN-gamma and IL-17 [34,35]. Although the development of sarcoidal lesions during immunotherapy may represent a favorable sign of potential therapeutic response, this is not yet completely elucidated and requires further, larger-scale studies to establish clearer guidelines for the clinical management of these patients.
Conclusion
Immune checkpoint-targeted agents can induce a nonspecific enhanced immune response and overstimulation of inflammatory pathways, leading to a spectrum of autoimmune side effects. Among these, sarcoidosis or sarcoid-like lesions are reported, with the majority of cases presenting with lymph node and pulmonary involvement and, less frequently, involvement of the skin and other organs. By reporting two new cases of exclusively cutaneous/subcutaneous sarcoid secondary to ipilimumab and nivolumab immunotherapies (bringing the total in the literature to 8 cases) and through a thorough review of the existing published data, we intend to raise awareness of this potential adverse effect. To our knowledge, we report the first case of sarcoidal granulomatous dermatitis confined solely to black tattoo areas with no systemic involvement.
In light of the increased utilization of successful ICI therapies today, clinicians should have a high index of suspicion and perform timely biopsies of any newly developing, unusual, or persistent cutaneous lesions in the course of treatment, to avoid misinterpreting sarcoid reactions as progressive or recurrent disease and to implement proper management strategies.
High-sensitivity high-resolution X-ray imaging with soft-sintered metal halide perovskites
To realize the potential of artificial intelligence in medical imaging, improvements in imaging capabilities are required, as well as advances in computing power and algorithms. Hybrid inorganic-organic metal halide perovskites, such as methylammonium lead triiodide (MAPbI3), offer strong X-ray absorption, high carrier mobilities (µ) and long carrier lifetimes (τ), and they are promising materials for use in X-ray imaging. However, their incorporation into pixelated sensing arrays remains challenging. Here we show that X-ray flat-panel detector arrays based on microcrystalline MAPbI3 can be created using a two-step manufacturing process. Our approach is based on the mechanical soft sintering of a freestanding absorber layer and the subsequent integration of this layer on a pixelated backplane. Freestanding microcrystalline MAPbI3 wafers exhibit a sensitivity of 9,300 µC Gy_air⁻¹ cm⁻² with a µτ product of 4 × 10⁻⁴ cm² V⁻¹, and the resulting X-ray imaging detector, which has 508 pixels per inch, combines a high spatial resolution of 6 line pairs per millimetre with a low detection limit of 0.22 nGy_air per frame. X-ray flat-panel detector arrays with high spatial resolution and sensitivity can be created using a two-step manufacturing process that separates the fabrication of microcrystalline methylammonium lead triiodide absorber wafers from their integration on pixelated backplanes.
The use of artificial intelligence (AI) in medical imaging is steadily growing [1-3]. Currently, task-specific AI applications are able to match and occasionally exceed human intelligence, and it has been predicted that AI will surpass the skills of a radiologist in the next 50 years (ref. 4). The development of such technology requires advances in computing power and algorithms, but also advances in curation and imaging capabilities. X-ray detectors with increased resolution can, in particular, provide improved medical images that further enhance the benefits of AI technologies (ref. 5). X-ray detectors can be divided into two main classes (refs. 6-8): indirect-conversion detectors and direct-conversion detectors. Indirect converters exhibit high sensitivity but suffer from low spatial resolution (ref. 9). Direct converters can capture high-resolution images (up to 10-15 line pairs per millimetre (lp mm⁻¹)), but at a relatively high applied electric field (1-10 V µm⁻¹) and relatively low sensitivity (refs. 10-14). What is required is an X-ray detector that combines high resolution (to enhance AI performance) with high sensitivity (to reduce the patient X-ray dose). Hybrid inorganic-organic perovskites, such as methylammonium lead iodide (CH3NH3PbI3, abbreviated as MAPbI3), offer high electron and hole diffusion lengths due to their high charge carrier mobilities (µ) and long carrier lifetimes (τ), and have demonstrated promising characteristics for use as direct X-ray converters (refs. 15-21). For example, polycrystalline and single-crystalline hybrid inorganic-organic perovskites have exhibited µτ products of 2.0 × 10⁻⁴ and 1.2 × 10⁻² cm² V⁻¹, respectively, which are in the same range as polycrystalline cadmium zinc telluride (CZT) (refs. 13, 15-21). The high X-ray absorption coefficient of MAPbI3 over large parts of the energy spectrum used in healthcare also makes it an ideal candidate for use in next-generation imaging systems.
However, the integration of direct X-ray converter layers onto a pixelated electrode substrate (often referred to as a backplane), which converts the generated X-ray signal of each pixel into a two-dimensional digital image, can be challenging from a manufacturing perspective. Thus, while X-ray detection with direct-conversion perovskites has been demonstrated, the integration of perovskite detecting layers with pixelated backplanes has received only limited attention (refs. 17, 22). In this Article, we report a two-step manufacturing process to create X-ray flat-panel detectors that combine high spatial resolution and high sensitivity. Our approach separates the fabrication of an X-ray absorber layer (made of microcrystalline MAPbI3) with a thickness of several hundred micrometres from its integration onto the backplane. The integration is subsequently performed at room temperature, and thus backplane limitations regarding the temperature budget are not a factor. Our approach also allows for the independent optimization of the MAPbI3 absorber formation.
X-ray imager architecture
The layer stack of the imaging X-ray detector is shown in Fig. 1a. The glass-based backplane is a self-aligned dual-gate indium gallium zinc oxide thin-film transistor array, which is described and characterized elsewhere (refs. 23, 24). The range of applicable voltages on the backplane is limited due to the missing high-voltage protection of the pixels. The bottom electrodes are made of a molybdenum-chromium alloy. On top of the backplane, a grid structure made of an approximately 10-µm-thick photoresist is used as a mechanical anchoring structure for the thick absorber layer. Without this grid, the mechanical adhesion between MAPbI3 and the backplane was found to be poor, as indicated by the release of the wafer after a few days. With the grid, however, no spontaneous detachment was seen over several months. Pull tests revealed a tensile strength of 100 mN mm⁻². The grid is
filled with liquid MAPbI3 that acts, after its recrystallization, as an adhesion promoter for the attachment of the X-ray absorber, which consists of a 230-µm-thick MAPbI3 layer; this thickness was chosen with respect to the limited applicable bias voltage at the imager. As the cathode, a chromium (Cr) layer is deposited on top of the MAPbI3. The active area of the imager is encapsulated with a barrier foil to avoid environmental influences, as shown in Fig. 1b. This direct-conversion X-ray detector can capture objects with a very high resolution of 6 lp mm⁻¹ (Fig. 1c) and shows an unprecedentedly low detection limit of 0.22 nGy_air per frame at an applied electrical field of 0.03 V µm⁻¹. The resolution capability was tested by X-ray imaging a phantom with structures made of three lines at spacings of 5.0, 5.5 and 6.0 lp mm⁻¹. The detection limit is calculated by taking, for each frame, the average signal of pixels from a region of interest (ROI) for a 1-s-long exposure at different doses and considering a signal-to-noise ratio (SNR) of >3, as shown in Fig. 1d (ref. 25). The beam quality was generated using an X-ray source with an anode bias of 70 kVp (peak kilovoltage), filtering the resulting radiation with 21 mm of aluminium (Al) and additionally attenuating it with 1 mm of Pb (the Supplementary Information provides details about the X-ray setup). The signal is presented in least significant bits (LSB), the smallest possible unit of the analogue-to-digital converter in the readout integrated circuit (ROIC), which, in the settings used, is equivalent to 48 electrons. The dose per frame was varied between 0.33 and 20.61 nGy_air, and the SNR was determined from the resulting height (H) of the pulses and the variance of the background noise (h) according to the following equation: SNR = 2H/h ≥ 3 (ref. 25). The SNR is proportional to the dose (Fig. 1d, inset) and can be fitted linearly, which leads to a dose of 0.22 nGy_air per frame at SNR = 3. This is due to the impressive stability of the mean value of the ROI, in the range of 0-2 LSB over the observed period. Electrical stability is reached after 90 min of biasing the imager at an electrical field of 0.03 V µm⁻¹, which is the time required to achieve an almost ionic-equilibrium state (Supplementary Fig. 1a). With a frame rate of 28.6 frames per second, the detection limit is 6.3 nGy_air s⁻¹, which is 20% lower than the best reported detection limit for perovskites: Wei and co-workers made a 1-mm-thick single crystal of methylammonium lead tribromide (MAPbBr3) alloyed with Cl, achieving the lowest detectable dose rate of 7.6 nGy_air s⁻¹ for a photon energy of 8 keV (ref. 26). Previously reported limits for MAPbI3 are even higher: 19.1 µGy_air s⁻¹ for single-crystal MAPbI3 (ref. 19). With A-site cation variation, Huang et al. could reduce the detection limit to 16.9 nGy_air s⁻¹ (ref. 20).
Two-step manufacturing process
The manufacturing process of our perovskite X-ray detector consists of two phases; the first focuses on the production of the X-ray absorber layer. We chose a mechanical soft-sintering process, which is described in detail in ref. 18.
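As a concrete illustration of this detection-limit procedure, the sketch below fits SNR = 2H/h linearly against dose and solves the fit for the dose at which SNR = 3; the pulse heights and noise value are invented for illustration, not the measured data.

```python
import numpy as np

doses = np.array([0.33, 1.0, 3.0, 10.0, 20.61])            # nGy_air per frame
pulse_heights = np.array([4.5, 13.5, 41.0, 136.0, 281.0])  # H, in LSB (invented)
noise = 3.0                                                # h, background noise (invented)

snr = 2.0 * pulse_heights / noise             # SNR = 2H/h per dose point
slope, intercept = np.polyfit(doses, snr, 1)  # linear fit of SNR versus dose
detection_limit = (3.0 - intercept) / slope   # dose at which SNR = 3
print(f"detection limit ~ {detection_limit:.2f} nGy_air per frame")
```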
The microcrystalline MAPbI3 powder used is commercially available; its grain size varies between 0.1 and 100.0 µm, and 85% of the measured sizes lie below … (Supplementary Fig. 2a,b). For the investigation of the structural properties of the powder, an X-ray diffraction measurement was performed (Supplementary Fig. 2c). The main diffraction peaks at 14.1°, 23.4°, 24.5°, 28.1° and 28.4° correspond to the lattice planes (110), (211), (202), (004) and (220), respectively, and fit well with the literature values for the tetragonal phase of MAPbI3 with the I4cm space group (refs. 27, 28). To form a freestanding, stable wafer, the powder was compacted for 30 min at room temperature using a hydraulic press at a pressure of 75.5 MPa. The compactness of these wafers, with a resulting thickness of 230 µm, is 88% of the theoretical limit calculated from the simulated lattice parameters (refs. 27, 28) and is in good agreement with the density-versus-pressure curve shown in Supplementary Fig. 2 (generated as the calibration plot). The second manufacturing phase is shown in Supplementary Fig. 4. First, a photoresist grid with a height of 10 µm is built up on the backplane by photolithography. A top view of the regular grid structure, with a pixel pitch of 50 µm and a fill factor of 58%, is shown in the scanning electron microscopy (SEM) image in Fig. 2a. The total active area of the X-ray detector is 3.2 × 2.4 cm². Figure 2b shows a micrograph of the grid structure, in which the bottom electrode array can be seen as bright areas. In the second step, the grid on the backplane is filled with MAPbI3 powder, which is then liquefied under a methylamine atmosphere. More details on this reaction can be found elsewhere (refs. 29-31). Within seconds, the previously prepared MAPbI3 wafer is placed on the liquid phase. Fixation occurs when the liquefied MAPbI3 recrystallizes due to the evaporation of excess methylamine during the annealing step at 50 °C, acting as an adhesion promoter between the wafer and the backplane, as shown in Fig. 2c. To investigate the recrystallized grains, SEM cross-section images were obtained, as shown in Fig. 2d-f. During the bonding process, liquefaction of the MAPbI3 wafer itself by the methylamine vapours cannot be excluded, but it will be very limited, since the thickness of the recrystallized layer only slightly exceeds the grid height. The bottom part of an MAPbI3 wafer bonded to an indium tin oxide glass with grid structures at the glass interface is shown in Fig. 2d. The surface of the cross section is rough because of the breaking mechanism of the wafer and glass. Morphology differences between the recrystallized and soft-sintered perovskite can be identified. A closer look inside the soft-sintered absorber is shown in Fig. 2e. The grain size varies between 0.5 and 5.0 µm, and some pores are visible between the powder grains, which are attributed to the mechanical soft sintering. In contrast, recrystallized MAPbI3 has fewer pores between the grains (Fig. 2f), and its grains are smaller, up to 2 µm. The sensitivity, and thus the charge transport properties, of a methylamine-treated MAPbI3 wafer show no notable change compared with a non-treated one. Nevertheless, additional defect states may have been introduced by the change in grain size (refs. 32-34). A great advantage of the presented manufacturing process is the possibility of performing quality control on the MAPbI3 wafer before its attachment to the backplane.
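The quoted press pressure and wafer compactness can be sanity-checked with a short calculation; the wafer mass and the theoretical MAPbI3 density below are assumed illustrative values, not measurements from this work.

```python
g = 9.81                         # m s^-2
force_n = 9_000 * g              # 9 t press load
pressure_pa = 75.5e6             # reported pressing pressure
area_cm2 = force_n / pressure_pa * 1e4
print(f"implied pressing area: {area_cm2:.1f} cm^2")

rho_theory = 4.16                      # g cm^-3, assumed theoretical MAPbI3 density
mass_g, thickness_cm = 0.98, 0.0230    # hypothetical wafer mass and 230 um thickness
rho_wafer = mass_g / (area_cm2 * thickness_cm)
print(f"compactness: {100.0 * rho_wafer / rho_theory:.0f}% of the theoretical limit")
```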
In fact, freestanding wafers can be X-rayed themselves, and the homogeneity of their X-ray absorption can be checked using a commercially available flat-panel detector (Supplementary Fig. 5a-c).
Characterization of freestanding wafers
For a better understanding of the imaging X-ray detector properties, a closer look at the compact wafer is required. We therefore performed impedance measurements on a single device made of a soft-sintered MAPbI3 wafer with platinum (Pt) and Cr electrodes (Supplementary Fig. 6a,b). A schematic of the wafer stack is shown in Supplementary Fig. 11a. According to the resulting geometrical capacitance and the sample dimensions, the dielectric constant of the wafer is ε = 75. This is in good agreement with previously reported values (refs. 35-37). For dark current measurements, a 977-µm-thick, 86% dense wafer was used. The Cr electrode was grounded during the measurements, and the applied voltage was in the range of −200 to +200 V, corresponding to an electrical field of ±0.2 V µm⁻¹. The electrical field was first decreased from 0 to −0.200 V µm⁻¹ in steps of 0.001 V µm⁻¹; thereafter, it was increased from 0 to +0.2 V µm⁻¹. In the current density versus electrical field (J-E) plot (Fig. 3a), the dark current density reaches a maximum value of 8.40 × 10⁻⁴ mA cm⁻² for negative fields, the so-called reverse bias direction, and 1.98 × 10⁻³ mA cm⁻² for positive (forward) bias. This leads to a dark resistivity of ~2.4 × 10⁹ Ω cm, which is of the same order of magnitude as the values reported in ref. 38. This small rectifying behaviour could be caused by the electrodes used and their different work functions, ϕ_Cr = 4.5 eV and ϕ_Pt = 5.7 eV (ref. 39), or by ions and their associated vacancies (ref. 32). The response of the wafer to 2-s-long irradiation with the RQA5 X-ray spectrum (according to the IEC 61267:2005 standard) at a dose rate of 213 µGy_air s⁻¹ and different bias voltages was captured with a Keithley 2400 source meter. This results in a maximum sensitivity of 9,300 µC Gy_air⁻¹ cm⁻² for our wafer at an electrical field of 0.17 V µm⁻¹ (Fig. 3b). Fitting the data with the Hecht equation (ref. 10) (red curve in Supplementary Fig. 11b) yields a µτ product of approximately 4 × 10⁻⁴ cm² V⁻¹. A comparison with previously reported perovskite materials shows that the measured sensitivity is in good agreement with printed MAPbI3 samples (ref. 17), 3.5 times higher than that of earlier soft-sintered MAPbI3 wafers (ref. 18) and over 100 times better than that of single-crystal MAPbBr3 (ref. 16). Another important figure of merit for comparing X-ray detectors of different thicknesses and materials is the electron-hole pair (EHP) generation energy W±. W± is the amount of absorbed radiation energy needed to create a single free EHP (the Supplementary Information provides more information) (refs. 40, 41). In Fig. 3c, the values of W± are plotted as a function of the applied bias for two dose rates. W± decreases with an approximately hyperbolic (f(x) = 1/x) behaviour with increasing electrical field and approaches the empirical limit of W± = 3E_G ≈ 4.5 eV (E_G is the bandgap), which is known as the Klein rule (ref. 40). W± values (on a linear scale) for a total of six different dose rates are shown in Supplementary Fig. 12a. Within the measurement accuracy, W± is independent of the dose rate for electrical fields higher than 0.05 V µm⁻¹, indicating an almost full extraction of the generated charges.
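A hedged sketch of the µτ extraction is given below, assuming the single-carrier form of the Hecht relation; the sensitivity-versus-field points are invented to be roughly consistent with the reported 9,300 µC Gy_air⁻¹ cm⁻² and 4 × 10⁻⁴ cm² V⁻¹ values.

```python
import numpy as np
from scipy.optimize import curve_fit

L = 977e-4  # cm, wafer thickness

def hecht(E, s0, mutau):
    """Single-carrier Hecht relation; E in V/cm, s0 is the saturation sensitivity."""
    x = mutau * E / L
    return s0 * x * (1.0 - np.exp(-1.0 / x))

E = np.array([0.02, 0.05, 0.08, 0.11, 0.14, 0.17]) * 1e4   # V/um -> V/cm
S = np.array([5770, 7910, 8620, 8970, 9170, 9300], float)  # uC Gy_air^-1 cm^-2 (invented)

(s0, mutau), _ = curve_fit(hecht, E, S, p0=(1e4, 1e-4))
print(f"mu-tau product ~ {mutau:.1e} cm^2 V^-1")
```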
To investigate the dependence of W± on the X-ray photon energy, we performed a series of X-ray response measurements with different photon spectra, obtained by varying the anode voltage of the X-ray tube. The simulated X-ray photon spectra are shown in Supplementary Fig. 12b. With the help of these photon energy densities, the energy absorbed by the MAPbI3 wafer can be determined: for an anode voltage of 50 kVp, the wafer absorbed 98.63% of the emitted X-ray spectrum. The absorbed fraction decreases to 71.33% as the anode voltage increases to 120 kVp (Supplementary Table 1). The wafer was irradiated for 1 s each time with a dose of 40 µGy_air under an applied electrical field of 0.043 V µm⁻¹. For better statistical evaluation and to exclude possible degradation effects, the anode voltage was initially increased from 50 to 120 kVp and then decreased from 120 back to 50 kVp. The corresponding X-ray pulses show an increasing pulse height with higher anode voltages, with the maximum value at around 100 kVp (Supplementary Fig. 12c). The increase is proportional to the photon energy fluence of the X-ray tube. The resulting average W± value is 5.99 ± 0.20 eV (Fig. 3d); that is, W± is independent of the X-ray photon energy. This is of great importance, since X-ray photons of lower energy carry higher diagnostic information. Furthermore, no degradation caused by irradiation was observed after a cumulative dose of 11 Gy_air (Supplementary Fig. 13). Similar to MAPbI3, the theoretical limit of W± in amorphous selenium (a-Se) ranges between 5 and 6 eV; in practice, however, values of 40-50 eV at an electrical field of 10 V µm⁻¹ are achieved (ref. 42). In addition, a-Se shows a decreasing W± with increasing X-ray photon energy (ref. 42). All the results presented here for MAPbI3 wafer devices show the great potential of this material class for X-ray detector applications.
Characterization of X-ray detectors
Building on the freestanding-wafer measurements, we were able to manufacture a pixelated MAPbI3 X-ray imaging detector with outstanding performance. X-ray characterization was performed by varying the dose over five orders of magnitude, initially using the RQA5 spectrum and later adding different filters. For the corresponding photon energy densities, see Supplementary Fig. 14a. The level of absorption of these three spectra for different thicknesses of MAPbI3 is shown in Supplementary Fig. 14b. With the effective thickness of our wafer (red dashed line), we obtained theoretical absorption values of 50%, 46% and 36% for the RQA5, RQA5 with 15 mm Al filter and RQA5 + 1 mm Pb spectra, respectively. To investigate the imager response to 1-s-long X-ray pulses, video sequences were taken with an integration time of 35 ms per frame. Frame sequences were acquired in the rolling-shutter mode without synchronization between the X-ray source and the X-ray detector. The offset-corrected signals under the maximum applicable electrical field of 0.03 V µm⁻¹, using the RQA5 spectrum with doses ranging from 44 to 8,390 nGy_air per frame, are presented in Fig. 4a.
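Because W± is defined as the absorbed radiation energy per collected electron-hole pair, it reduces to a two-line calculation once the absorbed energy and the collected charge are known; both inputs below are assumed values for illustration only.

```python
e = 1.602e-19   # elementary charge, C
E_abs = 2.0e-9  # J, assumed energy absorbed by the wafer during one exposure
Q = 3.3e-10     # C, assumed collected (dark-subtracted) charge

n_ehp = Q / e                   # number of collected electron-hole pairs
W_pm_eV = (E_abs / n_ehp) / e   # energy per pair in J, converted to eV
print(f"W_+- ~ {W_pm_eV:.1f} eV")
```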
The pulses show a good time response, as indicated by their steep rise and fall. To evaluate the linearity of the imager response over a wider dose range, we used the RQA5 spectrum in combination with an additional 15 mm Al filtration. The detector shows good linearity over a dose range of three orders of magnitude (Fig. 4b and Supplementary Fig. 15a), and Supplementary Fig. 15b shows a close-to-ideal dose dependence (slope, 0.99). Image lag describes the amount of charge that is carried over from one image frame to the next (ref. 21). In Supplementary Fig. 15c, image lags for different doses after 1 frame (35 ms), 5 frames (175 ms) and 10 frames (350 ms) are shown. In comparison to a-Se and amorphous silicon, MAPbI3 shows a higher lag: lags of around 1.2% for amorphous silicon and 0.7% for a-Se are detected after 330 ms (refs. 43, 44). The image lag of MAPbI3 after 1 frame is similar to that of polycrystalline CZT and better at larger frame counts (ref. 14). Since the applied electrical field on our detector (0.03 V µm⁻¹) is rather low, we expect a much lower image lag at higher electric fields. The sensitivity varies between 1,010 and 1,060 µC Gy_air⁻¹ cm⁻² (Supplementary Fig. 16a). A comparison with previously reported detector materials shows that the measured sensitivity is four times higher than that of CZT (refs. 12, 13) and up to 60 times better than that of a-Se (ref. 10). Another key parameter of imager performance is the modulation transfer function (MTF), that is, the spatial resolution or relative response as a function of spatial frequency (ref. 44). The MTF plot (Fig. 4c) shows that, for the measurement range used here, the MTF is independent of the X-ray dose. For comparison with another X-ray imaging detector commonly used for radiography applications, an indirect-conversion complementary metal-oxide-semiconductor detector (Xineos-2222HS, Teledyne DALSA) was chosen. Its MTF (Fig. 4c, red curve) is clearly worse than that of the MAPbI3 detector. A further comparison of the two detectors is shown in Fig. 4d. The image made by the indirect-conversion detector is blurred, and none of the resolution phantoms can be resolved; in contrast, 5.0 lp mm⁻¹ can be resolved using our MAPbI3 detector. To demonstrate the good resolution, part of a hearing aid and a coronary stent (diameter, 2 mm; length, 15 mm; rectangular mesh cross section, 100 µm × 200 µm; Supplementary Fig. 17) were X-rayed with an exposure of 4.17 µGy_air per frame in the RQA5 spectrum. A photograph of the objects is shown in Fig. 4e. The resulting X-ray images offer a detailed view of the structures examined; even the mesh of the stent (Fig. 4f, right) is clearly visible. These results show the outstanding potential of the MAPbI3 imaging detector fabricated by our two-step manufacturing process for use in the medical field.
Conclusions
We have reported a two-step manufacturing process for MAPbI3 X-ray flat-panel detectors based on the mechanical sintering of a freestanding absorber layer and the integration of this layer on a pixelated backplane. A photoresist grid functions as a mechanical anchor, and recrystallized MAPbI3 acts as the adhesion promoter for the soft-sintered thick MAPbI3 absorber layer. We used our approach to create a pixelated X-ray detector with a resolution of 6 lp mm⁻¹, a sensitivity of 1,060 µC Gy_air⁻¹ cm⁻² and a detection limit of 0.22 nGy_air per frame at an applied electrical field of 0.03 V µm⁻¹.
The EHP creation energy W± of 12.4 eV is still higher than the empirical limit of 4.5 eV given by the Klein rule (ref. 40), but we have shown, via measurements on freestanding wafers, that a W± value of 5.99 eV should be achievable when applying an electrical field higher than 0.05 V µm⁻¹. These freestanding wafers were produced with the same soft-sintering approach as the X-ray detector and confirmed the electrical transport properties of MAPbI3. A sensitivity of 9,300 µC Gy_air⁻¹ cm⁻² and a µτ product of 4 × 10⁻⁴ cm² V⁻¹ could be achieved. We also showed that the W± value of our freestanding MAPbI3 devices is independent of the X-ray photon energy. Hence, the freestanding compact MAPbI3 wafer has very high potential for stable and excellent detection over the whole energy range of X-ray applications. We have illustrated that this technology can be scaled to large detection areas via integration on a 508 pixels per inch backplane with 640 × 480 pixels. To further improve the performance (sensitivity and dynamic behaviour) of the pixelated MAPbI3 detector to the level shown for freestanding wafers, the backplane must be tailored to accommodate a higher electric field. Additional interlayer engineering of the detector stack is required to further reduce the dark current. The development of such X-ray detectors with high resolution and sensitivity can, we believe, speed up the translation of AI to routine clinical practice in X-ray imaging applications and help improve healthcare outcomes. In the short term, applications in the fields of general radiography, fluoroscopy, angiography and neurology could benefit from improved resolution, which today is an option only in mammography. Moreover, mammography will benefit from the improved sensitivities that are typical of indirect-conversion detectors. Improved sensitivities can also lead to lower X-ray doses with similar or improved image quality. Currently, these different applications rely on different technologies; our MAPbI3 X-ray imaging detector could potentially provide a single technology for all of them.
Methods
Device manufacturing. For this paper, two different devices were used. For the first device, freestanding CH3NH3PbI3 (MAPbI3) wafers with a diameter of 15 mm and thicknesses between 880 and 1,000 µm were produced using commercially available MAPbI3 powder (Xi'an Polymer Light Technology). The powder was sieved with a 50 µm mesh, filled into a height-adjustable powder container, and a stainless-steel cylinder with a polished surface was placed on top. The hydraulic press (PerkinElmer) can apply up to 9 t, which corresponds to a pressure of 495 MPa for these wafers. The soft sintering of the wafers was done at a pressure of 110 MPa (2 t), applied for 30 min at a temperature of 70 °C. On the two wafer surfaces, electrodes with an area of 1 cm² were deposited via physical vapour deposition; for that purpose, 100 nm of Cr on one side and 100 nm of Pt on the other side were sputtered. The second device is the X-ray image sensor, which has the following parts. The backplane consists of an indium gallium zinc oxide thin-film transistor (TFT) array comprising 640 × 480 pixels with a pixel pitch of 50 µm and a resolution of 508 pixels per inch.
The negative photoresist grid, with a height of 10 µm, comprised a 5:1 mixture of SU-8 50 (Kayaku Advanced Materials, distributed by micro resist technology) and cyclopentanone (Sigma-Aldrich); it was deposited on the backplane by spin coating at 3,000 rpm and structured by optical lithography. MAPbI3 powder was filled into the grid-like structure and liquefied under a methylamine gas (Sigma-Aldrich) atmosphere. Fabrication of the absorber layer was performed using a slightly modified version of the procedure presented in ref. 1. The wafer was pressed at 9 t (75.5 MPa) at room temperature for 30 min. On top of this wafer, an 80-nm-thick Cr electrode and 100 nm of gold (Au) were sputtered. The manufactured wafer was placed on the liquid MAPbI3 and became fixed after its recrystallization. To ensure good evaporation of the excess methylamine gas, the sample was placed on a heating plate at 50 °C for 30 min. The stacked layers were encapsulated by laminating a high-barrier film (TESA 61572) to avoid degradation due to external stimuli. Image readout and processing for the pixelated X-ray detector. Images from the 480 × 640 pixel TFT panel were read by a commercially available ROIC (AD71124, Analog Devices). The signal at the input was simultaneously integrated, amplified, low-pass filtered and converted from analogue to digital with a 16-bit converter. The integrator feedback capacitance C_f was 0.125 pF, and the integration time was 35 ms. Readout followed the rolling-shutter principle, and the X-ray tube and detector were not synchronized. The acquisition frame rate was ~28.6 frames per second. To eliminate fixed-pattern noise (offset compensation), an offset map was generated by averaging 100 dark images; this map was subtracted from the images. The mean value and standard deviation were deduced from the ROI (Supplementary Fig. 18). The standard deviation in the ROI represents the electronic noise in LSB (~11 LSB); the noise in the image sensor in terms of electrons (considering 48 electrons per LSB) is 535 electrons. X-ray recordings of objects were offset-compensated by subtracting the offset map and flat-field corrected using the gain factor of each pixel calculated from a flat-field image. The MTF is determined by the slanted-edge method (ref. 45). First, an object with a sharp tungsten edge is placed on the X-ray detector and the edge profile is derived from the resulting X-ray image. The line-spread function is obtained by differentiating the edge profile, and the Fourier transform of the line-spread function defines the MTF. Electrical characterization of freestanding wafers. The current density measurement was performed with a Keithley 2400 source meter sampling at 10 Hz, connected to a sample holder filled with argon gas. The MAPbI3 wafers were thus measured in an inert atmosphere, protected from moisture and light. For measurements during X-ray exposure, the sample holder was placed in an Al box underneath a MEGALIX Cat Plus 125/40/90 (Siemens Healthineers AG) X-ray source with a tungsten anode. The distance between the sample and the X-ray source was 120 cm. The X-ray dose was varied by changing the tube current over two orders of magnitude; it was measured with a PTW Diados T11003-001896 dosimeter and adjusted with the correction factors provided in its datasheet for anode voltages other than 70 kVp. For more details, see the Supplementary Information. SEM.
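A minimal sketch of this slanted-edge MTF evaluation is shown below. A full implementation would first project the slanted edge into an oversampled edge-spread function (ESF); here a synthetic ESF stands in for that step, and the windowing choice is an assumption.

```python
import numpy as np

def mtf_from_edge(esf, sample_pitch_mm):
    """Differentiate the edge profile into an LSF and Fourier-transform it."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hanning(lsf.size)                           # suppress noise at the ends
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                         # normalize to the DC value
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch_mm)  # spatial frequency, lp/mm
    return freqs, mtf

# Synthetic edge profile for illustration (50 um pixel pitch as in the imager):
x = np.linspace(-1.0, 1.0, 256)
esf = 1.0 / (1.0 + np.exp(-x / 0.02))
freqs, mtf = mtf_from_edge(esf, sample_pitch_mm=0.05)
```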
To obtain high-resolution SEM images of the MAPbI3 perovskite layers, a Schottky field-emission SEM (JEOL JSM-7610F) was used at an acceleration voltage of 15 kV. For the cross-sectional image, the glass of the imager was scratched with a diamond pen and then broken by hand. SEM images of the grid photoresist were obtained with an FEI Quanta 3D FEG microscope, using a 5 kV electron beam and a secondary-electron detector. X-ray diffraction measurement. The structural analysis of the MAPbI3 powder consisted of XRD measurements performed in classical ex situ Bragg-Brentano geometry using a Panalytical X'pert powder diffractometer with filtered Cu Kα radiation and an X'Celerator solid-state strip detector.
Data availability
The datasets analysed in this study are available from the corresponding authors upon reasonable request.
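As a quick cross-check of the powder diffraction data, Bragg's law d = λ/(2 sin θ) converts the peak positions listed earlier into lattice spacings; only the Cu Kα wavelength from the text is assumed.

```python
import numpy as np

lam = 0.154  # nm, Cu K-alpha wavelength
two_theta = np.array([14.1, 23.4, 24.5, 28.1, 28.4])  # degrees, peaks reported above
d = lam / (2.0 * np.sin(np.radians(two_theta / 2.0)))  # Bragg's law, n = 1
for tt, dd in zip(two_theta, d):
    print(f"2theta = {tt:5.1f} deg -> d = {dd:.3f} nm")
```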
Gamification in education: threats or new opportunities
This article discusses the expediency and efficiency of implementing gamification in education. The study is a meta-analysis of modern experience with the application of gamification, drawn from numerous international studies, practical cases, and reports of educational institutions. The influence of gamification technologies on cognition and learning processes has been demonstrated by neurobiological studies of students' brain activity during play, using electroencephalography, functional magnetic resonance tomography, and functional near-infrared spectroscopy. On the basis of an analysis of gamification's effects on learning efficiency, conclusions were drawn and compiled into a generalized list of the opportunities, problems, and threats of applying gamification elements in education. The study systematizes practical results presented in corporate cases and the reference literature, which note the need for specialized online platforms or automated control systems for the systematic introduction of gamification into education. Practical cases in education report the results of gamification as mainly positive; however, negative results were also mentioned, which should be analyzed and which are included in the list of threats in this study.
Introduction
In the past decade, the transformation of national economies has been aimed at the transition to business digitization, and the widespread introduction of digital technologies is creating new realities for their functioning. A particular role in Russia is played by the development of an education system for training specialists in new professions, which assumes the application of a network principle for the functioning of educational institutions, involving all their resources [1], as well as supplementing the conventional education scheme with new educational practices and technologies. Scientific investigation of the gamification concept has thus become an obvious trend [2], as it allows gaming functions to be combined with cognitive ones. However, the quantitative indices currently available for analysis are insufficient to prove the efficiency and benefits of gamification in education. At the same time, studies of the game sector indicate a growing number of gamers worldwide: about 2.5 billion people in 2019, with more than 65 million people in Russia participating in various computer games [3]. Accordingly, the application of gamification technologies to involve students in education seems attractive, although it requires systematization of the theoretical knowledge about the main aspects of this concept, as well as identification of the already revealed advantages and disadvantages of gamification [4,5].
Methods
This study was based on the following general research methods: search, collection, systematization, analysis, and comparison of data, and verification of the scientific hypothesis about the influence of gamification on the student (research object 1) and on education efficiency (research object 2). In addition, foreign publications were analyzed, as well as practical cases of Russian and foreign educational institutions. The following criteria were applied for the selection of publications: the scientific character of the research presented, and a description of the experiment and of practically verified results.
The analysis involved the results of studies performed using functional near-infrared spectroscopy, diffuse optical tomography, functional magnetic resonance tomography, and electroencephalography. In addition, the technical capabilities of the equipment and the results of its operation were analyzed.
Results
The experimental results of gamification implementation in education made it possible to highlight the most significant opportunities and threats of gamification in education, as summarized in Table 1.
Discussion
Modern principles, methods, and elements of the gamification concept are described in detail in reports of major world universities, global conferences, and international forums. According to a 2009 report of the Massachusetts Institute of Technology [6], the principles of gamification comprise three main properties of its ancestor, the game: a unified and clear set of rules, a rapid feedback system, and a distinctly formulated goal. These principles are also used in traditional education; however, with the addition of game mechanics, learning becomes more attractive to the audience, not only involving participants in the game but also promoting their self-development and achievement of results. In the report of the World Government Summit 2016, the elements of gamification are subdivided into three categories. Mechanical elements are most often arranged in a storytelling format and based on adaptation of the learning material using incremental progression. Personal elements are based on the formation of a student's profile, in which personal and group achievements are reflected. Emotional elements use the psychological technique of the flow state, in which students involved in the game enter a mental state of total concentration on the activity and the achievement of results. A particular influence on the successful implementation of gamification is exerted by the freedom of participants' actions, comprising the five freedoms of play: the freedom to fail, the freedom to experiment, the freedom of self-expression, the freedom of effort, and the freedom of interpretation [6]. The influence of visualization on brain neural activity during play is confirmed by various neurobiological investigations [7] using electroencephalography, functional magnetic resonance tomography, and functional near-infrared spectroscopy; as a consequence, conclusions about improvements in efficiency and attention have been drawn. The works [8-11] describe the advantages of gamification in higher education and provide evidence of improved team relations, involvement, motivation, receptiveness to learning, deepening of practical skills, and student satisfaction and achievement. Many authors specializing in the gamification of education mention its positive effects: improved motivation and involvement in education and the development of healthy competition [9,12]. Let us consider some examples of successful adaptation of gamification technologies for education. Minecraft is a computer game with worldwide sales of over 120 million units; Minecraft EDU is a special version of the game adapted for use in schools and used in more than 100 countries of the world [5]. ClassDojo is a service allowing teachers to provide instant feedback to pupils in game form. Kahoot is a learning platform for developing gamified tests; for instance, a teacher displays test questions with answer options on a screen, and students should respond as quickly as possible to obtain a higher score.
The platform is used by more than 3.5 million teachers, and investments in the project amount to about USD 16.5 million [13]. Duolingo is a service for the independent learning of foreign languages in game form, used by more than 15 million people. The results of implementing gamification in LinguaLeo are as follows: activation increased by 30%, user retention increased by 15%, and viral traffic amounts to 10-30% of the main traffic [14].
Conclusion
While developing their own digital educational programs, teachers should play various roles: an innovator like an entrepreneur, an empath like a psychologist, and a manager [15]. At the same time, the importance of practical research based on the comparison of reference and experimental groups of students, in order to reveal the weak and strong sides of programs with gamification elements and to search for ways of improving them, cannot be overestimated. The opportunities and threats systematized in this work will assist future design thinkers in taking them into account when developing their own gamified educational programs.
Enhancement of Crystallization Behaviors in Quaternary Composites Containing Biodegradable Polymer by Supramolecular Inclusion Complex
Novel multi-component composites composed of the biodegradable polymer poly(ethylene adipate) (PEA), the water-soluble polymer poly(ethylene oxide) (PEO), poly(vinyl acetate) (PVAc), and a supramolecular-like inclusion complex (IC) made from α-cyclodextrin (α-CD) and poly(ε-caprolactone) (PCL) (coded as PCL–CD–IC) are discussed in this work. The PCL–CD–IC was used to increase the crystallization rate of the miscible PEA/PEO/PVAc ternary blend, which crystallized more slowly than neat PEA. High-resolution SEM and TEM images showed that the PCL–CD–IC did not aggregate noticeably in the quaternary composites. In the isothermal crystallization results, analysis with the Avrami equation demonstrated that the rate constant k increased with the addition of PCL–CD–IC to the composites, suggesting that the PCL–CD–IC provided more nucleation sites and thereby promoted the crystallization rate. The nucleation density increased with the addition of PCL–CD–IC, and the number of spherulites also increased. Wide-angle X-ray results showed that the composites displayed diffraction patterns similar to that of neat PEA, meaning that PEO, PVAc, and PCL–CD–IC do not change the crystal structure of PEA in the composites. The PCL–CD–IC, a supramolecular nucleation agent, demonstrated its superior ability to enhance the crystallization of multi-component biodegradable-polymer composites in this study.
Introduction
Biodegradable polymers can be biodegraded to form simpler compounds that are then redistributed through elemental cycles such as those of carbon and nitrogen [1-6]. Thus, they are more sustainable and environmentally friendly than traditional plastics. Poly(ethylene adipate) (PEA) is a biodegradable aliphatic polyester made from a glycol and a diacid. However, PEA shows a slow crystallization rate and poor thermal stability, which limits its application [7,8]. The crystalline properties of PEA can be enhanced by blending it with nucleation agents [7,8]. It has been reported that graphene oxide (GO), as a nucleation agent, disperses well in a PEA/GO composite; graphene oxide not only enhanced the nucleation density of PEA but also improved its spherulite growth rate [7]. Tang et al. [8] used a diamide derivative, N,N'-ethylenebis(12-hydroxystearamide) (EBH), as a nucleation agent to promote the crystallization of PEA. It was found that the addition of EBH significantly reduced the crystallization time of PEA; moreover, the crystallization rate was accelerated. The addition of a nucleation agent is thus a useful way to modify the crystallization of PEA for its future applications. Poly(ethylene oxide) (PEO) is a water-soluble polymer with low toxicity and good biocompatibility [9]. It can be used as a plasticizer to produce a plasticizing effect that increases chain mobility. […] PCL was added to the α-CD solution and stirred vigorously at 60 °C for 3 h for the complexation process. The complexation was completed by further stirring the solution overnight at room temperature, and PCL–CD–IC was obtained as a precipitate in the solution. After filtering the PCL–CD–IC out of the solution, it was washed several times with acetone and distilled water to remove the uncomplexed PCL and α-CD, respectively. The washed PCL–CD–IC was vacuum dried at 45 °C for one week before use.
A similar method for preparing PCL-CD-IC has also been reported in the literature [18]. The quaternary PEA/PEO/PVAc/PCL-CD-IC composites were fabricated by solution blending. To prepare a PEA/PEO/PVAc/PCL-CD-IC solution, PCL-CD-IC was first dispersed in chloroform by ultrasonic treatment (20 min) to make a PCL-CD-IC solution, and a certain amount of it was then added to a PEA/PEO/PVAc solution (with chloroform as the solvent) under continuous stirring for 10 min. It should be noted that the concentration of the PEA/PEO/PVAc/PCL-CD-IC solutions was approximately 3 wt %. The contents of PEA, PEO, PVAc, and PCL-CD-IC in the solutions are expressed as relative weight ratios, for example, PEA/PEO/PVAc/PCL-CD-IC = 80/10/10/0.5. The PEA/PEO/PVAc/PCL-CD-IC solutions were cast by evaporating the solvent at 40 °C and vacuum drying for 2 days at 45 °C. The obtained specimens were used for the following characterizations and measurements.

Investigations on Thermal and Crystallization Behaviors We studied the thermal properties and crystallization behaviors with a differential scanning calorimeter (DSC, PerkinElmer DSC-8500, PerkinElmer, Waltham, MA, USA). A scan rate of 20 °C/min was used to detect the general thermal behaviors of the specimens. For the isothermal crystallization treatment of the PEA/PEO/PVAc/PCL-CD-IC composites, the specimens were first heated and held at 100 °C for 3 min to erase thermal history, and then rapidly cooled to various crystallization temperatures (Tc = 16, 18, 20, 22 °C) to complete crystallization. Relevant information about the isothermal crystallization was recorded for further analysis.

Structural Identification with 1H Nuclear Magnetic Resonance (1H NMR) Spectra To identify the structure of PCL-CD-IC, 1H nuclear magnetic resonance (1H NMR) spectra were measured on a Varian Unity Inova 400 (Agilent, Santa Clara, CA, USA). DMSO-d6 was used to dissolve the specimens, and its signal at δ = 2.5 ppm served as the internal standard for the characterization.

Studies of Fourier-Transform Infrared Spectroscopy (FTIR) Spectra Fourier-transform infrared spectroscopy (FTIR) spectra of the composites were recorded with a PerkinElmer Frontier (PerkinElmer, Waltham, MA, USA) at wavenumbers from 400 to 4000 cm−1. The samples for the FTIR measurements were prepared on KBr disks using the solution-casting method. The resolution of each measurement was 2 cm−1.

Discussions on Crystalline Structures with Wide-Angle X-ray Diffraction (WAXD) We employed wide-angle X-ray diffraction (WAXD) to discuss the crystalline structures of the PEA/PEO/PVAc/PCL-CD-IC composites. A Bruker D2 PHASER diffractometer (Bruker, Billerica, MA, USA) with Cu-Kα radiation (λ = 0.154 nm) was applied. The samples used for the WAXD studies were crystallized isothermally at 22 °C before the measurements. The WAXD experiments were performed at a scan rate of 1°/min between 2θ = 5° and 50°.

Morphological Observation Scanning electron microscopy (SEM, Hitachi S3000, Hitachi, Tokyo, Japan) and transmission electron microscopy (TEM, JEM-1400, JEOL, Tokyo, Japan) were used to study the dispersion and morphology of the PCL-CD-ICs in the composites. The samples for SEM were coated with gold to increase their conductivity. The samples for TEM were prepared on copper grids by solution casting.
The spherulite morphology of PEA in the composites was also observed through a polarization optical microscope (POM, Olympus CX41, Olympus, Tokyo, Japan) equipped with a Linkam THMS-600 hot stage. Prior to the POM observations, the specimens were melted at 100 °C for 3 min and then crystallized isothermally at 30 °C.

Results Figure 1 shows the DSC thermograms of neat PCL, α-CD, and PCL-CD-IC. For PCL-CD-IC, the melting transition of PCL was clearly diminished. This result indicates that the crystallization of PCL was affected by the formation of inclusion complexes between PCL and α-CD: the crystallization of PCL was restricted because most of the PCL chains were covered by α-CD during the formation of the PCL-CD-IC.

The stoichiometry of PCL-CD-IC was estimated from the 1H NMR spectra shown in Supplementary Materials Figure S1. According to the spectra in Figure S1, the molar ratio of PCL repeating unit to α-CD and the coverage ratio of PCL-CD-IC can be related to the ratio of the integral intensity at 4.8 ppm (CH of α-CD) to the integral intensity at 1.5 ppm (CH of PCL). Similar identifications and estimations have also been reported in the literature [22,23]. The molar ratio of the PCL repeating unit to α-CD was estimated to be 1.15, and the coverage ratio of PCL-CD-IC was approximately 76%. These results show that most of the PCL chains in the PCL-CD-IC were covered by α-CD.

The FTIR spectra of neat PCL, α-CD, and PCL-CD-IC are presented in Figure S2 in the Supplementary Materials. We found that the hydroxyl-stretching band of α-CD (3367 cm−1) differed from that of PCL-CD-IC (3391 cm−1). In addition, the carbonyl-stretching band of PCL (1725 cm−1) differed from that of PCL-CD-IC (1739 cm−1). The shifts of the hydroxyl-stretching and carbonyl-stretching bands revealed by the FTIR spectra further indicate the presence of inclusion between PCL and α-CD and the interactions between the hydroxyl groups of α-CD and the carbonyl groups of PCL in the PCL-CD-IC. Similar spectral features have also been used to characterize the occurrence of inclusion in the literature [24,25].
The miscibility of the PEA/PEO/PVAc/PCL-CD-IC quaternary composite was confirmed by a DSC thermal scan, and the typical result for the PEA/PEO/PVAc/PCL-CD-IC = 80/10/10/1 composite is shown in Figure S3 in the Supplementary Materials. As shown in Figure S3, a single glass transition temperature (Tg) was found for the PEA/PEO/PVAc/PCL-CD-IC composite, demonstrating the miscibility of the composite.

The isothermal crystallization kinetics of the PEA/PEO/PVAc/PCL-CD-IC quaternary composites were thoroughly investigated in this study. Figure 2 displays the DSC measurements of neat PEA and the PEA/PEO/PVAc/PCL-CD-IC quaternary composites at different crystallization temperatures (Tc's). The typical results of PEA/PEO/PVAc/PCL-CD-IC = 80/10/10/x composites are presented herein, where the "x" values were 0, 0.5, 1, and 2 and represent the relative weight ratio of PCL-CD-IC in the composite. Firstly, it was found that for the PEA/PEO/PVAc/PCL-CD-IC = 80/10/10/0 specimen, without added PCL-CD-IC, the crystallization peak moved to a longer crystallization time. This result means that the presence of PEO and PVAc in the composite slows down the isothermal crystallization of PEA. On the other hand, adding PCL-CD-IC can further promote the isothermal crystallization of PEA in the composites. The Avrami parameters obtained from the data in Figure 2 are summarized in Table 1. We found that the n values of neat PEA and the PEA/PEO/PVAc/PCL-CD-IC quaternary composites were similar, lying between 2 and 3, which implies that the crystallization mechanism of PEA is not significantly influenced by the addition of PEO, PVAc, and PCL-CD-IC. The values of the rate constant k and of 1/t0.5 tended to increase with increasing PCL-CD-IC content in the composites, indicating that PCL-CD-IC increased the crystallization rate of the composites. The PCL-CD-IC can therefore be an effective nucleation agent for the crystallization of PEA in the quaternary PEA/PEO/PVAc/PCL-CD-IC composites.
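To make the Avrami analysis concrete, the short Python sketch below fits the Avrami model X(t) = 1 − exp(−k tⁿ) to relative-crystallinity data and derives the half-time t0.5 (and hence 1/t0.5); the data points are hypothetical placeholders rather than values measured in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Avrami model for relative crystallinity: X(t) = 1 - exp(-k * t**n)
def avrami(t, k, n):
    return 1.0 - np.exp(-k * t ** n)

# Hypothetical relative-crystallinity data (time in minutes), e.g. obtained
# by integrating a DSC exotherm at a fixed Tc; not values from this study.
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
X = np.array([0.03, 0.12, 0.26, 0.42, 0.68, 0.84, 0.97, 0.995])

(k, n), _ = curve_fit(avrami, t, X, p0=(0.1, 2.0))

# Half-time from X(t_half) = 0.5  =>  t_half = (ln 2 / k)**(1/n)
t_half = (np.log(2.0) / k) ** (1.0 / n)
print(f"k = {k:.4f} min^-n, n = {n:.2f}, 1/t0.5 = {1.0 / t_half:.3f} min^-1")
```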
The typical SEM and TEM images of the composite are shown in Figure 4a,b. The images were acquired from the PEA/PEO/PVAc/PCL-CD-IC = 80/10/10/1 composite, with SEM and TEM magnifications of 2000× and 25,000×, respectively. As shown in the SEM image of Figure 4a, the PCL-CD-IC was well dispersed in the composite, with no obvious agglomeration and particle sizes of approximately a few micrometers. In addition, as shown in Figure 4b, the PCL-CD-IC can also be observed by TEM, and a particle size of approximately 3-5 μm was confirmed.

Figure 6 presents the WAXD results of neat PEA and the PEA/PEO/PVAc/PCL-CD-IC composites. The quaternary compositions of the PEA/PEO/PVAc/PCL-CD-IC composites shown here were 80/10/10/0 and 80/10/10/1. Due to the reflections from the (111), (110), and (020) crystalline planes [28][29][30], neat PEA presented three main diffraction peaks at 2θ = 20.4°, 22.3°, and 24.7°, respectively. We found that PEA had no peak shift after forming the composite. The WAXD patterns of PEA were very similar to those of the PEA/PEO/PVAc/PCL-CD-IC composites, which means that PEA does not change its crystalline structure after the addition of PEO, PVAc, and PCL-CD-IC. In addition, since the X-ray reflections of the quaternary composite appear less pronounced, the crystallinity of the quaternary composite should be slightly lower.
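For readers who wish to relate the peak positions to lattice spacings, Bragg's law gives d = λ/(2 sin θ); the sketch below applies it to the 2θ values quoted above using the Cu-Kα wavelength of the WAXD setup (the peak list comes from the text, the conversion itself is standard).

```python
import numpy as np

# Bragg's law: lambda = 2 d sin(theta)  =>  d = lambda / (2 sin(theta))
wavelength_nm = 0.154                          # Cu-Kalpha, as in the WAXD setup
two_theta_deg = np.array([20.4, 22.3, 24.7])   # main PEA reflections cited above

theta_rad = np.deg2rad(two_theta_deg / 2.0)
d_nm = wavelength_nm / (2.0 * np.sin(theta_rad))

for tt, d in zip(two_theta_deg, d_nm):
    print(f"2theta = {tt:.1f} deg -> d = {d:.3f} nm")
```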
Conclusions The novel quaternary biodegradable polymer composites of PEA/PEO/PVAc/PCL-CD-IC were investigated in this study. Among studies of quaternary biodegradable polymer composites, this work was the first to use a supramolecular inclusion complex to enhance crystallization behaviors. An inclusion complex, PCL-CD-IC, was successfully prepared and studied by DSC, FTIR, and 1H NMR. The PCL-CD-IC was then used to promote the crystallization rate of the quaternary PEA/PEO/PVAc/PCL-CD-IC composites. We found that PCL-CD-IC can be well dispersed in the quaternary composites of PEA/PEO/PVAc/PCL-CD-IC. In the analyses of isothermal crystallization by the Avrami equation, the rate constant k increased when the content of PCL-CD-IC was increased, indicating that PCL-CD-IC promoted the crystallization rate of PEA in the composite. According to the WAXD results, the quaternary composites of PEA/PEO/PVAc/PCL-CD-IC displayed diffraction patterns similar to neat PEA. The POM observations revealed that the nucleation density of the quaternary composites can be significantly increased in the presence of PCL-CD-IC. The PCL-CD-IC, with its supramolecular structure, can effectively promote the crystallization of the novel biodegradable polymer composites composed of PEA, PEO, PVAc, and PCL-CD-IC.
The composites investigated in this study may have the potential to be applied in biodegradable plastics and materials for biomedical and agricultural end uses.
4,411.4
2020-12-12T00:00:00.000
[ "Materials Science" ]
Nuclei- and condition-specific responses to pain in the bed nucleus of the stria terminalis The bed nucleus of the stria terminalis (BST) is a basal forebrain structure considered to be part of a cortico-striato-pallidal system that coordinates autonomic, neuroendocrine and behavioural physiological responses. Recent evidence suggests that the BST plays a role in the emotional aspect of pain. The objective of the present study was to further understand the neurophysiological bases underlying the involvement of the BST in the pain experience, in both acute and chronic pain conditions. Using c-Fos as an indicator of neuronal activation, the results demonstrated that a single toe-pinch in rats produced nuclei- and condition-specific neuronal responses within the anterior region of the BST (antBST). Specifically, acute noxious stimulation increased c-Fos in the dorsal medial (dAM) and fusiform (FU) nuclei. Chronic neuropathic pain induced by chronic constriction injury (CCI) of the sciatic nerve decreased the number of c-Fos positive cells following acute mechanical stimulation in the dAM and FU nuclei, and increased c-Fos immunoreactivity in the ventral medial (vAM) aspect of the BST. In addition, the results revealed a nuclei-specific sensitivity to the surgical procedure. Following noxious stimulation in animals that received a sham surgery, c-Fos immunoreactivity was blunted in the FU nucleus while it increased in the oval (OV) nucleus of the BST. Altogether, this study demonstrates that pain induces nuclei- and condition-specific neuronal activation in the BST, revealing an intriguing supraspinal neurobiological substrate that may contribute to the physiology of acute nociception and the pathophysiology of chronic pain.

Introduction Pain is more than a sensory-discriminative experience: it has emotional and cognitive aspects as well. Emotion-induced increases in nociceptive thresholds are necessary for several critical physiological functions, including childbirth and escape from predators (Jorum, 1988). Similarly, emotion-induced hyperalgesia is adaptive since it promotes protection of injured tissue and hence allows time for healing (Imbe et al., 2006). Like many other physiological functions, there is a fine line between normal regulation (of nociception) and the development of pathophysiological states. Sustained analgesia makes individuals vulnerable to tissue damage, whereas hyperalgesia can, under certain circumstances, contribute to persistent or chronic pain conditions (Urban and Gebhart, 1999; Vanegas and Schaible, 2004). Furthermore, pathological emotional states such as anxiety impair normal nociceptive and healing processes (Imbe et al., 2006).
Evidence suggests that modulation of nociception originates in the brain, although the exact neurobiological bases are still poorly understood. A better understanding of the neurobiological systems involved in the cognitive and emotional aspects of pain will help in developing better approaches to treat and manage chronic pain. In both rodents and humans, supraspinal sites involved in the negative emotional component of pain include the anterior cingulate cortex and the amygdala (Johansen and Fields, 2004; Johansen et al., 2001; Neugebauer and Li, 2002; Neugebauer et al., 2004; Rainville et al., 1997). Evidence also points to the bed nucleus of the stria terminalis (BST) as a critical brain site contributing to the physiological manifestation of the emotional aspect of pain. The BST is a basal forebrain structure consisting of 14 distinct nuclei that forms, with the amygdaloid nuclei, a complex referred to as the extended amygdala (Alheid et al., 1995). The anterior region of the BST (antBST) comprises 8 distinct nuclei: the fusiform (FU), dorsal and ventral anteromedial (dAM and vAM), dorsal and ventral anterolateral (dAL and vAL), oval (OV), rhomboid (RH) and juxtacapsular (JX) nuclei. Efferent and afferent projections of the antBST suggest a role in coordinating neuroendocrine, autonomic, and somatomotor responses that, together, could contribute to the peripheral manifestations of emotions (LeDoux, 2000).

Evidence for the contribution of the BST to the emotional aspect of pain is currently twofold: anatomically, the BST receives afferents from a subpopulation of glutamatergic C-fibers (IB4 positive) that terminate primarily at limbic targets (BST, globus pallidus, hypothalamus) rather than sensory-discriminative regions of the brain (lateral thalamus, somatosensory cortex) (Braz et al., 2005). Physiologically, BST lesions block pain-induced conditioned place aversion, a measure of the emotional aspect of pain in rats (Deyama et al., 2007).

The objective of the present study was to further understand the neurophysiological bases underlying the involvement of the BST in both acute and chronic pain conditions. Using c-Fos as an indicator of neuronal activation, we found that a single toe pinch, in rats, produced nuclei- and condition-specific neuronal activation within the BST.

Animals Adult male Sprague-Dawley rats weighing 180-220 grams at the beginning of the experiments (Charles River Canada, Montréal, Québec) were housed in pairs on a reverse 12-hour light/dark cycle in a temperature-controlled setting with free access to standard rat chow and water. Experiments were conducted in the afternoon and early evening. The rats were acclimatized to the animal facility for no less than 3 days prior to any experimental procedures or surgical manipulation. Animal protocols were approved by the Queen's University Animal Care Committee in accordance with the guidelines set by the Canadian Council on Animal Care. All efforts were made to keep the number of animals used and their suffering to a minimum.

On testing day, rats were taken from the animal care facility and brought into the laboratory, where they remained in their home cage for a minimum of 1.5 h prior to testing. Lights were turned off in the laboratory so as not to affect the circadian rhythms established by the reversed light/dark cycle.
Surgical procedures Chronic constriction injury (CCI) of the sciatic nerve was used to induce neuropathic pain (Bennett and Xie, 1988). Rats were deeply anesthetized with isoflurane (2.5%, by inhalation). Upon absence of a tail flick response, a small incision was made at the mid-thigh of the left hind limb to expose the underlying muscle tissue, and blunt dissection was used to expose the sciatic nerve. Connective tissue was carefully removed from the sciatic nerve, and 4 loose ligatures of chromic gut suture (CP Medical, Portland, OR) were knotted around the nerve proximal to its trifurcation point. Sutures were ligated approximately 1 mm apart, and care was taken to ensure that the sutures did not disrupt perineural blood flow. Muscle and skin were sutured shut with monocryl (Ethicon, Somerville, NJ). Sham rats received the same surgical treatment, but the sciatic nerve was not manipulated. Before anesthesia, rats were orally administered 0.5 ml of liquid children's Tylenol (1.7 mg/kg; McNeil, Fort Washington, PA). Pre-operatively, the rats received a subcutaneous injection of 5 ml of lactated Ringer's solution (LRS), 0.013 ml/100 g Tribissen antibiotic, and eye gel (Novartis, Mississauga, Ontario, Canada). On the following day, rats were given the same dose of children's Tylenol and LRS. Eleven days following CCI, all animals displayed ipsilateral hind paw deformities and contralateral paw favouring, two reliable characteristics of a successful CCI. All animals used in this study displayed both phenotypes and, as such, were considered neuropathic. In previous experiments conducted in our laboratory in which Von Frey mechanical allodynia scores were assessed, a very high percentage of CCI animals recorded significantly reduced thresholds to mechanical stimulation (Holdridge and Cahill, 2007).

Experimental procedure A flat-tipped, 1″ fixed-gauge metal alligator clip (from a local electronics shop) applied to the 4th knuckle of the left hind paw was used to evoke an acute noxious injury in this experiment. Prior to the onset of the noxious stimulus, rats were wrapped in a towel with their left hind limb exposed. The stimulus was applied until vocalization (approximately two seconds).

Experiment 1 assessed the effect of acute noxious stimulation on c-Fos-IR in the BST. To determine whether acute noxious (toe pinch, TP) or innocuous (light touch, LT) mechanical stimulation (S+) caused neuronal activation in the BST, rats were randomly assigned to one of the following conditions. Condition 1 served as a control group, in which rats did not receive the stimulus (S−). Groups were divided based on the time elapsed between TP or LT stimulation and euthanasia. Rats in the toe pinch S+1 h group (TP S+1) received a toe pinch, were returned to their home cage and were euthanized 1 h following the pinch. TP S+2 and TP S+5 were euthanized 2 and 5 h following toe pinch, respectively. Two additional groups receiving LT rather than TP were included to confirm that neuronal activation evident in the BST was in fact a result of noxious stimulation: LT S+1 and LT S+2. Using the same protocol as the noxious stimulation groups, rats were taken from the home cage and wrapped in a towel, and the alligator clip, set in the open position, was lightly rubbed against the 4th knuckle of the left hind paw for 2 s before the animal was returned to its home cage.
Experiment 2 assessed the effect of CCI on noxious stimulus-induced neuronal activation. Seventeen rats received a CCI of the left common sciatic nerve and were randomly divided into 3 groups following the same protocol of noxious stimulation as described in experiment 1: CCI S−, CCI S+1 and CCI S+2. Additionally, 18 sham-operated rats were divided into 3 groups, where Sham S− received a sham surgery and no stimulation, Sham S+1 received a toe pinch and were euthanized one hour following S+, and Sham S+2 received a toe pinch and were euthanized two hours following mechanical nociceptive stimulation. Since a reliable decline in c-Fos expression was observed in control conditions 5 h following noxious stimulation, a five-hour time point was not included in the CCI experiments in order to keep the number of animals used in this study to a strict minimum.

After the experimental manipulations, rats were deeply anesthetized with sodium pentobarbital (70 mg/kg) and perfused via the aortic arch with 500 ml of 4% paraformaldehyde (PFA) in 0.1 M phosphate buffer (PB), pH 7.4, at 4 °C. Brains were extracted and post-fixed for 24 h at 4 °C in 4% PFA solution. Following post-fixation, brains were cryoprotected in 30% sucrose made in 0.1 M PB overnight or until they sank.

Brains were mounted on a freezing sledge microtome and 30-micron transverse slices were obtained, starting at the most rostral point of the BST as outlined in Swanson (2005) and concluding when no more BST was evident. Every second slice was collected in 0.1 M TBS (Trizma® base solution, pH 7.4) for immunohistochemical analysis.

Immunohistochemical detection of c-Fos Immunohistochemical detection of c-Fos was used as an indirect marker of neuronal activation (Herrera and Robertson, 1996). Following sectioning, free-floating brain slices were incubated in a solution consisting of 0.3% H2O2 and TBS for 10 min to reduce endogenous peroxidase activity. Following 3 five-minute washes with TBS, sections were incubated in a solution containing 5% BSA and 0.1% H2O2 in TBS-T (0.1 M TBS with Triton X-100) at 4 °C for 2 h to reduce non-specific immunolabeling. Following incubation with blocking solution, sections were incubated at 4 °C overnight in a 1:5000 dilution of rabbit anti-c-Fos (Lot 124 K4881, Sigma, St. Louis, MO) prepared in TBS-T and 1% BSA. Sections were then incubated in biotinylated goat anti-rabbit IgG (1:1500; Vector Laboratories, Burlingame, CA), and c-Fos labeling was amplified with the avidin-biotinylated horseradish peroxidase complex method (ABC; Vector Laboratories, Burlingame, CA) followed by revelation with 3,3′-diaminobenzidine (DAB) solution (0.15 mg/ml) in TBS.

Discussion The present study reveals that acute mechanical noxious stimulation increases nuclear c-Fos immunoreactivity in the anterior region of the BST. Since c-Fos is a widely used indicator of neuronal activation in the central nervous system, we suggest that noxious stimulation induces neuronal activation in the BST. More specifically, noxious stimulation-induced c-Fos expression was restricted to specific sub-regions of the antBST and was modified by chronic neuropathic pain or, in certain cases, by the sham surgical procedure. Given that the BST is thought to contribute to the emotional rather than the sensory-discriminative aspect of pain, our results may provide new insights into the neuronal bases of the emotional aspect of pain.
Noxious stimulation-induced neuronal activation in the BST We measured c-Fos immunoreactivity in response to acute noxious mechanical stimulation in normal Sprague-Dawley rats, or in rats that developed peripheral neuropathy following chronic constriction injury of the sciatic nerve. We observed that acute mechanical noxious stimulation increased c-Fos immunoreactivity in the BST and that the response returned to baseline a few hours after the stimulus, which is consistent with the transient expression of c-Fos (Herrera and Robertson, 1996). This is the first study to demonstrate that noxious stimulation triggers nuclei-specific c-Fos expression in the BST. Since this study evaluated each nucleus of the anterior BST specifically, we observed c-Fos expression in response to acute noxious stimulation in only two (FU and dAM) of the eight antBST nuclei.

We observed altered acute noxious stimulation-induced c-Fos expression in chronic neuropathic pain conditions. Chronic nerve constriction completely blunted mechanical pain-induced expression of c-Fos in some BST regions (dAM, FU) whilst stimulating expression in previously silent regions (vAM). Similarly, we saw that the sham surgical procedure altered acute noxious stimulation-induced c-Fos expression in certain regions of the antBST (increased in OV and decreased in FU). It is unclear how these modifications in c-Fos expression alter antBST activity; however, given the strong correlation between noxious stimuli-induced c-Fos expression and neuronal activation, at least in the dorsal horn of the spinal cord, the most plausible hypothesis is that the change in c-Fos expression is due to modifications in neuronal responding, habituation or sensitization in the BST (Bullitt, 1990; Coggeshall, 2005). Alternatively, noxious information from the periphery to the BST might be modified after chronic pain or sham surgery. Alterations in the signaling pathways leading to c-Fos expression could also have affected c-Fos immunoreactivity. There is, however, little evidence in the literature suggesting alterations in the signaling pathways leading to c-Fos protein expression, whereas abundant evidence supports underlying modifications in neuronal activity with associated changes in behaviours (Bullitt, 1990; Coggeshall, 2005). It is unclear why the pain-induced c-Fos-IR was blunted in the FU in both CCI and sham animals, although both inflammation and simple surgical incisions alter c-Fos expression in the spinal cord (Prewitt and Herman, 1998). It is plausible, then, that sham surgery alone is sufficient to produce lasting changes to c-Fos expression in supraspinal areas, including the BST.

The consequences of altered BST c-Fos expression, neuronal activation, or both on the pain experience are currently unknown. However, a better understanding of how the BST contributes to the pain experience might shed light on the potential impact of these modifications.
Significance of noxious stimulation-induced neuronal activation in the BST We hypothesized that noxious stimulation would trigger neuronal activity, and hence c-Fos expression, in the BST since it receives direct (and potentially indirect) afferents from the dorsal horn of the spinal cord (Braz et al., 2005; Burstein and Giesler, 1989; Cliffer et al., 1991). Our observations confirm that these afferents carry noxious (but not innocuous) information from the periphery to the BST. Noxious stimulation-induced c-Fos expression was sub-region-specific, which reinforces the idea that the BST is a cluster of nuclei with specific functions rather than a homogeneous brain structure. Given the small size of each BST nucleus, it is currently difficult to lesion a single BST nucleus, and thus assessing the behavioural or physiological functions of individual BST nuclei is currently not attainable. Nonetheless, our study demonstrates that the dAM and the FU are regions of the antBST that respond to acute mechanical noxious stimulation and might contribute to the pain experience. Conversely, several antBST nuclei (AL, JX) showed no apparent response to acute noxious insult.

There is evidence in the literature that the BST contributes to the emotional rather than the sensory-discriminative or cognitive components of pain. Lesions to the antBST block pain-induced conditioned place aversion in rodents, a measure of the emotional component of pain in rats (Deyama et al., 2007). Furthermore, we saw that noxious stimulus-induced c-Fos expression was bilateral in the BST, an observation consistent with brain systems that mediate emotional aspects of pain (Bernard and Besson, 1990; Chudler and Dong, 1995; Chudler et al., 1993). In contrast, sensory-discriminative brain regions such as the lateral thalamus or the sensory cortex display lateralization (usually contralateral to the stimulus).

The projection pattern of the AM aspect suggests that it contributes to coordinating neuroendocrine, autonomic, and behavioural or somatic responses associated with maintaining energy balance (Dong and Swanson, 2006a). Similarly, the FU projects along approximately 4 distinct pathways that terminate in the central amygdala, hypothalamus (PVH), midbrain (ventral tegmental area) and lower brainstem (lateral periaqueductal gray, raphe) (Dong et al., 2001b). Each of these descending pathways corresponds to important physiological functions triggered by nociceptive stimuli, such as defensive behaviours, descending analgesia, autonomic control of breathing, cardiovascular responses, arousal, and activation of the HPA axis (Choi et al., 2007; Dick and Coles, 2000; Rossi et al., 1994; Satoh and Fibiger, 1986; Vertes, 1991). Given this broad projection pattern of each BST sub-nucleus, noxious stimuli-induced activation of the BST should result in generalized physiological responses consistent with an emotional response.

Conclusions The neurobiological construct of the emotional aspect of pain is at the supraspinal level and more specifically involves brain regions such as the anterior cingulate cortex, the amygdala, and the BST (Borszcz, 2006; Deyama et al., 2007; Johansen et al., 2001; Neugebauer et al., 2004; Rainville et al., 1997). The BST is ideally located, anatomically and functionally, to receive incoming noxious information, process this information along with additional incoming neocortical (prefrontal cortex) and other limbic structure (amygdala, hippocampus) information, and relay this message to the periphery (midbrain and brainstem) (Dong et al., 2000; Dong et al., 2001a; Dong et al., 2001b; Dong and Swanson, 2003, 2004a, b, 2006a, b, c).
This study reveals that the BST is responsive to pain stimuli in a nuclei- and condition-specific manner and that not all aspects of the BST contribute to the pain response. Given its afferents, the BST could be critically involved in the resulting autonomic, neuroendocrine, and behavioural responses observed after noxious insults and thus could contribute to the peripheral manifestation of emotions (LeDoux, 2000). Our observation that chronic conditions such as peripheral neuropathy or, in some cases, a simple surgical procedure can modify neuronal activity in the BST is intriguing. These findings could lead to a better understanding of the role of supraspinal sites in the pathophysiology of chronic pain.

Fig. 2. Effect of mechanical noxious stimulation on c-Fos-IR in the dAM nucleus of the BST. Top and middle panels: photomicrographs representing c-Fos immunoreactivity in no-stimulation (S−) or one hour after (S+) toe-pinch in acute, sham, and CCI rats (from left to right). White arrows indicate specific nuclear expression of c-Fos. Black arrows indicate non-specific cytoplasmic labeling. Scale bar = 500 μm. Bottom panel: bar histogram representing the number of c-Fos expressing neurons as a function of time after toe-pinch in acute, sham, and CCI animals. Numbers above bars indicate number of rats. Error bars represent SEM.

Fig. 3. Effect of mechanical noxious stimulation on c-Fos-IR in the FU nucleus of the BST. Top and middle panels: photomicrographs representing c-Fos immunoreactivity in no-stimulation (S−) or one hour after (S+) toe-pinch in acute, sham, and CCI rats (from left to right). White arrows indicate specific nuclear expression of c-Fos. Black arrows indicate non-specific cytoplasmic labeling. Scale bar = 250 μm. Bottom panel: bar histogram representing the number of c-Fos expressing neurons as a function of time after toe-pinch in acute, sham, and CCI animals. Numbers above bars indicate number of rats. Error bars represent SEM.

Fig. 4. Effect of mechanical noxious stimulation on c-Fos-IR in the vAM nucleus of the BST. Top and middle panels: photomicrographs representing c-Fos immunoreactivity in no-stimulation (S−) or one hour after (S+) toe-pinch in acute, sham, and CCI rats (from left to right). White arrows indicate specific nuclear expression of c-Fos. Black arrows indicate non-specific cytoplasmic labeling. Scale bar = 500 μm. Bottom panel: bar histogram representing the number of c-Fos expressing neurons as a function of time after toe-pinch in acute, sham, and CCI animals. Numbers above bars indicate number of rats. Error bars represent SEM.

Fig. 5. Effect of mechanical noxious stimulation on c-Fos-IR in the other nuclei of the antBST. Number of c-Fos expressing neurons as a function of time after toe-pinch in acute, sham, and CCI animals. Number of animals per group is the same as displayed in dAL. Error bars represent SEM.
4,568.2
2008-04-01T00:00:00.000
[ "Biology" ]
Defect Detection in Tire X-Ray Images Using Weighted Texture Dissimilarity Automatic defect detection is an important and challenging problem in industrial quality inspection. This paper proposes an efficient defect detection method for tire quality assurance, which takes advantage of the feature similarity of tire images to capture anomalies. The proposed detection algorithm consists of three main steps. Firstly, the local kernel regression descriptor is exploited to derive a set of feature vectors of an inspected tire image. These feature vectors are used to evaluate the feature dissimilarity of pixels. Next, the texture distortion degree of each pixel is estimated by weighted averaging of the dissimilarity between one pixel and its neighbors, which results in an anomaly map of the inspected image. Finally, the defects are located by segmenting this anomaly map with a simple thresholding process. Unlike some existing detection algorithms that fail to work for tire tread images, the proposed detection algorithm works well not only for sidewall images but also for tread images. Experimental results demonstrate that the proposed algorithm can accurately locate the defects in tire images and outperforms traditional defect detection algorithms in terms of various quantitative metrics.

Introduction Due to unclean raw materials and imperfect manufacturing facilities used in the tire manufacturing process, tire components may be contaminated by various defects, such as metallic or nonmetallic impurities (e.g., steel threads, screws, and plastic fragments), bubbles, and overlaps. When a vehicle with a defective tire travels at high speed, these defects often lead to a blowout of the tire. Therefore, nondestructive defect detection based on X-ray imaging is essential for tire quality assurance. The traditional quality inspection process is mostly performed by human inspectors, which often yields inaccurate or missed detections due to visual fatigue and leads to low efficiency with high labor costs [1]. As a result, computer-vision-based detection techniques have become an important and efficient tool to improve product quality and increase manufacturing efficiency [2].

Automatic quality inspection of industrial products has been a popular research topic in the image processing and computer vision communities. Many methods based on different theories, such as texture analysis and spectral analysis, have been proposed to address the limitations of manual inspection. In the texture-based detection methods, defects are detected by comparing the texture features of different image patches. Therefore, a key issue for such methods is texture feature extraction. Latif-Amet et al. [3] used subband co-occurrence matrices (CM) to characterize texture features of multiscale subbands of inspected images. A major disadvantage of CM is the high computational complexity for large images. In [4], Tajeripour et al. proposed a defect detection method that applies the local binary pattern (LBP) to extract texture features. However, for low-quality images, LBP has poor feature-description performance.
Meanwhile, there also exist many transform-based methods for defect detection. Due to its capability for singularity analysis and its noise immunity, the wavelet transform is well suited to finding the location of anomalies in textured images. Tsai and Hsiao [5] proposed a wavelet-based method for automatic surface inspection, which generates an image with enhanced local anomalies by reconstructing selected wavelet coefficients according to a synthesis strategy. Serdaroglu et al. [6,7] used independent component analysis (ICA) and topographic ICA to generate feature vectors of wavelet subbands; the defects are then detected according to the Euclidean distance between feature vectors. In addition, the Gabor transform is a popular tool for extracting local frequency information from textured images. In [8], Kumar and Pang utilized Gabor filters to detect fabric defects, in which a foreground image is first extracted and the defects are then located by segmenting it directly. However, this method is sensitive to the choice of filter parameters. Although optimized filters have been developed for defect detection in [9], they have a high computational cost. Unlike the above methods that use fixed basis functions to represent images, Tsai et al. [10] proposed a method for defect detection in solar modules, which uses ICA to learn a set of basis functions from defect-free training images. Each image under inspection can be represented as a linear combination of the learned basis functions, and defects are detected by analyzing the reconstruction error between the image under inspection and the image reconstructed from the representation coefficients. Similarly, detection methods based on sparse representation have been introduced in different application fields [11,12]. These methods use a sparsity constraint to learn an adaptive representation dictionary from test images. Unfortunately, the learning process is computationally expensive.

Most existing automatic inspection techniques focus on textile [1][2][3][4], solar wafer [10,13], and flat steel [14] products. Recently, a few research works on tire defect detection have been reported in the literature. In [15], Guo and Wei proposed a detection method based on the image component decomposition (ICD) technique, which exploits local total variation filtering and vertical mean filtering to separate defects from inspected images. From the point of view of edge detection, Zhang et al. [16,17] detected defects by using the multiscale geometric transform and an edge detection operator. Based on the dictionary representation technique, Xiang et al. [18] proposed a dictionary-based detection method for tire defects by analyzing the distribution of representation coefficients. However, these tire defect detection methods were designed only for tire sidewall images. Consequently, all of them fail to work for tire tread images due to their complex structures. Although a density-projection-based method [19] was presented for detecting defects in tire tread images, it only provides the orientation of defects and cannot accurately locate them.
The reason for the weakness of the above methods is that they do not effectively capture the texture distortions of images. To address this problem, a simple and efficient detection method is proposed in this paper, which takes advantage of the feature similarity of tire images. Specifically, for an inspected image, the proposed method first estimates the texture distortion degree of each pixel by weighted averaging of the dissimilarity between this pixel and its neighbors within a local window, producing an anomaly map of the inspected tire image. Then the defects are located by segmenting this anomaly map with a simple thresholding function. Experimental results on tire X-ray images show that the proposed method can effectively detect defects both in the tire sidewall and in the tire tread. To the best of our knowledge, this is the first work that can accurately locate defects in tire tread images. In addition, as the computational core of the proposed algorithm, computing the feature dissimilarity of image pixels can be implemented independently in parallel, which makes the proposed method feasible for online tire inspection.

Characteristics of Tire Images Due to imperfect raw materials and manufacturing processes, the tire sidewall and tire tread may contain various types of defects such as impurity, bubble, and overlap. Figure 1 shows the main defects of the tire sidewall and tire tread encountered in quality inspection. Figure 1(a) displays a tire sidewall image that contains a metallic impurity, in which the impurity region is darker than its neighbors. On the contrary, the bubble on the tire sidewall shown in Figure 1(b) is brighter than the surrounding pixels. Figure 1(c) is an example of an overlap, which results in an irregular texture. Figures 1(d) and 1(e) further show two tire tread images, which contain an impurity and an overlap, respectively.

From Figure 1, it is observed that tire tread images contain more complicated textures with lower contrast than tire sidewall images, which makes it difficult to detect defects in the tire tread. It is further noted that both tire sidewall images and tire tread images are dominated by a texture with high regularity, and defects locally break this regularity. The regularity generally implies a similarity between pixel features. Therefore, the defects can be located by analyzing the feature dissimilarity between a pixel and its surrounding ones. In the following section, the proposed detection method based on feature dissimilarity analysis is described in detail.
The Proposed Method The similarity between pixels of a textural image implies an implicit dependency between one pixel and its neighbors. Therefore, a pixel can be represented as a weighted linear combination of the surrounding pixels. Based on these considerations, we propose a detection method that uses weighted texture dissimilarity to measure perceptual texture distortion, in which the distortion of a pixel is defined as the dissimilarity between the original feature value and the represented one. Figure 2 shows a block diagram of the proposed method, which consists of three main steps: extracting texture features, evaluating structural dissimilarity, and segmenting the defects. Texture features of each pixel are extracted using the local kernel regression (LKR) descriptor [20] and represented as feature vectors. Then the anomaly of each pixel is determined by weighting the dissimilarity between this pixel and its local neighbors. Finally, the defects are located by segmenting the anomaly map with a simple thresholding process.

In order to determine the distortion of each pixel, we introduce an anomaly value for each pixel to quantify its distortion. Specifically, the anomaly value of a pixel $i$ in an image is defined as

$$A(i) = \sum_{j \in N_i} w_{i,j}\, d(i, j), \qquad (1)$$

where $w_{i,j}$ is a weight determined from the input data, $N_i$ is a spatial window centered at pixel $i$, and $d(i, j)$ is a feature dissimilarity metric between pixels $i$ and $j$. The key issue in calculating the anomaly value $A(i)$ is how to determine the feature dissimilarity metric $d(i, j)$. Therefore, we first describe the feature descriptor adopted to represent the texture structure of inspected images, and then the definitions of $d(i, j)$ and $w_{i,j}$.

3.1. Texture Feature Extraction. Different from intensity-based similarity metrics, the proposed method uses the feature correlation as a quantitative measure of the similarity between pixels and their neighbors, motivated by two reasons: on the one hand, texture feature extraction of inspected images is essential for measuring perceptual texture distortion and is an efficient way to avoid the negative influence of illumination changes on similarity measurement. On the other hand, the correlation metric used to calculate patch similarity has the advantage of being insensitive to outliers. Due to its robustness to noise and other perturbations, we adopt the LKR descriptor to derive a set of feature vectors of an image; it acquires the textural structure of images by analyzing the local gradient information. Unlike the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG), which quantize oriented gradients to reduce computational cost, LKR computes the texture feature between oriented gradients without the quantization step, which leads to better feature description power. In general, LKR is more invariant to shift and rotation transformations than conventional feature descriptors based on gradients and key points. For a pixel at position $\mathbf{x}_i$, the LKR feature vector is defined through the self-similarity between this pixel and its neighbors, which takes the form

$$f_i(l) = \exp\!\left(-\frac{(\mathbf{x}_l - \mathbf{x}_i)^{T}\,\mathbf{C}_l\,(\mathbf{x}_l - \mathbf{x}_i)}{2h^{2}}\right), \quad l = 1, \dots, P, \qquad (2)$$

where $\mathbf{x}$ denotes the spatial coordinates, $P$ is the number of pixels in a local window $\Omega$ with size $\sqrt{P} \times \sqrt{P}$, $h$ is a smoothing parameter, and $\mathbf{C}$ is a gradient covariance matrix. In the numerical calculation, $\mathbf{C}$ is estimated by averaging a collection of spatial gradient vectors within the local window and can be written as

$$\mathbf{C}_l = \sum_{\mathbf{x}_k \in \Omega_l} \begin{bmatrix} z_1^2(\mathbf{x}_k) & z_1(\mathbf{x}_k)\, z_2(\mathbf{x}_k) \\ z_1(\mathbf{x}_k)\, z_2(\mathbf{x}_k) & z_2^2(\mathbf{x}_k) \end{bmatrix}, \qquad (3)$$

where $z_1(\cdot)$ and $z_2(\cdot)$ are the first derivatives along the $x_1$ and $x_2$ directions, respectively.
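As a rough illustration of eqs. (2) and (3), the following NumPy sketch builds the gradient covariance matrix and the kernel self-similarity vector for each pixel; the window radius and the smoothing parameter h are illustrative choices, and the normalization used in the full formulation of Takeda et al. [20] is omitted for brevity.

```python
import numpy as np

def lkr_features(img, radius=2, h=1.0):
    """Sketch of the LKR (steering-kernel) descriptor of eqs. (2)-(3).

    For every interior pixel x_i it returns the vector of self-similarities
    over the (2*radius+1)**2 window, using the gradient covariance matrix C
    accumulated over the same window; radius and h are illustrative choices.
    """
    z1, z2 = np.gradient(img.astype(float))   # first derivatives, eq. (3)
    H, W = img.shape
    r = radius
    feats = np.zeros((H, W, (2 * r + 1) ** 2))
    for i in range(r, H - r):
        for j in range(r, W - r):
            g1 = z1[i - r:i + r + 1, j - r:j + r + 1].ravel()
            g2 = z2[i - r:i + r + 1, j - r:j + r + 1].ravel()
            # Gradient covariance matrix C, eq. (3).
            C = np.array([[np.sum(g1 * g1), np.sum(g1 * g2)],
                          [np.sum(g1 * g2), np.sum(g2 * g2)]])
            k = 0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    dx = np.array([di, dj], dtype=float)
                    # Kernel self-similarity, eq. (2).
                    feats[i, j, k] = np.exp(-dx @ C @ dx / (2.0 * h * h))
                    k += 1
    return feats
```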
From (3) it is seen that $\mathbf{C}$ can be interpreted as an average of geodesic distances in a patch; thus it is robust to noise and other perturbations. In addition, as a key part of LKR, $\mathbf{C}$ can capture the singularities of images, which makes LKR suitable for describing the texture features of images. For complete details, we refer the interested reader to the work of Takeda et al. [20]. In order to validate the effectiveness of LKR, the LKR magnitudes of two defective tire images are displayed in Figure 3 (for an LKR feature vector $\mathbf{f} = (f_1, f_2, \dots, f_P)$, we define its LKR magnitude as the square root of the sum of the squares of its elements, i.e., $\|\mathbf{f}\|_{LKR} = \sqrt{f_1^2 + f_2^2 + \cdots + f_P^2}$). It can be observed that the LKR descriptor effectively captures the local texture distortion of tire images.

3.2. Feature Dissimilarity Evaluation. In general, a pixel-based dissimilarity metric $d(i, j)$ is quite sensitive to noise, so the resulting anomaly values of pixels are unstable in the presence of noise. Instead of using a single pixel for similarity measurement, the proposed method exploits a patch-based metric to improve the robustness of the similarity measurement (patch-based metrics have been extensively used in the image processing community, and their robustness against noise has been demonstrated in the literature; see [21,22] for some application examples). Thus the anomaly model (1) can be generalized as

$$A(i) = \sum_{j \in N_i} w_{i,j}\, d(P_i, P_j), \qquad (4)$$

where $P_i$ and $P_j$ are the patches of pixels centered at $i$ and $j$, respectively. To simplify notation, we write $d_{i,j}$ instead of $d(P_i, P_j)$.

In (4), the dissimilarity metric $d(\mathbf{x}, \mathbf{y})$ plays a significant role in computing the anomaly. In this paper, we exploit a correlation metric based on the LKR feature descriptor to measure the dissimilarity between image patches, defined as

$$d(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\frac{\mathrm{Corr}(F(\mathbf{x}), F(\mathbf{y}))}{\sigma_1}\right), \qquad (5)$$

where

$$\mathrm{Corr}(\mathbf{x}, \mathbf{y}) = \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{\sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}\,\sqrt{\langle \mathbf{y}, \mathbf{y} \rangle}}, \qquad (6)$$

$F(\cdot)$ denotes the LKR feature extraction operator, and $\sigma_1$ is an adjustment parameter. The correlation defined in (6), also called the cosine similarity, measures the cosine of the angle between $\mathbf{x}$ and $\mathbf{y}$, because the inner product $\langle \mathbf{x}, \mathbf{y} \rangle$ depends exclusively on the angle between the two vectors. Owing to its robustness to noise, the cosine similarity outperforms conventional metrics for similarity measurement, such as the Euclidean (L2) or Manhattan (L1) distance [23]. In fact, the cosine similarity is the uncentered Pearson correlation coefficient, which avoids subtracting the vector means and permits a simpler calculation. We evaluated the performance of the cosine similarity against the Pearson correlation coefficient and found them to be very close; we therefore use the cosine similarity to measure the similarity of LKR feature vectors.

In essence, (1) amounts to applying a weighted filter to the dissimilarity values between pixels. The simplest form of a weighted filter is the uniform weight, which assigns the same weight to all data (for example, all weights set to 1); however, the uniform weight generally leads to an over-smoothed result. Various weight functions exist in the literature, such as the Gaussian, Geman-McClure, and Hebert-Leahy weights [24]. For calculating the anomaly map, the weight value should be inversely proportional to the degree of dissimilarity. Thus, we adopt the following weight function for its simplicity:

$$w_{i,j} = \frac{d_{\min}}{d_{i,j} + \varepsilon}, \qquad (7)$$

where $d_{\min} = \min_{j \in N_i} d_{i,j}$ and $\varepsilon = 0.05$ is a small constant that avoids division by zero.
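The following sketch assembles eqs. (4)-(7) for a single pixel. The exponential mapping from the correlation (6) to the dissimilarity (5) should be read as one plausible choice (low correlation yielding high dissimilarity) rather than the definitive form; σ1 and ε follow the values quoted in this paper.

```python
import numpy as np

def cosine_corr(x, y):
    # Corr(x, y) = <x, y> / (sqrt(<x, x>) * sqrt(<y, y>)), eq. (6)
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def anomaly_value(feats, center, neighbors, sigma1=0.15, eps=0.05):
    """Weighted texture dissimilarity of one pixel, eqs. (4)-(7).

    feats maps a pixel index to the stacked LKR feature vector of the patch
    around it; the exponential dissimilarity is an assumed reading of eq. (5).
    """
    x = feats[center]
    # Patch dissimilarities d_ij (assumed form of eq. (5)).
    d = np.array([np.exp(-cosine_corr(x, feats[j]) / sigma1) for j in neighbors])
    # Data-driven weights, eq. (7): inversely proportional to dissimilarity.
    w = d.min() / (d + eps)
    # Anomaly value, eq. (4).
    return float(np.sum(w * d))
```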
For each pixel $i$, its anomaly value is obtained by substituting (5) into (4). The smaller the value of $A(i)$, the smaller the possibility that $i$ belongs to a defect. Therefore, we can use $A(i)$ as a dissimilarity measure to distinguish defect pixels in the inspected image. In essence, $A$ defines an anomaly map of the image. Anomaly maps of six tire tread images are displayed in Figures 6(b) and 7(b). It can be noted that the entire defect regions are clearly highlighted in the anomaly maps, which facilitates accurate segmentation of defects with a simple segmentation algorithm.

3.3. Defect Segmentation. After generating the anomaly map, defects can be located by way of a thresholding process. Specifically, if the anomaly value of a pixel exceeds a given threshold $\rho$, the pixel is considered part of a defect and its value is set to 1; otherwise it is set to 0. This simple thresholding process can be formulated as

$$B(i) = \begin{cases} 1, & A(i) > \rho, \\ 0, & \text{otherwise.} \end{cases} \qquad (8)$$

The output of this process is a binary image, in which white pixels denote defective regions and black pixels denote defect-free regions.

Although the defects can be detected with the above thresholding process, a small number of defect-free pixels may still be detected as defective. To resolve this problem and improve the robustness of the proposed algorithm, pixels within candidate defective regions are further verified by local variance analysis. The local variance of a pixel $i$ is calculated over a local window centered at $i$:

$$V(i) = \sum_{j \in N_i} g_j \left( I(j) - \mu_i \right)^2, \qquad (9)$$

where $g_j$ is a normalized Gaussian weight and $\mu_i$ is the local mean defined as

$$\mu_i = \sum_{j \in N_i} g_j\, I(j). \qquad (10)$$

By integrating the local variance into (8), the thresholding function is modified to

$$B(i) = \begin{cases} 1, & A(i) > \rho \ \text{and} \ V(i) > \tau, \\ 0, & \text{otherwise,} \end{cases} \qquad (11)$$

where $\tau$ is a given threshold. The complete operational procedure of the proposed detection method is summarized in Algorithm 1 (the proposed detection algorithm).
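A compact sketch of this segmentation step is given below. Combining the anomaly test of eq. (8) and the variance test of eq. (11) with a logical AND is an assumption consistent with the description above, and the Gaussian-weighted statistics implement eqs. (9)-(10); the default parameter values mirror those quoted in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_defects(anomaly_map, img, rho=0.5, var_thresh=100.0, sigma=1.5):
    """Threshold the anomaly map and confirm candidates by local variance.

    rho and var_thresh correspond to the thresholds in eqs. (8)/(11);
    sigma sets the Gaussian window used for the local statistics.
    """
    img = img.astype(float)
    # Gaussian-weighted local mean and variance, eqs. (9)-(10).
    local_mean = gaussian_filter(img, sigma)
    local_var = gaussian_filter(img * img, sigma) - local_mean ** 2
    local_var = np.maximum(local_var, 0.0)  # guard against numerical negatives
    # A pixel is marked defective only if both tests pass, eq. (11).
    return ((anomaly_map > rho) & (local_var > var_thresh)).astype(np.uint8)
```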
Experiments
In order to validate the effectiveness of the proposed algorithm, this section reports experimental results on a test dataset provided by Linglong Tyre Co. Ltd. The dataset consists of 40 defective sidewall images, 20 defect-free sidewall images, 40 defective tread images, and 20 defect-free tread images, covering different defect types such as impurity, bubble, and overlap. The size of all test images is 256 × 256. The performance of the proposed method is evaluated by comparison with the ICD-based method (ICDM) [15] and an improved wavelet-based method (IWaveM) [5]. It should be pointed out that we improved the original wavelet-based method [5] by extracting local wavelet features from small image patches and introducing the vertical mean filtering used in ICDM. These methods are implemented in MATLAB and performed on an Intel i7 1.73 GHz computer system with 8 GB RAM. Although there are other methods for tire inspection in the literature, we cannot comprehensively compare against them here owing to page limitations; in addition, the code of these methods is not available online. In all comparisons, the scale parameter of ICDM is set to 7; IWaveM uses the Daubechies wavelet transform with four vanishing moments over three decomposition levels; and for the proposed method we utilize a 5 × 5 local window and set the patch size to 7 × 7, $T = 0.5$, and $T_v = 100$.

Performance Evaluation.
The performance of the detection methods mentioned above is quantitatively evaluated by the detection rate (DR) [4, 10] and by precision ($P$), recall ($R$), and the F-measure ($F$) [25, 26], which are closely related to ROC analysis [27] and are respectively defined as

$$\mathrm{DR} = \frac{tp + tn}{N}, \qquad P = \frac{\mathrm{TPs}}{\mathrm{TPs} + \mathrm{FPs}}, \qquad R = \frac{\mathrm{TPs}}{\mathrm{TPs} + \mathrm{FNs}}, \qquad F = \frac{P \cdot R}{\alpha R + (1 - \alpha) P},$$

where $N$ is the total number of test images, $tp$ is the number of correctly detected defective images, $tn$ is the number of correctly detected defect-free images, TPs is the number of true-positive pixels, FPs is the number of false-positive pixels, FNs is the number of false-negative pixels, and we set $\alpha = 0.5$ as in [28]. $P$ corresponds to the precision of anomaly-pixel detection, while $R$ is the fraction of detected anomaly pixels relative to the ground truth. As an overall performance measure, the F-measure is the weighted harmonic mean of $P$ and $R$ [29].
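These standard measures translate directly into code; a minimal Python sketch follows. The original formulas were garbled in extraction, so the DR and F forms below are standard reconstructions consistent with the variable definitions in the text.

```python
def detection_rate(tp_images, tn_images, n_images):
    """DR: fraction of test images classified correctly (defective or not)."""
    return (tp_images + tn_images) / n_images

def precision_recall_f(tps, fps, fns, alpha=0.5):
    """Pixel-level precision, recall, and weighted harmonic-mean F-measure;
    alpha = 0.5 reduces F to the familiar F1 score."""
    p = tps / (tps + fps)
    r = tps / (tps + fns)
    f = (p * r) / (alpha * r + (1 - alpha) * p)
    return p, r, f
```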
In Table 1, the detection rates of the different detection methods on the test images are listed for quantitative comparison. From this table we observe that the proposed method outperforms ICDM and IWaveM in terms of DR on tire sidewall images; on tire tread images it also achieves a high DR, whereas ICDM and IWaveM are not applicable (N/A) to detecting the defects of tire tread images. To further compare the detection methods, the average precision, recall, and F-measure on tire sidewall images are shown in Figure 4. It is obvious that the proposed method offers better detection performance than ICDM and IWaveM. In general, ICDM and IWaveM perform well on images with simple texture but not on images with complex structures: the low contrast and complicated texture of tire tread images reduce the saliency of defects, which causes them to produce unsatisfactory results.

For visual comparison, Figure 5 shows the detection results of these algorithms on tire sidewall images. As shown in the figure, all detection methods can locate the defects of tire sidewall images, and the proposed method clearly outperforms ICDM and IWaveM, consistent with Table 1 and Figure 4. To demonstrate the effectiveness of our method in detecting the defects of tire tread images, Figures 6 and 7 illustrate the anomaly maps of two defect types (impurity and overlap) in tire tread images and the corresponding detection results produced by our method. It can be observed that the proposed anomaly model is quite sensitive to texture distortion and that the saliency of defects is well highlighted in the anomaly maps.

Parameter Selection.
There are four main parameters in the proposed method: the size of the local window and $h_1$ for feature dissimilarity evaluation, and the thresholds $T$ and $T_v$ for defect segmentation. The parameter $T$ is used to segment the anomaly map, which describes the feature dissimilarity of image pixels; it therefore ranges from 0 to 1. A small value is sensitive to noise and may produce false alarms on defect-free images, whereas a large value results in a loose control limit and may miss subtle defects. Figure 8 shows the performance curves of the proposed method with $T$ varying from 0.1 to 0.9. It can be observed that the proposed algorithm is insensitive to $T$ in the range [0.4, 0.6]. Thus we choose $T = 0.5$ in our experiments as a trade-off between false alarms and missed detections.

For different types of inspected images, the parameter $T_v$ can be determined by manual tuning. We find that its performance curves are similar in shape to those shown in Figure 8. In all experiments, we empirically set $T_v = 300$ for tire sidewall images and $T_v = 80$ for tire tread images, which yields a good overall detection performance.

The size of the local window has an important influence on the detection performance of our method, and the optimal choice of this parameter depends on the size of the defects. By observing defective images, we find that the pixels in the vertical direction are very similar and that defects locally break this similarity. Therefore, a rectangular window of size $(2r + 1) \times 256$ is preferred. In order to evaluate the influence of the window size, Figure 9 shows the curves of the three quantitative metrics $P$, $R$, and $F$ with $r$ varying from 0 to 10. It reveals that our method works well when $r = 2$, that is, with a local window of size 5 × 256. In consideration of detection effectiveness and computational cost, we choose the size 5 × 256 for all experiments. In addition, we find that our algorithm is insensitive to $h_1$, and we empirically set $h_1 = 0.15$.

Computational Cost.
To evaluate the computational cost of the three detection methods, we compare their running times on the test dataset. The CPU implementations of ICDM, IWaveM, and our method require 2.6 s, 1.2 s, and 17.3 s on average, respectively, measured on an Intel Core i7 3.60 GHz. Our method is computationally expensive because of the LKR and dissimilarity-metric computations: in general, the LKR step takes approximately 18% of the execution time, whereas 73% of the time is spent calculating the dissimilarity metric. However, our method is well suited to parallel processing because the LKR features and the dissimilarity metric can be computed independently for each pixel; in practical applications, therefore, it can be accelerated on the GPU.

Conclusions
Automatic quality inspection is strongly desired by the tire industry to take the place of manual inspection. This paper presents an efficient detection method for automatic quality inspection, which takes advantage of the feature similarity of tire images and captures the texture distortion of each pixel by weighted averaging of the dissimilarities between this pixel and its neighbors. Unlike existing tire defect detection algorithms that fail to work for tire tread images, the proposed detection algorithm works well not only for sidewall images but also for tread images. Experimental results on the test dataset validate its effectiveness.

Figure 1: Examples of defective images. (a) Impurity in tire sidewall, (b) bubble in tire sidewall, (c) overlap in tire sidewall, (d) impurity in tire tread, and (e) overlap in tire tread.
Figure 4: Performance evaluation for different methods on sidewall images.
Table 1: Detection rate (%) of different algorithms on test images.
Falling through the social safety net? Analysing non-take-up of minimum income benefit and monetary social assistance in Austria

Non-take-up of means-tested benefits is a widespread phenomenon in European welfare states. The paper assesses whether the reform that replaced the monetary social assistance benefit by the minimum income benefit in Austria in 2010/11 has succeeded in increasing take-up rates. We use EU-SILC register data together with the tax-benefit microsimulation model EUROMOD/SORESI. The results show that the reform led to a significant decrease of non-take-up from 53% to 30% in terms of the number of households and from 51% to 30% in terms of expenditure. Following the three t's (threshold, trigger, and trade-off) introduced by Van Oorschot, estimates of a two-stage Heckman selection model as well as expert interviews indicate that the measures taken include both threshold and trade-off characteristics. Elements such as the higher degree of anonymity within the claiming process, the provision of health insurance, binding minimum standards, the limitation of the maintenance obligations, new regulations related to the liquidation of wealth, as well as the general coverage of the benefit reform in the media and in public discussions, led to improved access to the benefit.

1 | INTRODUCTION

The degree to which benefits reach the desired target groups has become a key performance indicator of social protection programs.
International organizations like the Organization for Economic Cooperation and Development (OECD) and the European Commission call for "well-targeted income-support policies" (OECD, 2011, p. 40) that reach those in need at times when they need support (European Commission, 2013). However, in many well-developed welfare states, means-tested benefits tend to be characterized by access and non-take-up issues, that is, failing to reach the defined target groups and to encourage eligible households to claim financial support (Eurofound, 2015; Matsaganis, Ozdemir, & Ward, 2014; Warin, 2014). The causes of non-take-up are manifold and can be driven by individual concerns and personal moral beliefs of eligible individuals but may also point to a failure of the welfare system. The latter can be caused by nontransparent and complex schemes, poor information, or institutional barriers, which may in turn also strengthen subjective barriers (Eurofound, 2015; Kayser & Frick, 2000). Low take-up distorts the intended welfare effect of targeted social transfers (Bargain, Immervoll, & Viitamäki, 2012) and prevents the welfare state from successfully combating poverty. Especially for benefits of last resort, the consequences of this failure can be severe, as it amplifies disparities within society as well as among eligible clients if some are discouraged from claiming by structural or individual barriers. This can have long-term financial and social consequences, as persistent poverty and precarious financial circumstances contribute, among other things, to chronic health problems and reduce equal opportunities for children growing up in affected households (Eurofound, 2015; Hümbelin, 2016). From a social policy point of view, non-take-up reduces the capacity to anticipate the social outcomes and financial costs of policy reforms because, in the case of high non-take-up rates, the number of benefit recipients is of limited informative value (Engels, 2001; Hernanz, Malherbet, & Pellizzari, 2004; Kayser & Frick, 2000). On the other hand, a less problem-focused interpretation describes non-take-up as a selection process that encourages those with the most prevalent support needs to claim the benefit, whereas it excludes people with less severe needs (Bargain et al., 2012). This, however, assumes that barriers to take-up are solely driven by economic deprivation rather than institutional and societal factors. This paper exploits the 2010/2011 social assistance benefit reform in Austria to analyse how welfare states can shape take-up. The main aim of the policy change was to combat poverty by introducing nationwide binding uniform standards and facilitating access to the benefit. As such, increasing the take-up of the benefit of last resort was an inherent and important part of the reform. Our research aims at analysing whether the chosen measures have improved take-up and had an impact on barriers to claiming the benefit. The analysis offers insights into the target efficiency of the benefit of last resort and evaluates the policy reform. We first compare the size of non-take-up for monetary social assistance in 2009 and the reformed minimum income benefit in 2015 to study the effect of the policy reform on non-take-up. Second, we analyse the social determinants of non-take-up and whether they have changed from one benefit system to the other.
Our analysis furthermore contributes to the existing literature by using register data, which allows us to reduce the potential measurement error in reported incomes, a main source of bias in research on non-take-up of means-tested benefits (Frick & Groh-Samberg, 2007; Hernandez & Pudney, 2007; Matsaganis, Levy, & Flevotomou, 2010). The analysis is based on Austrian European Union statistics on income and living conditions (EU-SILC) data together with the tax-benefit microsimulation model EUROMOD/SORESI, which allows us to simulate the intended effect of the benefit and to compare it with the actual situation. We furthermore apply a mixed-method design to complement the quantitative estimations with a qualitative in-depth analysis of the reform and potential further improvements using expert interviews. The paper is organized as follows: after an introduction of the Austrian benefit of last resort in Section 2 and a literature review on the extent and the determinants of non-take-up in Section 3, Section 4 describes the data and method used for the empirical analysis. Section 5 discusses the results, leading to the conclusions in Section 6.

2 | BENEFIT OF LAST RESORT IN AUSTRIA AND THE 2010/11 REFORM

The Austrian benefit of last resort is a universal benefit in terms of coverage, based on subjective rights and diversified at the local level, in contrast to other European countries with categorical coverage based on rather discretional rights defined at the national level (Crepaldi, da Roit, Castegnaro, & Pasquinelli, 2017). This holds for the social assistance as well as the minimum income benefit that replaced the social assistance scheme in 2010/11. Individuals are legally entitled to the benefit if they lack sufficient means for subsistence and housing from their own resources, resources of their (nuclear) family, from other prior-ranked benefit entitlements, or support through other means. Eligibility for the benefit is conditional on an income- and wealth-based means test as well as on the willingness and availability to work if the beneficiary is of working age and fit for work. The benefit is administered by the nine Austrian Federal States (Länder) and financed by general taxes. A detailed overview of the policy rules and benefit amounts before and after the reform is provided in the Appendix (Tables A2 and A3). The reform of the social assistance benefit in 2010/11 changed the narrative of the benefit by renaming it the minimum income benefit. Although the core of the benefit remained the same, the reform tackled important issues like increasing the benefit amount to the level of the minimum pension top-up, limiting the maintenance obligation to the nuclear family, new regulations related to the liquidation of wealth, and the integration of beneficiaries into the public health insurance scheme (BMASK, 2012; Dimmel & Pfeil, 2014; Dimmel & Pratscher, 2014; Stanzl & Pratscher, 2012). The reform also included improvements on the application side by providing more transparent and accelerated processes, more legal certainty, and increased anonymity, as claims can now be submitted at the district headquarters rather than at municipality offices only. It furthermore promotes a stronger focus on the reintegration of beneficiaries into the labour market. All reform changes together provide a strong argument for a barrier-reducing effect as well as improved take-up of the minimum income benefit.
Indeed, external statistics show clear signs of increases in the number of beneficiaries and government expenditure (Figure 1). In 2009, 174,000 persons, that is, 2.1% of the total population, living in 102,000 households received the benefit of last resort, leading to a total expenditure of EUR 407 million (0.14% of gross domestic product; Pratscher, 2011). Since the reform in 2010/2011, the number of beneficiaries and the total expenditure have steadily increased, up to 284,000 beneficiaries (3% of the total population) living in 168,000 households and EUR 765 million (0.22% of gross domestic product) in 2015 (Pratscher, 2016).

Figure 1: Recipients and expenditure of monetary social assistance/minimum income benefit.

In an international comparison, the generally low number of recipients is driven by a comparably low (long-term) unemployment rate and by the unemployment assistance scheme that provides support for the unemployed after their right to the unemployment insurance benefit has expired. Around 70% of the benefiting households do not receive the full benefit but only the top-up amount between their income from other sources, like unemployment benefits, maintenance payments, or employment income, and the defined minimum income standard (Statistik Austria, 2019). This is due to the relatively high share of precarious employment with low earnings and unemployment benefits below the amount of the social assistance/minimum income benefit. The increase in beneficiaries and expenditure provides another strong argument for an increase in benefit take-up. This increase may, however, simply be an artefact of worsening conditions (unfavourable labour market and economic developments) rather than the outcome of higher take-up rates. Additionally, the increase in the average benefit level (i.e., the defined minimum income standard) rendered more people eligible for the minimum income benefit. The aim of this analysis is to shed light on these different assumptions related to changes in take-up behaviour after the reform.

3 | EXTENT AND DETERMINANTS OF NON-TAKE-UP

Empirical evidence from several European countries shows the considerable magnitude as well as the persistence of the problem of non-take-up of means-tested benefits (Table 1). Estimated rates range between 11% and 79%, with rates above 50% being no exception. In general, non-take-up in terms of claimants is higher than in terms of payments, as households are more likely to claim benefits if they are entitled to higher benefit amounts. A broad body of literature (Anderson & Meyer, 1997; Blank & Ruggles, 1996; Engels, 2001; Eurofound, 2015; Hernanz et al., 2004; Kayser & Frick, 2000; Riphahn, 2001) provides theoretical models of the determinants of (non-)take-up. Among others, Van Oorschot's (1991) "three-t model" (threshold, trigger, and trade-off) presents a theoretical approach that takes various actors (the claimant but also the case worker) and a wide range of factors contributing to non-take-up into account. As such, it provides a good starting point for the empirical analysis and the classification of results. He distinguishes between threshold characteristics, such as information about the benefit and a potential eligibility, and trade-off characteristics, that is, perceptions about one's need and the stability of the situation, but also attitudes towards welfare, the benefit specifically, and the application process. He furthermore introduces the concept of trigger events leading to take-up.
Triggers can be a change in one's personal situation, such as income volatility, but also a more direct influence on the decision to claim through advice and more hands-on information about the benefit. As such, the reform of the benefit itself may also have been a trigger to some claimants. Most empirical studies focus on trade-off characteristics. A basic hypothesis is that households apply for a certain social transfer if the anticipated benefit exceeds the anticipated costs, similar to a cost-benefit equation. This consideration relates to direct as well as indirect costs of applying, including both objective components like the level of the benefit, the expected duration of receipt, information costs (about benefit and eligibility regulations as well as application procedures), administrative costs (e.g., queuing, filling in forms, the need to report detailed information to the welfare agency, and checks on the willingness to accept suitable job offers), and the uncertainty of success (Bruckmeier, Pauser, Walwei, & Wiemers, 2013; Eurofound, 2015; Hümbelin, 2016), as well as subjective motives such as stigmatization, self-esteem, or personal moral beliefs (Frick & Groh-Samberg, 2007; Warin, 2014). Empirical evidence on the covariates of (non-)take-up suggests that participation rates, that is, the share of eligible claimants taking up the benefit, increase with higher degrees of need or deprivation. For households just below the eligibility threshold, the costs of claiming often do not pay off against the utility from receiving the benefit (Bargain et al., 2012; Bruckmeier et al., 2013; Bruckmeier & Wiemers, 2010; Frick & Groh-Samberg, 2007; Hümbelin, 2016; Wilde & Kubis, 2005). Accordingly, administrative costs play an important role for take-up (Currie, 2004), whereas information costs seem to be of minor interest (Bruckmeier & Wiemers, 2010) and only relevant for cases at the margin of eligibility, for example, for individuals owning their home or being self-employed (Bargain et al., 2012). The literature is inconclusive as to what extent stigma and related psychological barriers hamper take-up. Although some show that it significantly affects non-take-up (Frick & Groh-Samberg, 2007; Wilde & Kubis, 2005), others report only small effects (Bruckmeier & Wiemers, 2010; Currie, 2004). Independent of attitudes and economic structure, Hümbelin (2016) finds an effect of population density, which he uses as a proxy for (lacking) anonymity. Additionally, he points to the fact that households in areas with right-wing/conservative political preferences feature higher rates of non-take-up.

Table 1: Estimates of non-take-up of social assistance benefits in Europe. Source: Bruckmeier et al., 2013; Eurofound, 2015; Fuchs, 2009; Hümbelin, 2016; Matsaganis et al., 2014.

Although a distinction between different types of non-take-up is beyond the scope of the current analysis and available data, it should be mentioned that non-take-up is not only influenced by the actions and decisions of eligible individuals but also by the accuracy of administrative decisions, for example, errors in evaluation procedures, discretionary decisions based on loosely defined program rules, or responses to individual circumstances (Hümbelin, 2016; Matsaganis et al., 2014). This human error in the application process, leading to the rejection of actually eligible people, is defined as secondary non-take-up (Van Oorschot, 1991).
Following the literature and the related available empirical evidence, we expect non-take-up to decrease considerably after the Austrian reform, given the broad range of encouraging elements of the new benefit (changes in minimum standards, reduction of access barriers, and destigmatization).

4 | DATA AND METHOD

The analysis is based on the Austrian EU-SILC data for 2009 and 2015, selected to provide a timely assessment of the non-take-up incidence. In 2012, the collection of the Austrian EU-SILC data was changed from survey to register data; data for 2008-2011 originally collected through interviews were reproduced using register data (Statistik Austria, 2014). This allows for a more accurate assessment of non-take-up rates, as the impact of potential measurement errors related to reported income data in surveys is reduced.

| Simulation of non-take-up

For the quantitative analysis of non-take-up, the tax-benefit microsimulation model EUROMOD/SORESI is used. It contains the Austrian part of the EU-wide model EUROMOD (Sutherland & Figari, 2013) with specific adaptations to the tax-benefit system in Austria (Fuchs & Gasior, 2014). The policy areas covered include social security contributions, income tax, and cash transfers. For the current study, the model has been expanded to cover the detailed eligibility rules of the benefits of last resort. According to the specific means-test regulations in the respective Federal States, the level of the household disposable income is adjusted by deductible incomes (e.g., transfers like family allowance, child tax credit, and care benefit) and deductible expenditure in the form of maintenance payments. If the household's adjusted disposable income is below the calculated total household need, the household is considered eligible for minimum income benefit or monetary social assistance in terms of the income-related means test. In practice, eligibility for the benefit is based not only on the income situation but also on the wealth possessed by the household. Unfortunately, the underlying EU-SILC data do not contain sufficient information in this regard; thus, non-take-up rates are estimated by using a proxy for the wealth test.

In order to test the robustness of the simulated results, several validity and sensitivity checks are performed. To provide a robustness test for the wealth condition, two additional scenarios, one without a wealth test and one where home ownership is considered as a proxy, are evaluated. Additionally, beta error rates, defined as the share of households who report the receipt of the benefits of last resort in the survey among those simulated as non-eligible, are calculated. The sensitivity of the simulation model is evaluated by increasing or decreasing the modelled needs by 5-15%.

| Regression model

In the second part of the analysis, the drivers of non-take-up are assessed. Due to a potentially nonrandom selection process (e.g., of the non-employed) into eligibility for the benefit, limiting the regression analysis to the group of eligible households might bias the resulting coefficients. To account for this possible endogeneity bias, a two-stage Heckman selection model is used (Heckman, 1976). In the first step, the selection equation explaining eligibility is calculated. Here, all households of the dataset are included: those simulated as eligible for monetary social assistance or minimum income benefit take the value 1, those who are not the value 0. A stylized sketch of this two-stage procedure is given below.
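The following minimal Python sketch illustrates such a two-stage Heckman-style estimation with statsmodels. It is an illustration under stated assumptions, not the authors' estimation code: the covariate matrices and variable names are hypothetical, and the second stage is simplified to a linear probability model (the paper's outcome, take-up, is binary, so a probit outcome equation would be a natural alternative).

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_stage(X_sel, eligible, X_out, takeup):
    """Two-stage Heckman-style selection correction.

    Stage 1: probit of the simulated 0/1 eligibility indicator on all
    households. Stage 2: take-up equation on the eligible subsample,
    augmented with the inverse Mills ratio from stage 1."""
    Xs = sm.add_constant(np.asarray(X_sel, dtype=float))
    probit = sm.Probit(np.asarray(eligible), Xs).fit(disp=False)
    xb = Xs @ np.asarray(probit.params)           # linear index
    mills = norm.pdf(xb) / norm.cdf(xb)           # inverse Mills ratio
    mask = np.asarray(eligible) == 1
    Xo = sm.add_constant(np.column_stack([np.asarray(X_out)[mask],
                                          mills[mask]]))
    # Linear probability sketch of the binary take-up outcome
    outcome = sm.OLS(np.asarray(takeup)[mask], Xo).fit()
    return probit, outcome
```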
The explanatory variables of the selection model include the activity status of the household head (employed, unemployed, inactive, or retired), as participation in the labour market is considered an important factor in terms of eligibility. In addition, homeownership and personal characteristics like the number of children below the age of 18, age specified in a quadratic term, as well as the highest education level achieved by the household head, are included in the selection model.

| Expert interviews

To check for plausibility, expert interviews discussing the empirical results were conducted as the last step of the analysis. The interviews were based on a semistructured interview guide to ensure the coverage of all relevant aspects. This qualitative approach not only validates the quantitative research results but also complements them with the more in-depth knowledge of the experts, as proposed by the methodological literature (see, for example, Schnell, Hill, & Esser, 1993). The experts provided an assessment of the efficiency of the reformed benefit and the institutional processes following the policy change. We were furthermore interested in the experts' evaluation of what problems still exist with the minimum income benefit scheme and what could be done for further improvement. The selection of the experts is based on their professional background with the aim of covering different perspectives, including that of a government official responsible for the benefit design and provision, of two nongovernmental organizations representing benefit receivers and persons in need, as well as of an academic researcher. We carried out three face-to-face interviews and one telephone interview:

• City of Vienna, Department "Social Affairs, Social and Health Law": Peter Stanzl, head "Reporting, Strategy and …"

| The effect of the policy reform on non-take-up rates

Our analysis clearly indicates a substantial impact of the reform in improving the target efficiency of the benefit of last resort. Comparing the situation in 2009 and 2015, estimated non-take-up rates dropped considerably from 53% to 30% in terms of caseload and from 51% to 30% in terms of expenditure. While in 2009, 114,000 households eligible for monetary social assistance did not claim, forgoing EUR 423 million, this number decreased to 73,000 households and EUR 328 million for minimum income benefit in 2015. The reform led to a significant increase in take-up rates, confirmed both by the 95% confidence interval for the number of non-take-up households and by the sensitivity analysis where the simulated needs have been adjusted by ±5% (Table 2). Whereas beta errors amount to 30-40%, the disposable incomes of the respective households are comparably high. This indicates that the proxy of using households instead of benefit units constitutes a certain measurement error, but it also suggests that non-take-up rates are rather underestimated. When using an alternative wealth-test specification with home ownership as a proxy, non-take-up rates increase by about five percentage points in 2009 and 10 percentage points in 2015. If no wealth test is applied, non-take-up increases by about 10 and 20 percentage points (Table A1). Although this sensitivity analysis per se cannot test the validity of the chosen proxy for the wealth test, it shows at least that it reduces the number of households simulated as eligible to a significant extent.
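The caseload and expenditure rates reported above can be illustrated with a minimal pandas sketch of the simulated means test. All column names are hypothetical stand-ins for the EUROMOD/SORESI output described in the text, and survey weights are omitted for brevity.

```python
import pandas as pd

def non_take_up_rates(hh: pd.DataFrame):
    """Caseload and expenditure non-take-up among simulated-eligible households."""
    # Income-related means test: adjusted disposable income below total need
    adjusted = (hh["disposable_income"]
                - hh["deductible_transfers"]   # e.g. family allowance, care benefit
                - hh["maintenance_paid"])      # deductible expenditure
    eligible = (adjusted < hh["total_need"]) & hh["passes_wealth_proxy"]
    entitlement = (hh["total_need"] - adjusted).clip(lower=0)
    elig, ent = hh[eligible], entitlement[eligible]
    non_claim = ~elig["receives_benefit"].astype(bool)
    caseload = non_claim.mean()                     # share of eligible households
    expenditure = ent[non_claim].sum() / ent.sum()  # share of unclaimed entitlements
    return caseload, expenditure
```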
Using EU-SILC data based on survey instead of register data for 2009 considerably reduces the estimated non-take-up rates (caseload 42% and expenditure 40%). This is driven by significant over-reporting of incomes (in particular employment incomes) at the lower end of the income distribution (Statistik Austria, 2014). Thus, basing this analysis on register data clearly improves the quality of the results.

| Drivers of (non-)take-up

The second part of the analysis focuses on the population groups more likely to be eligible for the benefit and on the socioeconomic characteristics driving take-up. The first step of the Heckman selection model explains eligibility for the benefit, including all households. Among the household characteristics, the number of children only explains eligibility in 2015. As expected, households owning their home are less likely to be eligible, as they are in many cases better off and do not pass the wealth test. In the second step of the Heckman selection model, (non-)take-up is assessed for eligible households only. The relative income gap is used as a proxy for material urgency. The results only partly support the hypothesis of pecuniary determinants: the higher the potential benefit amount, the more likely is the benefit claim in 2009, but not after the reform. This is in line with the finding that non-take-up in terms of claimants is higher than in terms of expenditure in 2009, whereas the two are equally high in 2015. Explanations for this change could be the improvement of application processes and better information, which decreased the costs of claiming the benefit. Another proxy for application costs is a migration background, defined by the country of birth. The overall explanatory power of migrant status is rather weak, although experts point out that non-EU migrants are more likely to participate than EU migrants (once eligible for the benefit) because their lack of alternative resources outweighs potential information deficits (Stanzl, 2018). We also control for household composition and find that, in the specification for 2009, participation among lone parents is significantly higher than among single adults. Besides a higher acceptance probability by officials due to the special family situation, lower application costs (an expected longer eligibility spell related to child care obligations) and higher family responsibilities (Schenk, 2018) might also support the decision of lone parents to take up the benefit. The employment status yields significant coefficients in 2009, where households with an unemployed or inactive head have a higher likelihood of claiming benefits than households with an employed head. This finding meets the hypothesis that those households are likely to have a higher degree of need. Additionally, as they are in most cases already receiving welfare benefits, they may be better informed about their entitlements and thus have lower information costs. Also, their self-assessment of later earnings potential may be rather pessimistic. On the other hand, the working poor, that is, households with an employed head on a low income, often abstain from claiming top-up benefits as they might not be aware of the entitlement (Schenk, 2018). Again, there seem to be important changes to this behaviour and these assumptions after the reform: the employment status no longer constitutes a barrier to take-up, which might point to a greater awareness of the working poor about their rights. In both years, lower-educated heads are more likely to take up the benefit.
The financial need of highly educated households often represents a short-term financial crisis, which can be bridged by other means like family resources, while claiming the benefits would contradict their self-perception (Schenk, 2018). An additional obstacle is the wealth means test (Kargl, 2019). Households owning their home are less likely to take up the benefit in 2009, as they assume that they must mortgage or even liquidate their house in order to be eligible for the benefit. This no longer constitutes a barrier after the reform, due to a change in rules that might have reduced the related uncertainties. Social and psychological costs are approximated by the size of the municipality. We find a significant positive effect on take-up in 2015, which is somewhat surprising given that the reform provided improvements that should result in reduced stigma. However, housing costs are considerably higher in urban areas, which could lead to higher benefit dependency and, thus, also higher take-up. At the same time, experts point out that information flows are better in bigger cities (Kargl, 2019). Thus, the size of the municipality might be regarded as an indicator going beyond its function as a proxy for anonymity. Altogether, this suggests that the reform of the social assistance benefit has not only resulted in higher participation rates but has also significantly reduced barriers to take-up for specific subpopulation groups. Given the considerable improvements in overall take-up, non-take-up behaviour is less driven by observed characteristics than before the reform. However, the qualitative results shed light on still existing problems and needs for further action. Experts identify a persisting need for low-threshold information and support in completing the benefit application for low-educated and deprived clients (Kargl, 2019). Additional support is also needed for low-income workers and unemployment benefit recipients, who often find it challenging to apply for the top-up benefit (Stanzl, 2018). Experts also point to unrealised elements of the reform, such as the introduction of an emergency aid to provide immediate support rather than receiving the benefit only after the legal decision period of up to 3 months, and the planned one-stop-shop for "able-to-work" recipients at the job centres (Schenk, 2018). Finally, the coverage of housing costs within the minimum income benefit and/or within (general) housing allowances is still far from being transparent, with very different practices across Federal States (Pfeil, 2018).

6 | CONCLUSION

The paper studies the effects of the 2010/11 social assistance benefit reform in Austria on non-take-up. The reform changed the social assistance scheme in place to the minimum income benefit, which in substance is quite similar to its predecessor but introduced a more uniform and on average higher minimum living standard, accelerated and simplified the application process, and provided (better) inclusion into the health insurance scheme and labour market programmes, all aimed at reducing access barriers and destigmatizing benefit recipients. By studying the change in non-take-up, a problem that most means-tested benefits in European welfare states struggle with, this paper contributes to the existing literature in three ways: First, it offers insights into the target efficiency of the benefits of last resort in Austria before and after the 2010/11 policy reform. Second, it analyses the social determinants of take-up.
Third, it contributes to the methodology of analysing non-take-up rates: by relying on register data but comparing the results with estimates based on survey data, the underestimation of non-take-up due to misreported incomes in survey data becomes evident. Results show that non-take-up of monetary social assistance in 2009 amounted to 53% in terms of caseload and 51% in terms of expenditure. In 2015, after the policy reform, the estimated non-take-up rates of minimum income benefit dropped to 30% for both the number of households and expenditure. Applying several sensitivity analyses and taking confidence intervals into account, the results indicate that the reform has led to a significant increase in participation rates, that is, improved take-up behaviour of those in need of support. Although the results still confirm the considerable magnitude and persistence of non-take-up pointed out in previous literature, welfare states can tackle a considerable share of the problem. As also suggested in the literature, at least in the Austrian case, a significant part of non-take-up was caused by nontransparent and complex schemes, poor information, and institutional barriers, dimensions that the reform managed to address. Following the three t's introduced by Van Oorschot (1991), the measures taken include both threshold and trade-off characteristics. Elements such as the higher degree of anonymity within the claiming process, the provision of health insurance, binding minimum standards, the limitation of the maintenance obligations, and new regulations related to the liquidation of wealth, as well as the general coverage of the benefit reform in the media and in public discussions, led to improved access to the benefit, shown by the increase in take-up and confirmed in more detail by the analysis of the expert interviews. The new name changed the narrative of the benefit from social support to a social right to a minimum living standard (Pfeil, 2018). This may not only have contributed to a higher perceived eligibility (threshold character) and an improved attitude towards welfare and the benefit, but it may also have been a trigger to realise that providing a minimum living standard is the inherent purpose of the welfare system. Nevertheless, experts point to several still existing problems and provide guidance for future political action (Kargl, 2019; Pfeil, 2018; Schenk, 2018; Stanzl, 2018). This includes the non-realised elements of the reform, that is, an emergency aid and a one-stop-shop for employable benefit receivers, which would provide better support to people already receiving unemployment benefits or low employment incomes. In terms of the coverage of housing costs, a complete separation of housing benefits from the minimum income benefit and the provision of extended (general) housing allowances solely by the Federal States could be discussed. All these measures would increase the acceptance of such (top-up) benefits, both among entitled clients and the general population. Finally, they would also save administrative costs and enable better political governance.

(Appendix table note.) In Sbg., Styria, and Tyrol, all stipulated long-term recipients receive special payments. Abbreviations: BRA, basic rent amount; FBH, family allowance; HA, heating allowance; LP, lone parent; MS, minimum standard; p.P., per person; RA, rent/housing allowance.
HMNPPID—human malignant neoplasm protein–protein interaction database

Background: Protein–protein interaction (PPI) information extraction from biomedical literature helps unveil the molecular mechanisms of biological processes. In particular, the PPIs associated with human malignant neoplasms can unveil the biology behind these neoplasms. However, such a PPI database has not been available so far. Results: In this work, a database of protein–protein interactions associated with 171 kinds of human malignant neoplasms, named HMNPPID, is constructed. In addition, a visualization program, named VisualPPI, is provided to facilitate the analysis of the PPI network for a specific neoplasm. Conclusions: HMNPPID can hopefully become an important resource for research on the PPIs of human malignant neoplasms since it provides readily available data for healthcare professionals. Thus, they no longer need to dig into a large amount of biomedical literature, which may accelerate research on the PPIs of malignant neoplasms.

Background
Research on protein–protein interactions (PPIs) is critical to understanding how proteins function within the cell. Therefore, hundreds of thousands of PPIs generated by high-throughput methods, such as yeast two-hybrid screening and affinity purification coupled to mass spectrometry, have been collected in specialized biological databases such as the Database of Interacting Proteins (DIP) [1], the Biomolecular Interaction Network Database (BIND) [2], IntAct [3], the Human Protein Reference Database (HPRD) [4], and the Biological General Repository for Interaction Datasets (BioGRID) [5]. However, these high-throughput methods are associated with high error rates (both false-positive and false-negative); for example, some genome-wide screens might be associated with false-positive rates exceeding 50% [6-9]. On the other hand, the rapidly growing biomedical literature provides a significantly large and readily available source of PPI data, and numerous PPIs have been manually curated into the PPI databases by biomedical curators [10, 11]. Furthermore, PPI data are used globally for the prediction of protein properties, systematic network analysis, and the evaluation of novel PPI datasets produced in a high-throughput fashion [12]. To this end, several integrated PPI databases have been constructed. For example, HIPPIE (Human Integrated Protein-Protein Interaction rEference) is a human PPI dataset with a normalized scoring scheme that integrates multiple experimental PPI datasets, including DIP, IntAct, BIND, HPRD, BioGRID, the Molecular INTeraction database (MINT) [13], and MIPS [14]. The HIPPIE web tool allows researchers to conduct network analyses focused on likely true PPI sets by generating subnetworks around proteins of interest at a specified confidence level. IID (Integrated Interaction Database) is an online database of known and predicted eukaryotic protein-protein interactions in 30 tissues of model organisms and humans; it covers six species (S. cerevisiae (yeast), C. elegans (worm), D. melanogaster (fly), R. norvegicus (rat), M. musculus (mouse), and H. sapiens (human)) and up to 30 tissues per species [15]. The STRING database consolidates known and predicted protein-protein association data for a large number of organisms [16].
Apart from collecting and reassessing available experimental data on protein-protein interactions and importing known pathways and protein complexes from curated databases, its interaction predictions are derived from the following sources: (i) systematic co-expression analysis, (ii) detection of shared selective signals across genomes, (iii) automated text mining of the scientific literature, and (iv) computational transfer of interaction knowledge between organisms based on gene orthology. In addition, there are also some protein-pathway association databases. For example, PathDIP integrates data from 20 source pathway databases, "core pathways," with physical protein-protein interactions to predict biologically relevant protein-pathway associations, referred to as "extended pathways" [17]. Since the dysfunction of some PPIs leads to many diseases (e.g., cancer), the analysis of PPI networks has become one of the powerful approaches to elucidating the molecular mechanisms underlying complex diseases at the system level [18, 19]. Some efforts have been made to construct cancer-related PPI databases. Among others, CancerNet is a cancer-specific database that provides cancer-specific molecular interaction networks across multiple cancer types [20]; currently, 33 human cancer types are included. The interactions comprise PPIs, miRNA-target interactions, and miRNA-miRNA synergistic interactions. The experimentally detected PPIs were assembled from five major PPI databases (BioGRID, DIP, HPRD, IntAct, and MINT), and the miRNA-target interactions combine the predicted targets from six algorithms with two experimentally validated data sets. The Human Cancer Pathway Protein Interaction Network (HCPIN) is a collection of proteins from cancer-associated signaling pathways together with their protein-protein interactions [21]; it was constructed by combining proteins from seven KEGG (Kyoto Encyclopedia of Genes and Genomes) [22] classical cancer-associated signaling pathways with protein-protein interaction data from HPRD. Reference [23] constructed initial networks of protein-protein interactions involved in the apoptosis of cancerous and normal cells using two human yeast two-hybrid data sets [24, 25] and four online interactome databases (BIND, HPRD, IntAct, and Himap [26]). Their method allows the identification of cancer-perturbed protein-protein interactions involved in apoptosis and of potential molecular targets for the development of anti-cancer drugs. Currently, the PPIs in these cancer-related PPI databases are manually extracted and curated from the literature by human experts. However, since the number of biomedical publications regarding PPIs is growing at an explosive speed, automatic extraction of PPIs from the literature has been adopted to improve the efficiency of PPI information extraction. To this end, in this work, a Human Malignant Neoplasm Protein-Protein Interaction Database (HMNPPID) was constructed, whose data were extracted by an automatic PPI extraction tool, named PPIExtractor [27], from a large number of PubMed abstracts involving human malignant neoplasms. The main contributions of our work are as follows. First, HMNPPID provides readily available PPIs of specific malignant neoplasms for healthcare professionals, which can boost the efficiency of research on the PPIs of human malignant neoplasms. HMNPPID can thus hopefully become an important resource for this research.
In addition, we provide a visualization program, VisualPPI, to help experts analyze the PPI networks of specific malignant neoplasms and thus discover the molecular mechanisms behind them.

Implementation

The protein-protein interaction extraction system for biomedical literature
The number of biomedical publications involving PPIs is increasing at an explosive speed, and it is extremely difficult for PPI database curators to curate them efficiently. Therefore, in our previous work we developed PPIExtractor to automatically extract PPIs from the biomedical literature [27]. Given a MEDLINE abstract, PPIExtractor first applies feature coupling generalization (FCG) [28] to tag protein names in the text, next uses an extended semantic similarity-based method to normalize them, and then combines feature-based, convolution tree, and graph kernels to extract PPIs. To our knowledge, PPIExtractor is the first publicly available PPI extraction system that integrates named-entity recognition (NER), normalization, PPI extraction, and visualization. In addition, the technique used in each stage of PPIExtractor achieves state-of-the-art performance. Therefore, PPIExtractor was utilized to extract the PPIs of human malignant neoplasms from biomedical texts in this work.

The extraction of PPIs of malignant neoplasms
Following the International Classification of Diseases (ICD) uniform method established by the World Health Organization (WHO), ICD-10 version 2016 (https://icd.who.int/browse10/2016/en) classifies diseases according to etiology, pathology, clinical presentation, anatomical location, and other characteristics, organizing them systematically and representing them with a coding scheme. According to the classification in ICD-10, we chose 171 kinds of malignant neoplasms (they are listed on the website http://202.118.75.18:8082/HMNPPID.asp and divided into 13 categories as shown in Table 1), then downloaded their related PubMed abstracts, and finally extracted the PPIs from these abstracts using PPIExtractor. To obtain the relevant abstracts for all these malignant neoplasms, the first step is to construct an accurate query string for the PubMed search. For example, the query string for the disease Malignant neoplasm of lung is "((Malignant AND neoplasm) OR cancer) AND lung AND protein." The second step is to retrieve the relevant abstracts from PubMed using the query string; the filters "Humans" and "English" are activated to obtain only English abstracts associated with the human species, and the query cut-off date is set to December 1, 2015. In the last step, the downloaded abstracts are fed into PPIExtractor to extract the PPIs. Each PPI is assigned a confidence score by PPIExtractor to reflect its reliability. Usually, a PPI with a confidence score equal to or greater than zero can be regarded as reliable. In HMNPPID, however, the PPIs with confidence scores higher than −0.6 are retained since, due to the complexity of natural language expression, PPIs with confidence scores below 0 may still be true ones. The threshold of −0.6 was chosen because, in our previous study of protein complex detection in PPI networks [29], introducing the PPIs with scores higher than −0.6 into the original PPI networks achieved the best experimental results. In addition, interactions between two identical proteins were filtered out. A stylized sketch of this retrieval-and-filtering pipeline is given below.
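As an illustration of the pipeline just described, the following minimal Python sketch retrieves abstract IDs with Biopython's Entrez module and filters extracted PPIs by confidence. The filter tags, date handling, and the PPI record layout are assumptions for illustration, not the authors' exact code.

```python
from Bio import Entrez

Entrez.email = "curator@example.org"  # placeholder; NCBI requires an email

def fetch_abstract_ids(site, retmax=100000):
    """Search PubMed for one neoplasm site, mirroring the paper's query
    pattern; the human/English tags are assumed equivalents of the
    'Humans' and 'English' filters, and the date range mirrors the
    December 1, 2015 cut-off."""
    term = (f"((Malignant AND neoplasm) OR cancer) AND {site} AND protein "
            f"AND humans[MeSH Terms] AND english[Language]")
    handle = Entrez.esearch(db="pubmed", term=term, retmax=retmax,
                            datetype="pdat",
                            mindate="1900/01/01", maxdate="2015/12/01")
    return Entrez.read(handle)["IdList"]

def keep_reliable(ppis, threshold=-0.6):
    """Retain PPIs above the confidence threshold and drop self-interactions;
    each record is assumed to be a dict with 'protein_a', 'protein_b', 'score'."""
    return [p for p in ppis
            if p["score"] > threshold and p["protein_a"] != p["protein_b"]]
```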
File format
In HMNPPID, two PPI file types (text and Excel formats) are provided for each malignant neoplasm. As shown in Table 2, each PPI record contains seven columns, including the sentence from which the PPI was extracted, so that, besides the confidence score assigned by PPIExtractor, users can judge the reliability of the PPI from the sentence themselves.

Overview of HMNPPID
According to the classification in ICD-10 (version 2016), we extracted the PPIs of 171 kinds of human malignant neoplasms and obtained a total of 266,107 PPIs (with threshold −0.6). By contrast, the number of PPIs with a confidence score greater than or equal to zero is 72,866. The number of abstracts downloaded from PubMed for each specific neoplasm and the number of PPIs extracted from those abstracts can be found on the website. Figures 1 and 2 show the numbers and proportions of the PPIs of the different malignant neoplasms, respectively. As can be seen from the figures, there are significant differences among these malignant neoplasms. For example, malignant neoplasms of digestive organs (C15-C26), of the breast (C50), and those stated or presumed to be primary, of lymphoid, hematopoietic and related tissue (C81-C96), have many more PPIs than malignant neoplasms of bone and articular cartilage (C40-C41). In addition, the occurrence frequencies of unique PPIs across the 13 categories of malignant neoplasms are presented in Fig. 3. The majority of PPIs are associated with only one particular category (i.e., the occurrence frequency of the PPI is one); 44,220 PPIs are associated with two categories, 15,565 with three, and 7,374 with four. It is noteworthy that, as shown in Table 3, 27 PPIs are relevant to all 13 categories. Such PPIs tend to be more valuable for healthcare professionals since they may have a biological relation with more malignant neoplasms than others. For example, p53 has been described as "the guardian of the genome" because of its role in conserving stability by preventing genome mutation [30]. The combination of p53 and MIB-1 demonstrates prognostic significance in male germ cell tumors [31] and human bladder tumors [32] (row 2 in Table 3). Activated p53 binds DNA and activates the expression of several genes, including WAF1/CIP1 encoding p21 and hundreds of other downstream genes [33] (row 3 in Table 3). Overexpression of p53 and Ki-67 could be used to discriminate the low-risk luminal A subtype in breast cancer [34] (row 4 in Table 3). p53, cathepsin D, and B-cell lymphoma 2 (Bcl-2) are joint prognostic indicators of breast cancer metastatic spreading [35] (row 5 in Table 3). In addition, ribosomal S6 kinase 1 (S6K1) is a downstream component of the mammalian target of rapamycin (mTOR) signaling pathway and plays a regulatory role in translation initiation, protein synthesis, and muscle hypertrophy [36] (row 6 in Table 3).

Evaluation of HMNPPID data
For a PPI database, the quality of its data is of great importance. However, no cancer-relevant PPI gold standard currently exists. To assess the quality of the data in HMNPPID, we first explored the performance of PPIExtractor using the PPIs in HPRD, since the PPIs in HPRD were also collected from the literature and their reliability is justified (they are curated by expert biologists), making the comparison meaningful. HPRD includes 39,240 PPIs obtained from a set of published articles.
We used PPIExtractor to extract 54,808 unique PPIs with the threshold 0 from the abstracts of the same article set (since the full texts of many articles are not publicly available, we only used the abstracts), and 12,870 of the HPRD PPIs (accounting for 32.8% of the total) were matched. We further analyzed some of the results to identify the types of recall errors. The PPIs in HPRD were curated by expert biologists from both abstracts and full texts. Since PPIExtractor was applied only to the abstracts, PPIs present only in the full texts were missed; this accounts for about 68% of the recall errors. In addition, some PPIs in HPRD were extracted by PPIExtractor but with a confidence score below zero (accounting for about 21% of the recall errors): due to the complexity of protein-interaction expressions, PPIExtractor may fail to extract some true PPIs. In fact, if the threshold is relaxed to −0.6, almost half (48.08%) of the HPRD PPIs can be extracted. Finally, the protein names of HPRD PPIs are the formal ones assigned by expert biologists, which often differ from those used in the texts. For example, the HPRD PPI (INSR 00975 NP_000199.2 FABP4 02698 NP_001433.1 in vitro; in vivo 1648089) can be extracted from the sentence "Kinetic analysis indicated that stimulation of ALBP phosphorylation by insulin was attributable to a 5-fold increase in the Vmax…" in the abstract with PubMed ID 1648089. ALBP is an alias of FABP4 (fatty acid-binding protein 4), and insulin here refers to the insulin receptor, an alias of INSR. The failure to match insulin with INSR in the matching program leads to the recall error for this HPRD PPI. Such errors account for about 11% of the recall errors.

Furthermore, to assess the quality of the data in HMNPPID, we compared it with the PPIs in HCPIN. There are 9,784 PPIs among HCPIN proteins. However, since these PPIs are not directly available, we reconstructed them from the PPIs of seven pathways (apoptosis, cell cycle, Janus kinase, mitogen-activated protein kinase, PI3K, transforming growth factor, and Toll-like receptor) provided on the HCPIN website (http://nesg.org:9090/HCPIN/Show-Pathway.jsp), obtaining a total of 5,815 PPIs. As a result, 1,636 HCPIN PPIs (accounting for 28.13% of the 5,815) were found in HMNPPID (among the 72,866 PPIs with confidence scores greater than or equal to zero). Similar to the case of HPRD, mismatches between the protein names in the texts and those in HCPIN cause many recall errors. Considering that the PPIs in HMNPPID were extracted from abstracts rather than full texts, the coverage rates (about 30%) of the HMNPPID data with respect to HPRD and HCPIN are still acceptable. What is more, the 39,240 PPIs in HPRD were curated by expert biologists from 20,074 articles, that is, fewer than two PPIs per article on average; in fact, in most cases only one PPI was curated per article. This suggests that expert biologists usually curate only the few novel PPIs while ignoring many other PPIs in the article. In contrast, PPIExtractor extracts all the PPIs in the abstracts into HMNPPID, which is especially useful for researchers who need to explore the relations between multiple PPIs from a single article or a set of related articles (such PPIs are usually associated with each other). This is also why PPIExtractor can extract more PPIs than HPRD contains from the same article set (54,808 vs 39,240).
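A simple way to reproduce such coverage checks is to match PPIs as unordered, alias-normalized protein-name pairs. The sketch below is illustrative; the alias map is a hypothetical example (mapping ALBP to FABP4 would, for instance, avoid the recall error described above).

```python
def coverage(extracted, reference, alias_map=None):
    """Fraction of reference PPIs recovered by the extractor.
    extracted/reference: iterables of (protein_a, protein_b) name pairs;
    alias_map: optional dict mapping aliases to formal names, e.g.
    {"ALBP": "FABP4", "INSULIN": "INSR"} (hypothetical example)."""
    alias_map = alias_map or {}

    def canon(name):
        n = name.strip().upper()
        return alias_map.get(n, n)

    def key(pair):
        a, b = pair
        return tuple(sorted((canon(a), canon(b))))  # order-independent match

    ref = {key(p) for p in reference}
    ext = {key(p) for p in extracted}
    return len(ref & ext) / len(ref)
```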
However, the quality of the PPI data that are in HMNPPID but not in HPRD or HCPIN is difficult to evaluate due to the lack of a gold standard.

The database website

As mentioned in the previous section, the PPIs of 171 types of malignant neoplasms were extracted with PPIExtractor and then used to construct the PPI database of human malignant neoplasms, HMNPPID. HMNPPID can be accessed through http://202.118.75.18:8082/HMNPPID.asp. As shown in Fig. 4, the PPI files are presented on the website in tabular form. For each malignant neoplasm, the number of abstracts retrieved from PubMed with the corresponding query string and the number of PPIs extracted from these abstracts are provided. In addition, the website also supports a query function (the query interface is shown in Fig. 5): users can search the PPIs by protein names (or Entrez IDs), protein name (or Entrez ID) pairs, and PubMed IDs.

PPI visualization program

To facilitate the analysis of the PPIs of a specific malignant neoplasm, a PPI visual analysis tool is needed. Though there are existing visual approaches to PPI analysis, such as STRING-DB [37], we provide a visualization tool of our own, called VisualPPI, because it is more convenient for displaying the detailed information about the PPI data in HMNPPID. It can be downloaded from the HMNPPID website (its interface is shown in Fig. 6). When a PPI file (text format) of a malignant neoplasm is opened in VisualPPI, a PPI network is displayed: the nodes in the network represent the proteins, and an edge indicates that a pair of proteins interact with each other. VisualPPI provides four graphical display modes, named "Circle layout," "FR layout," "Spring layout," and "ForceDirected layout" (as shown in Fig. 7). In addition, users can set the PPI filtering threshold as needed; the default value is 0, which means that only PPIs whose confidence scores are higher than 0 will be displayed in the network. For example, in Fig. 6, the display mode is "ForceDirected layout" and the threshold is set to 0. By selecting any region in the network (the selected nodes change from red to yellow), users can get detailed information about the PPIs at the bottom of the interface. In our opinion, VisualPPI can facilitate the analysis of the PPI network of a specific malignant neoplasm and may help discover the molecular mechanisms behind it.

Conclusions

The analysis of the PPIs of human malignant neoplasms helps unveil the molecular mechanisms behind them. However, it is difficult to manually extract all the PPIs from the large quantities of ever-growing biomedical literature. In this work, we constructed HMNPPID, a PPI database for human malignant neoplasms, using PPIExtractor on large amounts of biomedical text. HMNPPID can hopefully become an important and readily available resource for related research. We also provide healthcare professionals with VisualPPI to help them efficiently analyze the PPI network of one specific malignant neoplasm. As discussed in the "Background" section, there are already some cancer-related PPI databases, such as CancerNet and HCPIN. CancerNet provides cancer-specific molecular interaction networks across multiple cancer types, where the PPIs associated with a cancer are those whose two pair mates were both found to be expressed in that cancer (genes were considered expressed if their transformed expression level was equal to or above 2 (in log2(TPM + 1) scale) in at least 80% of samples) [20].
By contrast, HMNPPID provides PPI data specific to more types of human malignant neoplasms, extracted from large quantities of PubMed abstracts with PPIExtractor. For HCPIN, the interaction data concern cancer-associated signaling pathways but are not cancer-specific; in addition, they are a subset of HPRD, which was curated by expert biologists. Since the amount of biomedical literature regarding PPIs is growing at an explosive speed, it is time-consuming and labor-intensive to manually extract PPIs from the unstructured texts. For HMNPPID, the PPIs associated with a cancer were extracted from the cancer-related PubMed abstracts with the tool PPIExtractor. On the one hand, using PPIExtractor is much more efficient than manual curation: for example, it took only about 8 days to extract 54,808 unique PPIs with threshold 0 from the 20,074 PubMed abstracts corresponding to the HPRD article set on a PC with an Intel i3-3220 CPU and 4 GB of memory. On the other hand, PPIExtractor can achieve satisfactory precision if a suitable threshold is set (an extracted PPI is usually reliable with threshold 0); in fact, it achieved a precision of 79.23% on a DIP subset [27]. To keep the data up to date, we plan to update HMNPPID every half year (currently, the data in HMNPPID have been updated to April 30, 2019). In addition, our future research will focus on two areas in order to improve the quality and utility of the PPI database. First, we will improve the performance of PPIExtractor by introducing popular deep learning methods [38]. Second, we plan to extract the PPIs associated with human malignant neoplasms from the full texts of articles instead of abstracts only, which has recently become feasible with the PMC Open Access BioC RESTful server (https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PMC/). As discussed in the section "Evaluation of HMNPPID data", this will improve the recall performance of PPI extraction. This paper is a revised and expanded version of a paper [39] presented at the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2018).
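The threshold-filtered network view that VisualPPI provides, as described above, can be approximated in a few lines of Python with networkx; the input file format assumed below (one "protein1 protein2 score" triple per line) is an illustrative guess, not the actual HMNPPID export format:

```python
# Sketch of a threshold-filtered PPI network in the spirit of VisualPPI.
# Requires networkx and matplotlib; the input file format is assumed.
import networkx as nx
import matplotlib.pyplot as plt

def load_ppi_network(path, threshold=0.0):
    """Build a graph from lines of 'protein1 protein2 score',
    keeping only PPIs whose confidence score exceeds the threshold."""
    g = nx.Graph()
    with open(path) as fh:
        for line in fh:
            p1, p2, score = line.split()
            if float(score) > threshold:
                g.add_edge(p1, p2, score=float(score))
    return g

g = load_ppi_network("ppi_c50.txt", threshold=0.0)  # hypothetical file name
# spring_layout is a force-directed layout, analogous to the
# "ForceDirected layout" display mode described in the text.
pos = nx.spring_layout(g, seed=42)
nx.draw(g, pos, node_color="red", with_labels=True, font_size=8)
plt.show()
```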
4,890.2
2019-10-01T00:00:00.000
[ "Medicine", "Computer Science", "Biology" ]
The role of small-scale enterprise in entrepreneurship development in a developing economy: an analysis of bakery enterprises in Mubi-North Local Government Area of Adamawa State, Nigeria

The paper investigated bakery enterprises in Mubi-North Local Government Area in the year 2019 via socio-economic variables, costs and returns, as well as the prospects and problems associated with the business. The results reveal that all (100%) of the respondents are male, the majority (58%) were within the active productive age bracket of 35-46 years, and most had 6-15 years of experience in the business. Similarly, operating capital stood at 1.5 to 2.0 million naira, while business fixed assets stood at 1.5 to 2.5 million naira. The results further show that the cost of processing one 50 kg bag of flour into bread stood at N39,400 against total returns of N50,000, indicating a benefit of N10,600 for one production cycle. Further, the results reveal maximum, minimum and average production cycles per week of 5, 3 and 2 respectively, which give benefits of N53,000, N31,800 and N21,200 per week respectively. The study therefore concluded that baking is a profitable business in the study area and can be used as a veritable enterprise for promoting entrepreneurship among youths, especially graduates, because of its high prospects. The study recommended that government and non-governmental organizations should support and promote entrepreneurship development among youths through their various empowerment programmes, and that Entrepreneurship Development Centres in universities, polytechnics and colleges of education in Nigeria should promote baking as one of their enterprise areas because of its high prospects and acceptability among educated youths.

Keywords: Entrepreneurship, Entrepreneur, Small Scale, Role, Developing Economy.

INTRODUCTION

The concept of entrepreneurship as a remedy for unemployment is gradually receiving attention in many countries globally. Nigeria in particular has considered it imperative to pursue entrepreneurial objectives to engage its citizens in order to minimize the social vices usually associated with unemployment. Achieving this goal, however, may require new ideas, policies, approaches and procedures from all tiers of government, including their agencies, as well as non-governmental organizations (NGOs) that can encourage entrepreneurship activities for the growth and development of societies. Empirical reports over the years have indicated that over sixty-five percent (65%) of the total population of Nigerian citizens are youths below forty (40) years of age without sufficient job opportunities to meet their socio-economic needs (National Population Commission (NPC), 2016). Another disturbing issue which the government needs to address is the teeming number of young men and women graduating annually from universities and polytechnics: the National Youth Service Corps (NYSC, 2017) reported that from 2010 to date over one hundred and fifty thousand (150,000) graduates have been mobilized yearly, and less than sixty percent of this population are engaged. Holt (2006) described the entrepreneurial role as one of gathering and using resources, but also noted that resources, to produce results, must be allocated to opportunities rather than problems. He further observed that entrepreneurship occurs when resources are redirected to progressive opportunities, not used to ensure administrative efficiency.
Further, Robert Ronstadt, cited in Holt (2006), considered entrepreneurship the dynamic process of creating incremental wealth: wealth created by individuals who assume the major risks in terms of equity, time and/or career commitment, or who provide value for some product or service. The product or service itself may or may not be new or unique, but value must somehow be infused by the entrepreneur by securing and allocating the necessary skills and resources.

Entrepreneur

The word "entrepreneur" is derived from the French word "entreprendre", from entre ("between") and prendre ("to take") (Schumpeter, 1996). A typical entrepreneur is therefore a risk taker who embraces uncertainty in an effort to produce profit. Karl Vesper, also cited in Holt (2006), explained that the nature of entrepreneurship is often a matter of individual perception. He found that psychologists tend to view entrepreneurs in behavioral terms as achievement-oriented individuals driven to seek challenges and new accomplishments. Generally, however, the term "entrepreneur" may properly be applied to those who incubate new ideas, start enterprises based on those ideas, and provide added value to society through their individual initiative (Holt, 2006). In an attempt to understand entrepreneurship potential in the area, the researcher sought to conduct an in-depth analysis of one enterprise, the bakery enterprise, with the aim of understanding how viable the enterprise is with respect to costs and benefits and its potential for entrepreneurship growth, development and sustainable livelihoods, especially among youths. The questions which the research seeks to answer include: what are the socio-economic characteristics of the bakers in the study area; how are costs related to revenue in the baking enterprise; and what are the key contributions of the enterprise to societal growth and development?

Objective of the Study

The broad objective of the study is to analyze the baking enterprise in the study area. The specific objectives are to: i. describe the socio-economic characteristics of bakers in the study area; ii. estimate the costs and returns associated with the enterprise in the study area; iii. identify the contribution of the enterprise to societal growth and development.

Sampling distribution

A purposive sampling technique was utilized for the study. A list of all the bakeries in the local government area was collected from the leader of the bakers' association in the area, which served as the sampling frame. The bakeries and their proprietors in all twelve (12) wards of the local government area were considered for the study.

Sampling

All the proprietors of the bakeries were considered for the study; therefore, the population of the study is the same as the sample.

Data Collection Instrument

A well-structured, close-ended questionnaire was designed to solicit information from the respondents. The information generated includes socio-economic variables of the respondents such as age, experience, educational qualification, income, working capital and assets. Similarly, the costs and returns associated with the baking enterprise were collected, as well as information on observed prospects and challenges of the baking industry.

Data analysis

The socio-economic characteristics of the respondents were analyzed using frequency distributions, percentages and means, while business budgetary analysis and ratios were employed for the cost-benefit analysis.
Similarly, percentages and ranking were used to discuss the prospects and challenges of the baking enterprise. Frequency is the number of times an event/observation occurred, while percentage is the proportion of occurrence.

Results and Discussion

Table 2 presents results on the socio-economic characteristics of the respondents. It shows that all (100%) of the bakers are male, the majority (58%) are within the age bracket of 36-45 years, and most (52%) have 6-15 years of experience. It also reveals that the majority have N1.5-N2.0 million as working capital and N1.5-N2.5 million worth of business assets. Table 3 presents the costs and returns analysis of one production cycle, where one cycle refers to processing one bag (50 kg) of flour into bread. It reveals total costs of N39,400 and total returns of N50,000, indicating a difference of N10,600 as the benefit for the business. Table 4 presents the classification of respondents according to the number of production cycles per week; it shows a minimum of 3 production cycles per week, a maximum of 5, and an average of 2. Table 5 presents results on the prospects of baking in the study area. The results indicate that all the respondents considered the business viable. The majority (90%) reported that they derived satisfaction from it, while 82% considered it profitable. However, 42%, 32% and 26% indicated that they considered it for leisure, provision of goods and prestige, respectively. Table 6 presents results on the problems associated with the baking enterprise. It shows that the majority (82%, 79% and 74%) reported debt, health risks and the drudgery involved in the enterprise as the major problems associated with the business, with inadequate profit as the least problem.

IV. CONCLUSION

The study concluded that baking is a profitable business in the study area and can be used as a veritable enterprise for promoting entrepreneurship among youths, especially graduates, because of its high prospects. Findings of the study reveal that almost fifty percent of the respondents attended tertiary education while the remaining fifty percent attended secondary school. This shows that the majority of the bakers are educated, and it is therefore expected that they can appreciate and adopt inventions/innovations that can improve the baking enterprise. Similarly, a net return of N10,600 per production cycle with an average of two cycles per week corresponds to roughly N90,000 per month, which is considered good enough to sustain a livelihood. The study recommends that: i. Entrepreneurship Development Centres in universities, polytechnics and colleges of education in Nigeria should promote baking as one of their enterprise areas because of its high prospects and acceptability among educated youths; ii. graduates should be encouraged to venture into the baking enterprise because of its viability and profitability as a source of livelihood; iii. youths should be sensitized to form cooperatives in order to pool their resources together to establish and manage enterprises; iv. bakers should be assisted to acquire modern facilities and equipment to standardize their products; v. banks and other financial agencies should be oriented to support entrepreneurship development; vi. government and non-governmental organizations should support and promote entrepreneurship development among youths through their various empowerment programmes.

Reference: Schumpeter, J. (1996). The Theory of Economic Development. London: Transaction Publishers.
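As a quick check of the budgetary figures reported above, the following sketch recomputes the per-cycle benefit and the weekly and approximate monthly returns; the 52/12 weeks-per-month factor is an assumption used only for illustration:

```python
# Recompute the bakery budget figures reported above (amounts in naira).
total_cost = 39_400      # cost of processing one 50 kg bag of flour
total_returns = 50_000   # returns from one production cycle
benefit_per_cycle = total_returns - total_cost
print(benefit_per_cycle)  # 10,600 per cycle, as reported

# Weekly benefit for the maximum, minimum and average cycle counts.
for cycles in (5, 3, 2):
    print(cycles, benefit_per_cycle * cycles)  # 53,000 / 31,800 / 21,200

# Approximate monthly benefit at the average of 2 cycles per week,
# assuming 52/12 ≈ 4.33 weeks per month.
print(round(benefit_per_cycle * 2 * 52 / 12))  # ≈ 91,867, i.e. roughly N90,000
```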
2,182.6
2020-01-01T00:00:00.000
[ "Business", "Economics" ]
Research Article in Special Issue: Selected Papers from the 4th International Conference on Machine Learning, Image Processing, Network Security and Data Sciences (MIND-2022)

Cloud-Based Cost-Effective IIoT Model Towards Industry 5.0

Companies across industries increasingly depend upon cloud computing to manage their Industrial Internet of Things (IIoT) technology. In the IIoT, machines are connected over a network, and cloud computing plays an essential role by connecting people, devices, work processes, and buildings to deliver cloud services in industries. But cloud computing faces problems with task scheduling, high latency, and memory management, which affect the overall cost for industries using cloud services. A major concern in the cloud computing field is task scheduling, which is essential for achieving cost-effective execution and improving resource usage; it refers to assigning available resources to user tasks. This problem can be addressed effectively by improving task execution and increasing the use of resources. The waiting time between a client's request and the cloud service provider's response, known as latency, is another issue in cloud environments; this delay can be significantly higher in cloud computing, and as a result, users of various cloud services may incur increased expenses. Finally, among the most significant topics in cloud computing is efficient memory management, which handles integrated data and optimizes memory management algorithms. This paper proposes a cloud model for the IIoT which provides task scheduling, helps reduce latency, and optimizes memory management; the proposed model helps to reduce the cost of using cloud computing in the IIoT.

Introduction

Cloud computing is an internet-based computing service that provides services such as infrastructure, data storage, and applications remotely. Various computing resources, such as storage, processing power, databases, networking, analytics, artificial intelligence, and software applications, are available through cloud computing. Companies do not need to purchase and maintain a physical, on-site information technology (IT) infrastructure on their premises, because they may outsource these resources and use computing assets whenever required. Customers can access new resources or scale existing ones as required, paying charges based on the amount of resources used; users pay for these services the same way as for home utilities. Cloud computing resources are divided dynamically and assigned on demand. As illustrated in Figure 1, a cloud service provider offers computing resources to users; these resources are available on demand and are divided and assigned dynamically. The Industrial Internet of Things (IIoT) refers to networked sensors, instruments, and other resources used in manufacturing and energy management that are integrated with industrial computer applications (Figure 2). This connectedness makes it possible to collect, exchange, and analyze data, which may also positively affect the economy. The IIoT is an expansion of a distributed computing system that enhances and optimizes process controls using cloud computing and allows a higher level of automation. Due to the energy shortage and global climate change, power consumption is a hot issue in every field where power is used. In cloud computing, thousands of resources work simultaneously, but only some of them are used by clients for their requests, and many are simply idle.
Due to this, cloud computing can have over-utilized and under-utilized resources, a situation called resource imbalance. To minimize the power consumption of these idle or under-utilized resources, the load should be balanced by scheduling tasks onto the under-utilized resources. Figure 3 shows the overall development in cloud computing from 2016-17 to date. Techniques such as mobile cloudlets, dynamic programming-based offline microservice coordination, deep reinforcement learning-based resource allocation (DRLRA), delay-dependent priority-aware task offloading, and auction-based virtual machine (VM) resource allocation are used in cloud computing to solve the problems of resource allocation, high latency, and memory management. Despite load balancing, cloud computing still has problems with memory management and increased latency, and because of these problems the cost of cloud services is rising. This can be overcome by developing a cost-effective cloud model for the IIoT: such a model can schedule cloud resources and reduce resource imbalance, optimize memory management, and reduce latency, with an immediate impact on the cost of cloud services. Table 1 summarizes the techniques used, major findings, and limitations of research related to cost-effective cloud computing, and Table 2 traces the evolution of the IoT.

Table 1. Techniques, findings, and limitations of related research:

- Optimal resource provisioning (ORP) [1][2][3]. Findings: these publications use the Cloud Assisted Mobile Edge (CAME) system to increase the applicability of Multi-access Edge Computing (MEC) while handling requests from mobile devices with varying time stamps; resource provisioning issues are resolved on some cloud instances despite particular quality of service (QoS) requirements for mobile queries; among the proposed ORP algorithms for minimizing machine cost while meeting QoS requirements, the Optimal Resource Provisioning with Hybrid Strategy (ORP-HS) method repeatedly showed the best performance. Limitation: it deals with the problems of resource allocation and workload planning only when investigating multiplexing gain.
- Delay-dependent priority-aware task offloading [4,5]. Findings: in a hierarchical fog-based cloud architecture, this work examines a task offloading strategy with a multi-level feedback queuing model; [4] aims to meet deadlines while consuming less processing time overall; the results demonstrate that, under a prioritized scheduling strategy, both the minimal queuing waiting time and the offloading time positively benefit meeting the mission deadline.
- Dynamic programming-based offline microservice coordination [6,7]. Findings: [6] selects the best edge clouds to run microservices when a mobile user moves, introducing an offline approach for determining the ideal alignment of microservices when full system knowledge is available. Limitations: the overall cost of migration may exceed the time restriction; load balancing between microservices in multi-user mobile edge computing remains open.
- Energy optimization [8][9][10]. Findings: many cloud enthusiasts and vendors are interested in how the design might be made more effective for energy conservation by evolving cloud-related infrastructures from entirely centralized to completely distributed; this topic is covered in these works, which utilize their communication networks and reliable computing infrastructure and use multi-stranded technology to produce products that satisfy consumer needs. Limitations: the model cannot be used for users who are dispersed differently; it does not support other technologies, architectures, or virtual network functions (heterogeneous telecommunication networks, mobile networks).
- DRLRA [11,12]. Findings: this study investigates the problem of resource allocation in a mutative MEC setting, considered from two different approaches; several tests were run to establish the effectiveness of DRLRA, and the experimental results showed that the proposed DRLRA achieved much greater efficiency than the default Open Shortest Path First (OSPF) algorithm under various circumstances. Limitation: too much reinforcement can lead to an overload of states, which can diminish the results.
- Auction-based VM resource allocation [13,14]. Findings: [13] addresses the problem of assigning varied VM resource requirements to globally dispersed edge cloud nodes with bandwidth constraints so as to maximize total social welfare. Limitation: a dynamic pricing system and termination time with capital cost could be examined by including further QoS.
- Edge and fog computing [15,16]. Findings: this work clarifies basic principles and can serve as a first document to read for those beginning research in fog and edge computing; a summary of open challenges and possible research directions in Internet of Things (IoT), cloud, fog, and edge computing is also presented as food for thought. Limitation: although heterogeneity is a problem for both fog and edge, the capabilities of fog devices are expected to be more consistent than those of edge devices.
- Mobile cloudlets [17,18]. Findings: [17] thoroughly introduces the classic state-of-the-art technologies that comprise cloud computing, such as mobile cloudlets and edge computing cloudlets; the differences between fog computing and mobile edge computing are illustrated from the perspective of radio access networks and fog-based radio access networks. Limitation: the scheduling and placement of deployments introduce entirely new difficulties, since various remote execution locations must now be considered, either inside one cloudlet or across different cloudlets, as opposed to a single identified surrogate.

As for the evolution of the IoT (Table 2): after Ethernet was developed, people started looking at the idea of a network of intelligent devices. For instance, the first internet-connected appliance was a modified Coke machine at Carnegie Mellon University that could report its inventory and whether freshly filled drinks were cold. More extraordinary industrial applications were predicted in 1994, when Reza Raji, in an article for IEEE Spectrum (Menlo Park, California), described the idea as integrating and automating everything from home appliances to entire factories by sending small packets of data to a large set of nodes.

Methodology for the cost-effective cloud model in IIoT

A series of studies has been carried out in cloud computing on task scheduling, latency, and memory allocation to achieve cost-effective cloud computing. This paper proposes a cloud model for the IIoT which combines solutions for task scheduling, latency, and memory allocation to minimize the computational cost for industry.
Figure 4 shows the proposed system for an optimized cloud model that achieves better performance by adopting a task scheduling algorithm, memory management techniques, and cloud-edge nodes to reduce latency. The adaptive model employs the data skew technique on IoT application data and uses the cloud model to allocate resources by migrating them based on the load. The proposed cloud model employs an optimization technique to achieve load balancing, minimize the load, and improve computation efficiency. A computational cloud environment comprises various software and hardware components, including computing nodes, storage systems, databases, network resources, file systems, etc. It is made up of four main parts: the cloud user, the cloud resources, the cloud resource broker (CRB), and the cloud information service (CIS). When using the cloud, a user first communicates with a resource broker to submit a task for computation. Following that, resource discovery, scheduling, and task processing are carried out. Finally, the CIS is employed as an agent: it gathers all pertinent data, such as resource availability, node capacity, etc., and gives it to the resource broker so that scheduling decisions can be made. The interactions between the various cloud components proceed in the following steps: a. The cloud user runs their application and, after analyzing and specifying their requirements, submits their jobs to the CRB. b. The CRB collects all resource information and performs resource discovery. c. After authorizing the user and resource(s), the CRB schedules the job to the appropriate resource(s) or computing nodes. d. The resource(s) execute the job and return the computational result to the CRB. e. The CRB collects the result and provides it to the cloud user. With reference to the proposed model, this paper identifies different task scheduling algorithms used to perform task scheduling and compares their results to find the algorithm that gives the minimum response time compared to the others.

Task scheduling algorithms in cloud computing

Task scheduling algorithms are used to utilize resources effectively. Task scheduling variables include waiting time, response time, processing time, and makespan. An efficient scheduler must control these factors to ensure that the scheduling policy is as effective as possible. The complexity increases in a dynamic environment, where real-time tasks with deadlines must be arranged in advance according to priority.

Task scheduling architecture

The task scheduling process in cloud computing is depicted in Figure 5 [19]. The data center broker, which serves as an intermediary between the user and the data center, determines how the requesting users submit their tasks to the data center. The service level agreement (SLA), which is maintained between the user and the data center broker, is specified by the user when the tasks are placed in the data center. The tasks are then compiled into a task pool and transmitted to the data center as a queue. There are several hosts in a typical data center, each with one or more processor cores. In the cloud concept, each host is mapped to one or more VMs, where the work is actually completed using the recommended policy techniques. The data center broker supervises the task scheduling procedure.

Classification of task scheduling algorithms

The various task scheduling algorithms used in cloud computing are depicted in Figure 6 and summarized in Table 3:
- FCFS [19]: Jobs are ranked in the queue according to arrival time and distributed to resources in a first-come, first-served fashion. This approach is used when there is no deadline restriction, i.e., when a deadline is not an important constraint during task execution. Not preferable for complex systems.
- RR [19]: A time slot is available for each resource, and a task is allocated to each resource during its allocated time slot. Each task gets a fair allocation of resources, but the algorithm gives a poor makespan value.
- Min-Min [20]: Selects the smallest task from the available tasks and assigns it to the resource with the least execution time. It gives a good makespan value when most tasks have a small execution time, but the longest task must wait for the completion of all the small jobs.
- Max-Min [20]: Chooses the longest task from the pool of tasks (the one with the potential to take the longest to complete) and assigns it to the resource that will complete it in the shortest time. This algorithm is useful when priority must be given to the execution of large tasks first; however, the smallest tasks must wait a long time for all the long tasks to finish, starving the smaller tasks.
- PSO [19]: A meta-heuristic algorithm that considers both computation and data transmission time; it is used for workflow applications by varying computation and communication costs. It is a simple and effective algorithm with low computational cost, but it easily falls into local optima in high-dimensional spaces and has a low convergence rate in the iterative process.
- ACO [19]: Replicates ant foraging behavior: when ants look for food, they leave pheromones along the way so that other ants can easily follow their trail or locate the quickest path. This approach works well for finding solutions in computations with local variables, but it increases task completion time and does not consider load balancing.
- IWDC [21]: The algorithm has three steps: initializing path parameters, giving them values, and allocating tasks following path creation. Compared to other heuristic algorithms, it performs and uses resources well; it also offers better outcomes than various meta-heuristic algorithms such as PSO, ACO, and genetic algorithms. The energy consumption of resources is not considered.

Table 4 gives a comparative analysis of all the task scheduling algorithms, compared on makespan value, resource utilization, response time, energy consumption, and throughput. An effective task scheduling algorithm should have a low makespan, balanced resource utilization, low response time, low energy consumption, and high throughput.

Results and discussion

The comparison shows that the IWDC algorithm gives a better makespan value than the other algorithms. It also gives good resource utilization, minimum response time, and large throughput. Therefore, this algorithm can be used in the presented model to develop a cost-effective cloud model.

Conclusion

This paper presents a cost-effective cloud model for the IIoT. The proposed model can help minimize industrial computational cost by providing efficient task management, reducing latency, and optimizing memory allocation. This paper identifies various task scheduling techniques applied in cloud computing and compares them all.
Results and discussion show that each algorithm has distinct advantages for parameters such as makespan, response time, resource utilization, energy consumption, and throughput, but the IWDC algorithm gives good results for every parameter. If the IWDC algorithm is used for task scheduling, it helps to reduce the cost of computing in the IIoT; and if the cost of computation in industries is reduced, society benefits through lower costs of products and services, obtaining services and products from industry at minimum cost. This paper also presents the evolution of industries up to the current year. Manufacturers are now incorporating new technologies into their manufacturing facilities and operations, such as the IoT, robotics, artificial intelligence, cloud computing, and machine learning, to enable fast and efficient production of goods and services. The presented model will help the move towards Industry 5.0.

Table 4. Comparative analysis of the task scheduling algorithms by criterion:

- Makespan: IWDC has a better makespan than PSO.
- Resource utilization: FCFS gives the worst load balancing. RR provides balanced utilization of resources. In the Min-Min algorithm, the long tasks have to wait for smaller tasks to finish their execution, so the workload cannot be allocated properly and utilization is unbalanced. With Max-Min, larger tasks can complete on fast resources while small tasks run in parallel on other resources, giving good resource utilization. PSO offers good solutions for allocating all the work among the pool of available resources. IWDC gives better resource utilization and load balancing than PSO and the other task scheduling algorithms.
- Response time: With FCFS, the response time for smaller tasks at the back of the queue is longer because of long tasks at the front. In RR, each task is allocated a quantum of time for execution, so jobs do not need to wait and the response time is good. In Min-Min, the response time for a task with minimum completion time is good, but tasks with long execution times have to wait for allocation. Max-Min reduces the waiting time for long tasks and gives them minimum response time, but small tasks get longer response times. In PSO, when the search space is large the algorithm is slow, resulting in a longer response time. IWDC gives a lower response time than the other scheduling algorithms.
- Energy consumption: FCFS has low energy consumption. RR consumes more energy due to more context switches. Min-Min and Max-Min consume more energy for heterogeneous tasks. PSO has good energy efficiency compared to the other algorithms. The IWDC algorithm consumes more energy.
- Throughput: FCFS provides fast execution of tasks and gives good throughput. With Min-Min, throughput is high if tasks with short execution times predominate and low if tasks with long execution times predominate. With Max-Min, the number of tasks executed is higher. IWDC is faster than the other algorithms, and the result is excellent throughput.
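To make the Min-Min and Max-Min policies of Table 3 concrete, here is a minimal, self-contained Python sketch of both heuristics on a toy workload; the task lengths and resource speeds are invented for illustration and do not come from the paper:

```python
# Minimal sketch of the Min-Min and Max-Min heuristics from Table 3.
# Task lengths and resource speeds below are illustrative only.

def schedule(tasks, speeds, pick_max=False):
    """Greedy Min-Min (pick_max=False) or Max-Min (pick_max=True).

    tasks:  list of task lengths (arbitrary work units)
    speeds: list of resource speeds (work units per second)
    Returns (assignment, makespan), where assignment maps
    task index -> resource index.
    """
    ready = [0.0] * len(speeds)          # current finish time per resource
    remaining = list(enumerate(tasks))
    assignment = {}
    while remaining:
        # Earliest possible completion time of each remaining task.
        best = []
        for tid, length in remaining:
            times = [ready[r] + length / speeds[r] for r in range(len(speeds))]
            r = min(range(len(speeds)), key=lambda i: times[i])
            best.append((times[r], tid, r))
        # Min-Min takes the task that finishes earliest; Max-Min the latest.
        finish, tid, r = max(best) if pick_max else min(best)
        ready[r] = finish
        assignment[tid] = r
        remaining = [(t, l) for t, l in remaining if t != tid]
    return assignment, max(ready)

tasks = [8, 2, 4, 10, 1]
speeds = [1.0, 2.0]
print(schedule(tasks, speeds, pick_max=False))  # Min-Min: short tasks first
print(schedule(tasks, speeds, pick_max=True))   # Max-Min: long tasks first
```

Running both variants on the same toy workload makes the trade-off from Table 4 visible: Min-Min clears the short tasks quickly, while Max-Min places the long tasks on the fast resource first and lets the short tasks fill in afterwards.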
4,303.6
2023-05-19T00:00:00.000
[ "Computer Science" ]
Guaranteed State Estimation for Nonlinear Discrete-Time Systems via Indirectly Implemented Polytopic Set Computation

This paper proposes a new set-membership technique to implement polytopic set computation for nonlinear discrete-time systems indirectly. The proposed set-membership technique is applied to solve the guaranteed state estimation problem for nonlinear discrete-time systems with a bounded description of noises and parameters. A common practice for this problem in the literature is to search for an optimal zonotope to bound the intersection of the evolved uncertain state trajectory and the region of the state space consistent with the observed output at each observation update. The new approach keeps the polytopic set resulting from the intersection intact and computes the evolution of this intact polytopic set for the next time step by representing the polytopic set exactly as the intersection of zonotopes. Such an approach avoids the over-approximating bounding process at each observation update, and thus a more accurate state estimation can be obtained. An illustrative example is provided to demonstrate the effectiveness of the proposed guaranteed state estimation via indirectly implemented polytopic set computation.

I. INTRODUCTION

State estimation is formulated as the problem of estimating the real state of a system given the mathematical model of the system and noise-corrupted measurements of the system output [1]. There are roughly two main types of approaches to tackle the state estimation problem: stochastic approaches and deterministic approaches. The most notable stochastic approach for state estimation is the Kalman filter, which is an efficient and recursive procedure to estimate the system state in a way that minimizes the mean squared estimation error [2]. If all noises and perturbations are Gaussian, the Kalman filter turns out to be an optimal estimator. However, such probabilistic assumptions on the system can be unrealistic or hard to validate in practice.
Instead of describing the uncertainties or noises of a system by probability distributions, deterministic approaches to state estimation use various kinds of sets, such as ellipsoids, polytopes, intervals and zonotopes, to bound unknown perturbations, noises and estimation errors [3], [4], [5], [6]. The deterministic approaches are also called set-membership approaches, and they are particularly useful when probabilistic information on the systems concerned is lacking. The state estimators in these set-membership approaches use a compact set or a union of sets to contain all the state variables that are consistent with the available measurements and disturbance specifications [7]. Acting as worst-case techniques for estimation, set-membership approaches to state estimation were studied long ago in [8], where ellipsoids were first used to bound the set of all possible states for linear systems with noise-corrupted observations. Due to the shape restriction of ellipsoids, the ellipsoidal bounding of uncertain states can be quite conservative, and polytopes are preferable for the bounding tasks as they can approximate any compact convex set as closely as desired [9]. The use of polytopes for state estimation was studied in [10], [3], where the uncertain states of linear discrete-time systems were recursively bounded by polytopes or by the simpler parallelotopes. These set-membership approaches using polytopes for state estimation are often computationally demanding, and they are also restricted to linear systems or piecewise affine systems [11].

Set-membership state estimation for nonlinear systems was studied in [12], [13], where the admissible state space was bisected and selected into subsets to test their consistency with the observations via interval set computation. The main hurdles for interval-based state estimation are the so-called wrapping effect, which makes the solution conservative, and the curse of dimensionality, which makes the computational burden grow exponentially with the dimensionality of the state space. Similar to interval set computation, the dynamic evolution of a nonlinear system with a zonotopic set as the initial state can also be computed directly via zonotopic set computation, and the wrapping effect can be reduced greatly in comparison with interval set computation [14]. Besides the reduced wrapping effect, zonotopes, as a special kind of polytopes, are also more flexible in shape than intervals. Therefore, zonotopic set computation has been increasingly used for state estimation of nonlinear systems [4], [5], [15]. Nevertheless, zonotopic set computation can be used for state estimation of linear discrete-time systems as well [1]. A new class of sets called constrained zonotopes was also proposed in [16] for set-membership state estimation of linear discrete-time systems.
A common practice within these zonotope-based state estimation approaches for nonlinear discrete-time systems is to search for an optimal zonotope to bound the intersection of the evolved uncertain state trajectory and the region of the state space consistent with the observed output at each observation update. The optimized zonotope is then propagated for computing the dynamic evolution of the nonlinear system via zonotopic set computation. The set resulting from the intersection of the evolved uncertain state trajectory and the region of the state space consistent with the observed output at each observation update is essentially a polytopic set. This polytopic set is often over-approximated by one single zonotope. The repetitive bounding of the polytopic set by one single zonotope and the subsequent propagation of the over-approximated zonotopes impact negatively on the accuracy of state estimation, although great efforts have been made to obtain a tighter zonotopic bound of the intersection in [15].

The over-approximating bounding of a polytopic set by one single zonotope at each observation update in [4] becomes unnecessary if the dynamic evolution of a polytopic set can be computed for a nonlinear discrete-time system. Currently, there is no direct method to compute the dynamic evolution of a nonlinear discrete-time system with a polytopic set as the initial set. This paper proposes a novel idea to implement polytopic set computation for nonlinear discrete-time systems indirectly, which is first to represent the polytope exactly as the intersection of zonotopes and then to compute the dynamic evolutions of these individual zonotopes whose intersection forms the polytope. The proposed idea originates from the perspective of extending existing interval or zonotopic set computation for nonlinear discrete-time systems into polytopic set computation. The intersection of zonotopes was called a zonotope bundle in [17], where it was used for the efficient computation of reachable sets. The intersection of ellipsoids was also used to bound the reachable set of singular systems in [18]. However, the problem of reachable set computation is different from state estimation, because reachable set computation does not involve the intersection with the observation update at each step as in [19], [20].

The rest of the paper is organized as follows. Section II provides the mathematical formulation of the state estimation problem to be solved. Section III describes the proposed idea of indirectly implemented polytopic set computation for nonlinear discrete-time systems. The procedure of guaranteed state estimation via indirectly implemented polytopic set computation for nonlinear discrete-time systems is given in Section IV. An illustrative example to demonstrate the effectiveness of the proposed technique is provided in Section V. Finally, some conclusions and future work are drawn in Section VI.
II. PROBLEM FORMULATION

Consider the following nonlinear uncertain discrete-time system [4]:

$x_k = f(x_{k-1}, \omega_{k-1}), \qquad y_k = g(x_k, \upsilon_k), \quad (1)$

where $x_k \in \mathbb{R}^n$ and $y_k \in \mathbb{R}^p$ are the system state and the observed system output at time instant $k$, respectively; $\omega_k \in \mathbb{R}^{n_\omega}$ represents the time-varying process parameters and process perturbations; and $\upsilon_k \in \mathbb{R}^{n_\upsilon}$ represents the observation noise. The state function $f(x_{k-1}, \omega_{k-1})$ is assumed to be nonlinear, while the output function $g(x_k, \upsilon_k)$ is assumed to be linear, as in [4], [15]. It is also assumed that the initial state and all the uncertainties are bounded by known compact sets: $x_0 \in X_0$, $\omega_k \in W_k$ and $\upsilon_k \in V_k$.

Starting from the initial set $X_0$ for the system state, the problem to be considered is to estimate recursively the set $X_k$ ($k = 1, 2, \dots$) for the system state at future time instants. The estimated set $X_k$ should be guaranteed to bound any feasible system state under all the uncertainties. Denoting by $X_{y_k}$ the set of system states consistent with the observed output $y_k$ at time instant $k$, the set $X_k$ can be computed recursively as

$X_k = f(X_{k-1}, W_{k-1}) \cap X_{y_k}.$

It can be seen that the state estimation problem considered here involves the set computation for the dynamic evolution of the past system state $X_{k-1}$ and also the intersection of the two sets $f(X_{k-1}, W_{k-1})$ and $X_{y_k}$. Existing methods for this problem search for an optimal zonotope to bound the polytopic set resulting from this intersection at each time instant, and thus the initial state for the dynamic evolution in (1) is always a zonotope. This over-approximating bounding process at each time instant facilitates the computation of the dynamic evolution in (1), as the dynamic evolution of a nonlinear system with a zonotopic set as the initial state is straightforward [14]. However, the set resulting from the intersection is essentially a polytopic set, and it would be more accurate to compute the dynamic evolution of this exact polytopic set rather than that of an over-approximating zonotopic set. Since there is no direct method to implement polytopic set computation for nonlinear discrete-time systems, an indirectly implemented polytopic set computation technique is proposed for the first time in the following section.

III. POLYTOPIC SET COMPUTATION

Taking a set as the input of a function, set computation returns another set as the output of the function. Polytopic set computation involves the computation of the dynamic evolution of a nonlinear discrete-time system with a polytopic set as the initial state. It can be implemented indirectly through zonotopic set computation, as introduced in the following subsections.

A. Zonotopic set computation

A zonotope is a centrally symmetric convex polytope, and it is closely related to interval analysis in terms of set computation. Given a vector $p \in \mathbb{R}^n$ and a matrix $H \in \mathbb{R}^{n \times m}$, the zonotope $Z$ of order $n \times m$ is the set

$Z = p \oplus H B^m,$

where $B^m$ is a box composed of $m$ unitary intervals $B = [-1, 1]$ and $\oplus$ is the Minkowski sum of sets, which adds each member of one set to each member of the other set so as to obtain a new set.
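As a minimal illustration of this representation (my own sketch, not code from the paper), a zonotope can be stored as a center plus a generator matrix, and the Minkowski sum of two zonotopes is obtained by adding the centers and concatenating the generators:

```python
# Sketch: zonotope Z = p (+) H B^m stored as center + generators.
import numpy as np

class Zonotope:
    def __init__(self, center, generators):
        self.p = np.asarray(center, dtype=float)      # center p in R^n
        self.H = np.asarray(generators, dtype=float)  # generator matrix H in R^{n x m}

    def minkowski_sum(self, other):
        """Z1 (+) Z2: add centers, concatenate generator columns."""
        return Zonotope(self.p + other.p, np.hstack([self.H, other.H]))

    def support(self, direction):
        """Support value max_{x in Z} d^T x, useful for bounding checks."""
        d = np.asarray(direction, dtype=float)
        return d @ self.p + np.abs(d @ self.H).sum()

# A unit box in R^2 is the zonotope with H = I.
box = Zonotope([0, 0], np.eye(2))
seg = Zonotope([1, 1], [[0.5], [0.0]])  # a single line segment generator
print(box.minkowski_sum(seg).support([1, 0]))  # 1 + 1 + 0.5 = 2.5
```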
Representing the matrix $H$ by its column vectors, i.e., $H = [h_1 \; h_2 \; \cdots \; h_m]$, the zonotope can also be regarded as a set spanned by the column vectors of $H$, which are called line segment generators:

$Z = p \oplus h_1 B \oplus h_2 B \oplus \cdots \oplus h_m B.$

Geometrically, the zonotope $Z$ is the Minkowski sum of the line segments defined by the columns of the matrix $H$, translated to the central point $p$. In particular, the zonotope $Z$ degenerates to an interval vector, i.e., a box, when $H$ is a diagonal matrix, and to a single line segment when $m = 1$. The list of line segment generators is an efficient implicit representation of a zonotope, in terms of which set computations such as the Minkowski sum and difference are trivial. The explicit representation of a zonotope, i.e., its representation in the format of a polytope, is the zonotope construction problem, which aims to list all extreme points of a zonotope defined by its line segment generators. A relatively efficient algorithm was proposed in [21] to address the zonotope construction problem, where the addition of line segments was replaced by the addition of convex polytopes. Standard algorithms for polytope geometry, such as vertex enumeration for a polytope and the intersection of polytopes, have been implemented in the Multi-Parametric Toolbox [22].

Using zonotopes, Kühn developed a procedure to bound the dynamic evolution of a nonlinear discrete-time system with a guaranteed sub-exponential over-estimation [14]. The following theorem introduces the zonotope inclusion operator of Kühn's method [14]:

Theorem 1 (Zonotope inclusion). Consider a family of zonotopes represented by $Z = p \oplus \mathbf{M} B^m$, where $p \in \mathbb{R}^n$ is a real vector and $\mathbf{M} \in \mathbb{I}^{n \times m}$ is an interval matrix. A zonotope inclusion, denoted by $\diamond(Z)$, is defined by

$\diamond(Z) = p \oplus [\operatorname{mid}(\mathbf{M}) \;\; G] B^{m+n}, \quad (5)$

where $\operatorname{mid}(\mathbf{M})$ is the centered-point matrix of $\mathbf{M}$ and $G \in \mathbb{R}^{n \times n}$ is a diagonal matrix that satisfies

$G_{ii} = \sum_{j=1}^{m} \frac{\operatorname{diam}(\mathbf{M}_{ij})}{2}, \qquad i = 1, \dots, n,$

where $\operatorname{diam}(\mathbf{M}_{ij})$ is the length of the interval $\mathbf{M}_{ij}$. Under these definitions, it results that $Z \subseteq \diamond(Z)$.

Given a function $f(x): \mathbb{R}^n \to \mathbb{R}^n$ and $x \in Z \subset X \in \mathbb{I}^n$, where $Z = p \oplus H B^m$ and $X$ is the bounding box of $Z$, a centered inclusion function $F_c(Z)$ with $f(Z) \subseteq F_c(Z)$ can be deduced from the mean-value theorem [14], [4]:

$F_c(Z) = f(p) \oplus \nabla_x f(X)(Z - p),$

where $Z - p = H B^m$. Thus the centered inclusion function $F_c(Z)$ of $f(x)$ turns out to be a family of zonotopes represented by $q \oplus \mathbf{M} B^m$, where $q = f(p)$ and $\mathbf{M} = \nabla_x f(X) H$, which can be further bounded by the corresponding zonotope inclusion. This is the primary principle of Kühn's method to bound the dynamic evolution of a nonlinear discrete-time system by zonotopes, where the centered inclusion function is applied instead of the natural inclusion function. Kühn's method has been generalized to bound the dynamic evolution of nonlinear uncertain discrete-time systems in [4]. According to [4], the evolution of a zonotopic set as the initial state under a nonlinear uncertain function $f(x, W)$ can be bounded by the centered inclusion function

$F_c(Z, W) = f(p, W) \oplus \nabla_x f(X, W)(Z - p),$

where $W$ is the uncertainty set. Assuming that $f(p, W) \subseteq p_w \oplus J B^{w}$, the centered inclusion function $F_c(Z, W)$ can be further bounded as

$F_c(Z, W) \subseteq p_w \oplus J B^{w} \oplus \mathbf{M}_w B^m, \quad (9)$

where $\mathbf{M}_w = \nabla_x f(X, W) H$. Based on (5), $p_w \oplus J B^{w} \oplus \mathbf{M}_w B^m$ can be further bounded by a zonotopic set. Therefore, the dynamic evolution of a nonlinear uncertain discrete-time system with a zonotopic set as the initial state returns a zonotopic set as well, which is the essence of zonotopic set computation.
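The zonotope inclusion of Theorem 1 is mechanical to implement. Below is a small NumPy sketch (my own illustration; storing an interval matrix as a pair of lower/upper bound arrays is an assumed representation) that builds the generators of $\diamond(Z)$ from the interval matrix $\mathbf{M}$:

```python
# Sketch of the zonotope inclusion operator in Theorem 1.
# An interval matrix M is stored as two arrays: lower and upper bounds.
import numpy as np

def zonotope_inclusion(p, M_low, M_up):
    """Return (center, generators) of the inclusion  ⋄(Z)  for
    Z = p (+) M B^m with interval matrix M = [M_low, M_up]."""
    mid = (M_up + M_low) / 2.0             # centered-point matrix mid(M)
    diam = M_up - M_low                    # interval lengths diam(M_ij)
    G = np.diag(diam.sum(axis=1) / 2.0)    # G_ii = sum_j diam(M_ij)/2
    H = np.hstack([mid, G])                # generators of p (+) [mid(M) G] B^{m+n}
    return np.asarray(p, dtype=float), H

# Tiny example: a 2x1 interval matrix.
p = [0.0, 0.0]
M_low = np.array([[0.9], [-0.1]])
M_up = np.array([[1.1], [0.1]])
center, H = zonotope_inclusion(p, M_low, M_up)
print(H)  # [[1.0, 0.1, 0.0], [0.0, 0.0, 0.1]]
```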
The above discussion shows that the dynamic evolution of a nonlinear discrete-time system with a zonotopic set as the initial state can be computed directly through zonotopic set computation. Compared to interval set computation, where each variable is represented by an interval and interval arithmetic is used for set computation [23], zonotopic set computation has the benefit of a reduced wrapping effect. The reduced wrapping effect is demonstrated by the illustrative example shown in Figure 1, where the dynamic evolution of a nonlinear uncertain discrete-time system studied in [24] is computed for three steps via interval set computation and via zonotopic set computation, respectively. The two approaches start from the same initial state, and it can be seen that zonotopic set computation is less conservative in comparison with interval set computation.

B. Polytopic set computation

The dynamic evolution of a nonlinear discrete-time system with a polytopic set as the initial state cannot be computed directly, because the mathematical format of a polytope involves inequality constraints. However, using the proposed idea of representing a polytope exactly as the intersection of zonotopes, polytopic set computation can be implemented indirectly by computing the dynamic evolutions of the individual zonotopes whose intersection forms the polytope, since, by set theory, the image of an intersection is contained in the intersection of the images, i.e., $f(Z_1 \cap \cdots \cap Z_{n_z}) \subseteq f(Z_1) \cap \cdots \cap f(Z_{n_z})$. So the key for indirectly implemented polytopic set computation is to represent the polytope exactly as the intersection of zonotopes.

The following theorem provides the guideline for representing a 2-D polytope exactly as the intersection of parallelograms, which are simple zonotopes in 2-D space:

Theorem 2 (Representation of a polytope $P \subset \mathbb{R}^2$ exactly as the intersection of zonotopes). Assume that the polytope $P \subset \mathbb{R}^2$ has $n_c$ inequality constraints. Then the convex polygon $P$ can be represented exactly as the intersection of $n_c/2$ zonotopes if $n_c$ is even, or of $(n_c + 1)/2$ zonotopes if $n_c$ is odd.

Proof. As the polytope $P \subset \mathbb{R}^2$ has $n_c$ inequality constraints, it has $n_c$ edges associated with these $n_c$ inequality constraints. At each vertex of the polytope, two edges start from that vertex. Making use of these two edges, a zonotope, i.e., a parallelogram, can be constructed to contain the polytope. The polytope can be represented as the intersection of the constructed parallelograms once all of its edges have been used to construct parallelograms. As each parallelogram uses two edges, the number of parallelograms needed to represent the polytope exactly is $n_c/2$ if $n_c$ is even, or $(n_c + 1)/2$ if $n_c$ is odd. The construction of a parallelogram containing the 2-D polytope can be transformed into a linear programming (LP) problem that minimizes the sum of the base length and the side length, so that the parallelogram is minimal in volume.
Assume that at the vertex $V_j$, the two edges starting from the vertex are $a x_1 + b x_2 = p_1$ and $c x_1 + d x_2 = p_2$, according to the associated two inequality constraints. The constructed parallelogram should satisfy

$q_1 \le a x_1 + b x_2 \le p_1, \qquad q_2 \le c x_1 + d x_2 \le p_2,$

where $q_1$ and $q_2$ determine the size of the parallelogram and are optimized through the following LP problem:

$\max_{q_1, q_2} \; (q_1 + q_2) \quad (10)$

subject to

$a x_1^{(i)} + b x_2^{(i)} \ge q_1, \quad c x_1^{(i)} + d x_2^{(i)} \ge q_2, \quad i = 1, \dots, n_v, \quad (11)$

where $(x_1^{(i)}, x_2^{(i)})$ is the $i$th vertex of the polytope and $n_v$ is the total number of vertices of the polytope. These linear constraints guarantee that the constructed parallelogram contains the whole 2-D polytope. Once the parallelogram is constructed with the optimized $q_1^*$ and $q_2^*$, it can be re-represented in the format of a zonotope $Z = p \oplus H B^2$, where $p$ is the center of the parallelogram and $H$ is determined algebraically from its vertices.

Taking the 2-D polytope with the five vertices $(1, -3)$, $(0.5, 3)$, $(2, 6)$, $(3.5, 4)$ and $(3, -4)$ as an example, it can be represented exactly as the intersection of three zonotopes, as shown in Figure 3. It is worth noting that the two edges with the associated two inequality constraints for formulating the LP problem in (10)-(11) do not necessarily come from the same vertex. In fact, any two inequality constraints can be sequentially selected from the pool of all inequality constraints of the polytope to formulate the LP problem for finding the optimal parallelogram containing the polytope, as long as the edges from these two inequality constraints are not parallel. Using random inequality constraints of the 2-D polytope shown in Figure 3 to formulate the corresponding LP problems, the constructed parallelograms representing the polytope exactly are shown in Figure 4. The parallelograms obtained from two random edges in Figure 4 may not be as compact as those obtained from two specified edges coming from the same vertex, as shown in Figure 3. However, such an approach of using inequality constraints directly, instead of two specified edges coming from the same vertex, can easily be extended into higher-dimensional spaces: certain inequality constraints are sequentially selected from the pool of all inequality constraints of the higher-dimensional polytope to formulate the LP problem for finding the optimal zonotope containing the polytope, until all inequality constraints are used up.
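The LP in (10)-(11) can be sketched directly with scipy.optimize.linprog. In the toy run below, the polygon vertices are those of the example above, while the two constraint directions are assumed axis-aligned choices for illustration (giving a rectangle, a degenerate parallelogram); they are not directions taken from the paper:

```python
# Sketch of the LP in (10)-(11): find the largest q1, q2 such that the
# slab pair  q1 <= a*x1 + b*x2 <= p1,  q2 <= c*x1 + d*x2 <= p2
# still contains every vertex of the polytope.
import numpy as np
from scipy.optimize import linprog

def fit_parallelogram(vertices, e1, e2):
    """vertices: (n_v, 2) array of polytope vertices.
    e1 = (a, b, p1), e2 = (c, d, p2): two non-parallel edge constraints
    of the form a*x1 + b*x2 <= p1.  Returns (q1*, q2*)."""
    V = np.asarray(vertices, dtype=float)
    v1 = V @ np.array(e1[:2])              # a*x1 + b*x2 at each vertex
    v2 = V @ np.array(e2[:2])              # c*x1 + d*x2 at each vertex
    # maximize q1 + q2  <=>  minimize -q1 - q2, with q1 <= v1_i, q2 <= v2_i.
    A_ub = np.vstack([np.column_stack([np.ones_like(v1), np.zeros_like(v1)]),
                      np.column_stack([np.zeros_like(v2), np.ones_like(v2)])])
    b_ub = np.concatenate([v1, v2])
    res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None)])
    return res.x  # (q1*, q2*), equal to (min_i v1_i, min_i v2_i)

verts = [(1, -3), (0.5, 3), (2, 6), (3.5, 4), (3, -4)]
# Two illustrative (assumed) axis-aligned constraint directions:
print(fit_parallelogram(verts, e1=(1.0, 0.0, 3.5), e2=(0.0, 1.0, 6.0)))
# -> [0.5, -4.0]: the slab 0.5 <= x1 <= 3.5, -4 <= x2 <= 6 encloses the polygon.
```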
IV. GUARANTEED STATE ESTIMATION VIA INDIRECTLY IMPLEMENTED POLYTOPIC SET COMPUTATION

Based on the problem formulation in Section II and the proposed technique for indirectly implemented polytopic set computation in Section III, the general procedure of guaranteed state estimation for nonlinear discrete-time systems via indirectly implemented polytopic set computation is as follows:

• Step 1: Represent the past system state, a polytopic set $X_{k-1}$, exactly as the intersection of zonotopes $Z_{k-1}^{(1)}, \dots, Z_{k-1}^{(n_z)}$, where $n_z$ is the number of zonotopes whose intersection forms the polytopic set;
• Step 2: Compute the dynamic evolution of each of these zonotopes individually via zonotopic set computation, as formulated in (9);
• Step 3: Compute the set of system states $X_{y_k}$ that is consistent with the observed system output $y_k$; $X_{y_k}$ is a convex set due to the linearity of $g(x_k, \upsilon_k)$;
• Step 4: Compute the current system state, a new polytopic set $X_k$, as the intersection of the propagated zonotopes and $X_{y_k}$; the propagated zonotopes are transformed into the format of polytopes, as described in [21], before their intersection with the convex set $X_{y_k}$.

In this way, the system states are guaranteed to be contained in the computed polytopic sets. The computed polytopic set $X_k$ can still be an over-approximation of the real state set. However, such over-approximation comes mainly from the limited wrapping effect of zonotopic set computation, rather than from the extra over-approximation of a polytopic set by one single zonotopic set and the subsequent propagation of such an over-approximation as in [4], [5], [15]. Furthermore, the intersection of zonotopes also contributes to the reduction of conservativeness and to the convergence of the algorithm, since more constraints are propagated during the evolution process.

V. AN ILLUSTRATIVE EXAMPLE

A modified nonlinear uncertain discrete-time system studied in [25] is adopted as the illustrative example for the proposed set-membership state estimation of nonlinear discrete-time systems via indirectly implemented polytopic set computation. In the system, $\delta(k) \in [0.2, 0.3]$ is the uncertain parameter, $\omega(k) \in [0.4, 0.5]$ is the process perturbation, and $|\upsilon(k)| \le 0.1$ is the bounded measurement noise.

Following Section IV, the process of guaranteed state estimation via indirectly implemented polytopic set computation for this system is shown in Figure 5. The initial state is assumed to lie within the box $x_1(0) \in [0.05, 0.15]$ and $x_2(0) \in [0.05, 0.15]$. For this particular simulation, the real initial state is set to $x_1(0) = 0.1$ and $x_2(0) = 0.1$. As shown in Figure 5, a polytopic set is obtained from the dynamic evolution of this initial set, and this polytopic set is then intersected with the convex set $X_{y_1}$ consistent with the first observation. The renewed polytopic set from the intersection with this observation update is a hexagon, and it is represented by three zonotopes obtained from the LP formulation described in Section III-B. The dynamic evolution of these three zonotopic sets is computed individually via zonotopic set computation, as discussed in Section III-A. The polytopic set before observation at the second step is the intersection of these three propagated zonotopic sets, and this set is intersected with the set $X_{y_2}$ consistent with the second observation. The same procedure of representing the new polytopic set after observation exactly as the intersection of zonotopes is performed at the second step, and therefore the state at any future step can be bounded by polytopic sets in the same way.
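The repetition just described, i.e., the four-step procedure of Section IV applied at every time instant, can be summarized as a structural code skeleton. Every helper function below is a placeholder standing in for the LP-based decomposition of Section III, the zonotopic evolution of (9), and the zonotope-to-polytope conversion of [21]; none of them is a real implementation:

```python
# Structural sketch of the estimation loop in Section IV.
# Every helper below is a placeholder for an operation named in the text.

def decompose_into_zonotopes(polytope):
    """Step 1: LP-based decomposition of Section III, (10)-(11)."""
    raise NotImplementedError("placeholder")

def zonotope_evolution(zonotope):
    """Step 2: zonotopic evolution of the dynamics, as in (9)."""
    raise NotImplementedError("placeholder")

def consistent_state_set(y_k):
    """Step 3: states consistent with the observed output y_k."""
    raise NotImplementedError("placeholder")

def to_polytope(zonotope):
    """Zonotope-to-polytope conversion, as in [21]."""
    raise NotImplementedError("placeholder")

def intersect_polytopes(polytopes):
    """Step 4: intersection of polytopes (e.g., via [22])."""
    raise NotImplementedError("placeholder")

def guaranteed_state_estimation(X0, observations):
    """Recursively bound the state by polytopic sets (Steps 1-4)."""
    X = X0
    for y_k in observations:
        zonotopes = decompose_into_zonotopes(X)                  # Step 1
        propagated = [zonotope_evolution(Z) for Z in zonotopes]  # Step 2
        X_y = consistent_state_set(y_k)                          # Step 3
        X = intersect_polytopes(
            [to_polytope(Z) for Z in propagated] + [X_y])        # Step 4
    return X
```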
Repeating the processes in Figure 5, Figure 6 shows an example of nine steps of state estimation for this nonlinear discrete-time system, with a comparison to an existing method that approximates the polytopic set by one single zonotope with the minimized segments at each observation update [4]. The dashed polytopes in Figure 6 are those obtained after observation update using the method proposed in [4], while the solid polytopes are those obtained after observation update using the method proposed here. To avoid too many overlapping polytopes in Figure 6, the dashed polytopes are plotted only for the first two steps and the ninth step of the simulation. It can be seen that the real states, plotted by ⊕, are all within the obtained polytopic sets of the proposed approach, and the polytopic sets from state estimation take on a variety of shapes as well. The intersection operation of the propagated zonotopes, as well as the intersection with the observation update at each step, can potentially reduce the complexity of the obtained polytopic set as well as the number of zonotopes needed to represent the polytope exactly. This can be seen directly at the ninth step in Figure 6, where only two zonotopes are needed to represent the obtained polytope exactly. The volume of the set after observation update for the existing method in [4] and the proposed approach is listed in Table I.

TABLE I
THE COMPARISON BETWEEN THE EXISTING METHOD IN [4] AND THE PROPOSED METHOD IN TERMS OF THE VOLUME OF THE OBTAINED SET AFTER OBSERVATION UPDATE

The volume of the set             Method in [4]   New method
Average volume for 9 steps        0.0210          0.0139
Specific volume at the 9th step   0.0247          0.0081

Overall, the obtained polytopic sets after observation update for the proposed approach have an average volume of 0.0139, which is much smaller than the average volume of 0.0210 for the obtained polytopic sets after observation update from the existing method in [4]. Therefore, the average accuracy of state estimation has been improved by 33.81% over these nine steps. In particular, the proposed approach yields a larger improvement of 67.21% at the 9th step, as shown in Table I, which demonstrates the greater benefit of using indirectly implemented polytopic set computation for state estimation. The computation involves the converted LP problems, for which efficient algorithms are available.

VI. CONCLUSIONS

This paper has proposed the novel idea of representing a polytope exactly by the intersection of zonotopes, which enables polytopic set computation for a nonlinear discrete-time system with a polytopic set as the initial state. Such an extension of set computation for nonlinear discrete-time systems from interval and zonotopic set computation to polytopic set computation opens new research directions for set-membership methods. The paper has applied the proposed idea to solve the guaranteed state estimation problem for nonlinear uncertain discrete-time systems. The resulting set-membership state estimation via indirectly implemented polytopic set computation avoids the over-approximating process of bounding the polytopic set at each observation update by a single zonotope, and thus a more accurate state estimation can be obtained.

Fig. 5. The process of guaranteed state estimation.
Identification of gametes and treatment of linear dependencies in the gametic QTL-relationship matrix and its inverse

The estimation of gametic effects via marker-assisted BLUP requires the inverse of the conditional gametic relationship matrix G. Both gametes of each animal can be identified (distinguished) either by markers or by parental origin. By example, it was shown that the conditional gametic relationship matrix is not unique but depends on the mode of gamete identification. The sum of both gametic effects of each animal – and therefore its estimated breeding value – remains however unaffected. A previously known algorithm for setting up the inverse of G was generalized in order to eliminate the dependencies between columns and rows of G. In the presence of dependencies the rank of G also depends on the mode of gamete identification. A unique transformation of estimates of QTL genotypic effects into QTL gametic effects was proven to be impossible. The properties of both modes of gamete identification in the fields of application are discussed.

INTRODUCTION

Fernando and Grossman [2] described how to incorporate genetic markers linked to quantitative trait loci (QTL) into best linear unbiased prediction (BLUP) for genetic evaluation. For this, the inverse of the conditional gametic relationship matrix G is needed. This matrix mirrors the (co-)variances between the QTL allele effects of all animals for a marked QTL (MQTL). For offspring of so-called informative matings, the paternal or maternal origin of gametes can be identified by one or several markers in the surroundings of the QTL. The QTL allele on the paternal (maternal) gamete can then be taken as the first (second) MQTL allele effect of such an individual. Below this is termed "gamete identification by parental origin". An alternative mode of gamete identification has been employed by Wang et al. [21] and Abdel-Azim and Freeman [1]: for an individual with a heterozygous (1, 2) marker genotype, the gamete with the first (1, in alphanumerical order) marker allele is taken to carry the first MQTL allele effect, and the gamete with the other (2) allele the second. This is denoted as "gamete identification by markers". Both modes of gamete identification have been used before in publications dealing with the computation of G and its inverse from pedigrees and marker data. Until now – to the authors' knowledge – the consequences of changing the mode of gamete identification in a marker-assisted BLUP (MA-BLUP) model have, however, not been investigated.

Abdel-Azim and Freeman [1] – based on the results of [2] and [21] – developed a numerically efficient algorithm for the computation of G and its inverse. This algorithm has been tailored to situations where G has full row and column rank and the number of MQTL effects is twice the number of animals in the pedigree. However, under certain circumstances, linear dependencies may occur between gametic MQTL effects, and G may therefore be rank-deficient. This could arise, e.g., from a microsatellite located within an intron (zero recombination rate) of the gene responsible for the QTL, or if double recombinants are ignored for a QTL between two flanking markers [10]. This article first demonstrates by example that G is not unique but depends on the mode of gamete identification, as do the MA-BLUP estimates of gametic MQTL effects. Then a generalization of the Abdel-Azim and Freeman algorithm [1] is developed, allowing for the elimination of linear dependencies in G and its inverse.
MODEL, NOTATION, DEFINITIONS, ASSUMPTIONS

Let us consider the following mixed linear model (gametic effects model)

y = Xf + Zu + ZTv + e,    (1)

where y (m×1) denotes the vector of m phenotypic records for n animals, f (n_f×1) is the vector of fixed effects, u (n×1) is the vector of random polygenic effects, and v (2n×1) is the vector of the random gametic effects (v^1_1, v^2_1, ..., v^1_i, v^2_i, ..., v^1_n, v^2_n)' of a marked quantitative trait locus (MQTL) that is linked to a single polymorphic marker locus (ML). Linkage equilibrium between ML and MQTL is assumed. Observed marker genotypes are denoted by M. X (m×n_f) and Z (m×n) are known incidence matrices and T (n×2n) = I_n ⊗ [1 1], where ⊗ stands for the Kronecker product. Subscripts in parentheses of the vectors and matrices denote their dimensions. Expectations of u, v and e and covariances between them are assumed to be 0. Furthermore, let Cov(u) = σ²_u V, Cov(v) = σ²_v G, Cov(e) = σ²_e R, with the (n×n)-dimensional numerator relationship matrix V, the (m×m)-dimensional residual covariance matrix R, the (2n×2n)-dimensional conditional gametic relationship matrix G, and the variance components σ²_u, σ²_v and σ²_e of the polygenic effects, the effects of the MQTL and the residual effects. Let α^1_i, α^2_i, i = 1, ..., n, denote the two MQTL alleles of individual i having the additive effects v_i = (v^1_i, v^2_i)'; let P(α^k_i ⇐ α^t_j | M) define the probability that the kth allele, k = 1, 2, of individual i descends from the tth allele α^t_j, t = 1, 2, of parent j given the observed marker genotypes M; and let r be the recombination rate between the marker locus and the MQTL. In the following paragraphs let us assume that individuals are ordered such that parents precede their progeny (ordered pedigree).

COMPUTING G AND ITS INVERSE

Abdel-Azim's and Freeman's example [1] is used to demonstrate that G and its inverse are not unique but depend on the mode of gamete identification. With the assumptions made above and a recombination rate r > 0, gamete identification by markers is considered first.

Gametes are identified by markers

Let s and d denote the paternal and maternal parents of animal i. The eight probabilities that the MQTL alleles (α^1_i, α^2_i) of animal i descended from any of the parents' four MQTL alleles, paternal (α^1_s, α^2_s) and maternal (α^1_d, α^2_d), for given observed marker genotypes M, can be written as a matrix Q_i as defined by Wang et al. [21]:

Q_i = [ P(α^1_i ⇐ α^1_s|M)  P(α^1_i ⇐ α^2_s|M)  P(α^1_i ⇐ α^1_d|M)  P(α^1_i ⇐ α^2_d|M)
        P(α^2_i ⇐ α^1_s|M)  P(α^2_i ⇐ α^2_s|M)  P(α^2_i ⇐ α^1_d|M)  P(α^2_i ⇐ α^2_d|M) ].    (2a)

It must be defined what is the first and what is the second MQTL allele in (2a): in heterozygotes (1, 2 at the marker) the first MQTL allele is on the gamete with the first marker allele (1) and the second MQTL allele is on the gamete with the second marker allele (2), as already described in the introduction. In homozygotes, the MQTL alleles cannot be distinguished. The Q_i for the base animals, i.e. animals having no parents in the pedigree, are not defined. Non-base animals have Q_i s whose first and second row sums equal one, as do the sum of the elements of the sire block (first two columns of Q_i) and the sum of the elements of the dam block (last two columns of Q_i).
The Q_i matrices are of key importance, because once these Q_i s have been computed for all individuals in an ordered pedigree, the tabular method [21] can be applied for the construction of G and G^{−1} in (3) and (4) – no matter what method has been used for the computation of the Q_i s before – where f_i is the conditional probability that the two homologous alleles at the MQTL in individual i are identical by descent, given the observed marker genotypes M (the conditional inbreeding coefficient of individual i for the MQTL, given M), which can be calculated according to formula (11) in [21], and A_i is a (2 × 2(i−1))-dimensional matrix constructed by setting the (2s−1)th and (2s)th column equal to the first and second column of Q_i and the (2d−1)th and (2d)th column equal to the third and fourth column of Q_i, with all other elements of A_i being zero, where s and d are the numbers of the sire and the dam of individual i in the ordered pedigree.

Abdel-Azim and Freeman [1] gave an algorithm for the decomposition of G as G = BDB', where B is a lower triangular matrix and D is a block diagonal matrix with the (2×2)-matrices D_i from (4) in the ith block. B can be recursively computed via (5), where I_2 is an identity matrix and A_i is the same matrix as in (3) and (4). The inverse of G can then be calculated as in (6). [1] proposed efficient computational techniques using this decomposition and a sparse storage scheme for G^{−1}. The example of Abdel-Azim and Freeman (see Tab. I in [1]) can be used to demonstrate G (Fig. 1 in [1]) and G^{−1} (p. 162 in [1]) for complete marker data, linkage equilibrium and a recombination rate of 0.10 under gamete identification by markers.

Table I. Example pedigree, marker genotypes from [1] and Q*_i (bold numbers) from (2b), in Q_i notation (2a). (Columns: Animal, Sire, Dam, Marker; table body not reproduced.)

Gametes are identified by parental origin of the marker alleles

When the gametes α^1_i, α^2_i are identified by the parental origin of the marker alleles, the first MQTL allele of animal i is defined as its paternal (α^1_i =_def α^s_i) and the second as its maternal allele (α^2_i =_def α^d_i). Consequently (2a) becomes (2b), and since the row sums of Q_i are equal to 1, only the two parameters P(α^s_i ⇐ α^s_s|M) and P(α^d_i ⇐ α^s_d|M) have to be calculated; Q_i therefore reduces to Q*_i = (Q*^1_i, Q*^2_i). Q*^1_i and Q*^2_i are known as transition probabilities in QTL analysis.

In contrast to gamete identification by markers (2a), the gametes of base animals cannot be uniquely identified and the paternal or maternal origin of the marker alleles of all base animals remains uncertain when (2b) is applied. With a probability of 0.5 the first marker allele may be of paternal or maternal origin, and likewise the second. This fact creates differences in the Q_i matrices and, as a consequence, differences in G and its inverse if gamete identification by parental origin is used. The same is true for heterozygous offspring of uninformative matings.

For illustration, let us consider animal 5 in Table I in [1] and Table I of this paper. Animal 5 has the marker genotype A_1A_1 and is offspring of animal 3 (sire, A_1A_2) and animal 4 (dam, A_1A_2). It is evident that animal 5 has inherited A_1 from both parents. With definition (2a), this is the first allele of the sire and the first of the dam, but because of the homozygosity, each of the two A_1 alleles in animal 5 can be the first or the second marker allele.
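As an illustration, the tabular construction of G from the Q_i s can be sketched in R as follows. The recursion used here – off-diagonal rows A_i G_{i−1} and diagonal blocks built from the conditional inbreeding coefficient f_i – follows the standard tabular method of [21] as summarized above; the input conventions (a data frame ped with sire/dam columns using NA for base animals, a list Q of 2×4 matrices, and a vector f) are our own.

```r
# Sketch of the tabular method: build G recursively over an ordered pedigree.
build_G <- function(ped, Q, f) {
  n <- nrow(ped)
  G <- matrix(0, 2 * n, 2 * n)
  for (i in seq_len(n)) {
    rows <- (2 * i - 1):(2 * i)
    s <- ped$sire[i]; d <- ped$dam[i]
    if (!is.na(s)) {                          # non-base animal
      A_i <- matrix(0, 2, 2 * (i - 1))        # scatter Q_i into parental columns
      A_i[, (2 * s - 1):(2 * s)] <- Q[[i]][, 1:2]   # sire block
      A_i[, (2 * d - 1):(2 * d)] <- Q[[i]][, 3:4]   # dam block
      cov_prev <- A_i %*% G[1:(2 * i - 2), 1:(2 * i - 2), drop = FALSE]
      G[rows, 1:(2 * i - 2)] <- cov_prev
      G[1:(2 * i - 2), rows] <- t(cov_prev)
    }
    G[rows, rows] <- matrix(c(1, f[i], f[i], 1), 2, 2)  # diagonal block
  }
  G
}
```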
Thus under (2a), Q_5 must be determined as a product of two matrices, where the first matrix of the product contains the probabilities of descent for the marker alleles and the second involves the recombination rate r = 0.1; both formulas for Q_5 share this structure. Now we use definition (2b) and the fact that the sire of 5 is base animal 3. Hence in individual 3, A_1 can be maternal or paternal with probability 0.5. The dam of animal 5 is not a base animal, so it is clear that A_1 is the paternal allele of the dam; in (2b) notation, Q*_5 = (0.5, 0.9). The complete set of Q*_i s (2b) in their Q_i notation (2a) for the Table I data in [1] under gamete identification by parental origin can be found in Table I. With the Q_i notation of the Q*_i, the algorithm of [21] and [1] can also be applied for computing the conditional gametic relationship matrix (for the non-zero elements of this matrix see (E 1)) and its inverse (for the non-zero elements see (E 2)).

Comparing Figure 1 in [1] with (E 1), or the matrix on page 162 in [1] with (E 2), there are some differences in G and G^{−1}. The G matrix in [1] is of full rank and has 128 non-zero elements; G in (E 1) is of full rank, too, but it only has 106 non-zero elements. The numbers of non-zeros in the corresponding inverses are 74 (p. 162 in [1]) versus 58 (E 2). With w = Tv, model (1) can be written as the MQTL genotypic effects model y = Xf + Zu + Zw + e, with the (n×1)-vector w of genotypic effects at the MQTL of the n animals. Both different conditional gametic relationship matrices (Figure 1 in [1] and (E 1)) lead to the same conditional genotypic relationship matrix [19] (non-zero elements in (E 3)). As a consequence, the resulting genotypic effects w are independent of the variant of G, and the same is true for the polygenic effects and the total breeding values of all animals.

LINEAR DEPENDENCIES IN G AND RULES FOR ELIMINATING THEM

As already mentioned, the recombination rate r between the MQTL and the marker may be zero for certain applications. Therefore we re-examine the example from Table I in [1] using gamete identification by markers, but now with a recombination rate of r = 0. The corresponding Q_i s can be found in Table II. With the Abdel-Azim and Freeman algorithm [1] the G matrix can be calculated, but it has dependent rows and columns (e.g. identical rows/columns 8, 12 and 14, see (E 4)). The computation of G^{−1} fails because of the dependencies in G. These dependencies are indicated by det(D_i) = 0 for individuals i = 5, 6, 7, and consequently, D_i^{−1} in (4) or (6) does not exist for these individuals.

The dependencies in G are caused by the configuration of the Q_i s. The problem-creating Q_i matrices in the example are Q_5, Q_6 and Q_7 in Table II.

Table II. Example pedigree, marker genotypes from [1] and Q_i for a recombination rate of r = 0. (Table body not reproduced.)

Q_6 and Q_7 imply dependencies of the gametic effects of animals 6 and 7 on those of their parents (cf. the identical rows/columns in (E 4)). Q_5 contains the information that animal 5 has received the sire's first MQTL allele and the dam's first MQTL allele, but it is not known which of these alleles is the first and which is the second in animal 5. Therefore Q_5 can be written as the average

Q_5 = 0.5 · ( [1 0 0 0; 0 0 1 0] + [0 0 1 0; 1 0 0 0] ),

and the corresponding gametic effects can be combined accordingly. Hence the number of gametic effects in model (1) can be reduced to a smaller set of different effects without dependencies in a corresponding 'condensed' gametic relationship matrix G*.
How the configuration of the Q_i s can be used in a 'condensing' algorithm for the gametic effects and the computation of the 'condensed' gametic relationship matrix G* and its inverse is outlined in detail in the following section.

Let v* denote the n*-dimensional vector of the n* remaining components of v, and let L be a (2n × n*)-dimensional matrix with row sums equal to 1 such that v = Lv*. Therewith, model (1) can be written as

y = Xf + Zu + ZTLv* + e.

The determination of the n* remaining components of v is part of the condensing algorithm. It is assumed that the Q_i matrices (2a) have already been computed for all animals and that the pedigree is ordered such that parents precede their progeny. Let SQ_i = (SQ^1_i, SQ^2_i, SQ^3_i, SQ^4_i) denote the (1×4)-vector of the column sums of Q_i. SQ^1_i = 1, for example, means that animal i has received the first MQTL allele of its sire and therefore SQ^2_i = 0. If there is a one in the first or second row of the first column of Q_i, the place of this allele in i is the number of the row containing the one. Define N = ((N_{i,j})), i = 1, ..., n; j = 1, 2, an (n×2)-dimensional integer matrix with the indices of the remaining gametic effects v* of the n animals, and N_i = (N_{i,1}; N_{i,2}) the ith row of N. Let n_b be the number of base animals at the top of the pedigree, which are considered to be unrelated and non-inbred, and let n^max_i denote the largest index assigned after processing animal i.

The algorithm consists of four parts: the generation of the index matrix N, the determination of the matrix L, the calculation of the condensed gametic relationship matrix G*, and finally, the computation of its inverse. It is independent of the mode of gamete identification and can be used with Q_i definition (2a) as well as with Q_i definition (2b).

First part of the algorithm: Generation of the index matrix N
For i ≤ n_b (base animals), new indices are assigned to both gametes. For i > n_b (non-base animals) and k, j = 1, 2, the indices are assigned according to the configuration of Q_i, where N_{s(i),k} is the index of the kth MQTL allele (k = 1, 2) of the sire s(i) and N_{d(i),j} is the index of the jth MQTL allele (j = 1, 2) of the dam d(i) of animal i (see Table III).

Second part of the algorithm: Determination of the incidence matrix L
For each animal i (i = 1, ..., n) there are two rows in L. Let L_{2i−1,t} denote the elements of the first row and L_{2i,t} (t = 1, ..., n*) those of the second. The rules determining the non-zero elements of L distinguish the cases t = N_{i,1} and t = N_{i,2}, where 1 ∉ Q_i means that no element of Q_i equals one.

Third part of the algorithm: Calculation of the condensed gametic relationship matrix G*
The condensed gametic relationship matrix G* with full rank n* can be computed by the following generalization of the tabular method of [21]: G*_1 = G_1 and, for i = 2, ..., n, the recursion uses A*_i, a (2 × n^max_{i−1})-dimensional matrix whose columns N_{s(i),1}, N_{s(i),2}, N_{d(i),1}, N_{d(i),2} are identical with the first, second, third and fourth column of Q_i, in this order, with zero elements otherwise.

Fourth part of the algorithm: Computation of the inverse of the condensed gametic relationship matrix G*
The inverse G*^{−1} of the condensed gametic relationship matrix G* can be determined by generalizing the tabular method of [21] in an analogous way: G*^{−1}_1 = G^{−1}_1 and, for i = 2, ..., n, the update depends on whether one row and column or two rows and columns are added. The calculation of d*_i and D*_i can be simplified accordingly.

We continue with animal 7 for illustration of (10). With G*^{−1}_6 we only have to compute d*_7 to get G*^{−1}_7 = G*^{−1} (see (E 7) for the non-zero elements of the inverse G*^{−1}), with A*^1_7 = (0, 0, 0, 0, 0.5, 0, 0.5, 0, 0) from above.
With w = Tv, v = Lv*, and the relation Q_G = 0.5·TGT' = 0.5·T(LG*L')T' between the conditional genotypic relationship matrix Q_G and the conditional gametic relationship matrices G and G*, it is easy to verify that G* from (E 6) and G from (E 4) result in the same conditional genotypic relationship matrix.

Again, we consider the situation where the gametes α^1_i, α^2_i are identified by parental origin for the Table II data and calculate Q*_i according to (2b). The Q*_i s for the non-base animals are Q*_4 = (0.5, 0.5), Q*_5 = (0.5, 1.0), Q*_6 = (0.5, 0.0) and Q*_7 = (0.5, 0.0). Applying the condensing algorithm to these data, the condensed gametic relationship matrix G* (see (E 9) for non-zero elements) and its inverse G*^{−1} (see (E 10) for non-zero elements) can be calculated recursively. In contrast to (E 5) (10 remaining effects) there are now 11 effects left. The differences in the condensed gametic relationship matrices ((E 6) vs. (E 9)) and their inverses ((E 7) vs. (E 10)) are evident. G* in (E 6) is of rank 10 and has 34 non-zero elements; G* in (E 9) is of rank 11 and has 43 non-zero elements. The corresponding inverses are of rank 10 with 32 non-zeros (E 7) versus rank 11 with 39 non-zero elements (E 10). But again the matrices of (E 6) and (E 9) result in the identical genotypic relationship matrix (E 8), which can easily be verified. This means that the number and size of the estimates of gametic MQTL effects depend on the mode of gamete identification, but the sum of these effects remains unaffected for each animal.

DISCUSSION

Gamete identification by parental origin and gamete identification by markers have already been used earlier in the literature without a clear distinction [1, 2, 14, 21]. In this article it was shown for the first time, at least to the authors' knowledge, that both identification methods result in different conditional gametic relationship matrices and different estimates of gametic effects in an MA-BLUP model. It could, however, be demonstrated that the MA-BLUP breeding value – the estimated sum of the QTL gamete effects and the polygenic effect – remains the same irrespective of the method of gamete identification.

A practical advantage of identification by parental origin is that both the sire block and the dam block of the Q_i matrices (2a) can each be represented by a single number, namely the probability that the paternal allele of the sire and of the dam, respectively, has been transmitted to the descendant. The reason for this is that each block (sire and dam) of the Q_i matrices has only two non-zero entries, which sum to one, if the Q_i matrices reflect gamete identification by parental origin. The alternative mode of gamete identification needs three probabilities for each block, because the number of non-zero entries per block (again summing to one) is four when gamete identification is done by markers. It may be worthwhile to store the Q_i matrices for all animals in addition to the marker raw data, as a summary of the marker information and an intermediate result in computing the G matrix and its inverse, and also for other purposes, such as the computation of measures of marker information content. Though six numbers per animal will not be prohibitive to store even with tens of thousands of animals, identification by parental origin needs only two and is therefore easier to administer.
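The invariance of the genotypic relationship matrix can be checked numerically; the following sketch verifies the relation Q_G = 0.5·T(LG*L')T' for arbitrary placeholder inputs (the toy L and G* below are random and are not the matrices of (E 6) or (E 9)).

```r
# Numerical check of Q_G = 0.5 * T (L G* L') T' with toy inputs.
set.seed(42)
n <- 4; n_star <- 6
T_mat <- kronecker(diag(n), matrix(c(1, 1), nrow = 1))           # T = I_n (x) [1 1]
L <- matrix(0, 2 * n, n_star)
L[cbind(1:(2 * n), sample(n_star, 2 * n, replace = TRUE))] <- 1  # row sums equal 1
G_star <- crossprod(matrix(rnorm(n_star^2), n_star))             # symmetric, p.s.d.
G <- L %*% G_star %*% t(L)                                       # expanded G = L G* L'
Q_G <- 0.5 * T_mat %*% G %*% t(T_mat)                            # genotypic matrix
```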
The genotypic relationship matrix Q_G in the MQTL genotypic effects model may either be determined by deterministic methods [1, 2, 9, 13, 21] or by Markov chain Monte Carlo (MCMC) [5, 12, 16–18, 20], and their advantages and disadvantages have been investigated by several authors [11, 13]. MCMC has been implemented in the LOKI program [6] in order to compute Q_G, but currently LOKI cannot be used to compute G for an MQTL allelic effects model. Though not primarily designed for this purpose, SimWalk2 [15] can be employed to achieve this goal: it reports MCMC estimates of all 15 detailed identity state probabilities [4] between any pair of animals in the pedigree. Q_i matrices can be derived from the SimWalk2 output, implicitly using gamete identification by parental origin. This follows from the definition of identity states in the SimWalk2 software [15].

The MA-BLUP breeding value of each animal in (1) equals the sum of the MQTL genotypic effect and the polygenic effect. Thus it would be sufficient to use an MQTL genotypic effects model and to save one equation per genotyped animal. Since the MQTL genotypic effect is the sum of the first and the second MQTL allele effects, one positive and one negative gametic effect may give rise to the same genotypic effect as two gametic effects of average size. It would be interesting for breeders to know how an MQTL genotypic effect is composed, and it is therefore desirable to have estimates of the gametic effects available. A conclusion of the considerations above is that a certain mode of gamete identification has to be chosen. It may be more natural for animal breeders to think in pedigrees and parental origin rather than in marker haplotypes, and therefore gamete identification by parental origin may be preferred for this purpose.

It has been proposed by [11] to estimate the MQTL genotypic effects w and to transform these into allelic effects v. If such a transformation existed, it would have to be specific for a certain definition of v, i.e. it would depend on the mode of gamete identification. The conclusion of [11] that v = 0.5·GT'(Q_G(n×n))^{−1}w follows from Tv = w = TGT'(TGT')^{−1}w = 0.5·TGT'(Q_G(n×n))^{−1}w is, however, not possible, because a left-inverse of T, say T^{(left)−}_{(2n×n)}, does not exist: its element t_{1,1} cannot take the values 1 and 0 at the same time, and consequently T^{(left)−}_{(2n×n)} T_{(n×2n)} never equals the identity matrix.
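The impossibility argument can also be phrased as a rank statement, illustrated below: T = I_n ⊗ [1 1] is an (n × 2n) matrix of rank n, whereas a left-inverse would require full column rank 2n.

```r
# Rank argument: T = I_n (x) [1 1] has rank n, not 2n, so no left-inverse exists.
n <- 3
T_mat <- kronecker(diag(n), matrix(c(1, 1), nrow = 1))  # n x 2n
qr(T_mat)$rank                                          # n = 3, but 2n = 6 would be needed
```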
Both modes of gamete identification fail for some animals. The gametes of homozygous individuals (both base and non-base) cannot be distinguished by markers. On the contrary, gamete identification by parental origin fails in all founders and in all non-founders from non-informative matings, where it cannot be deduced from the marker analysis whether the marker was inherited from the dam or from the sire. As the number of markers is increased, the probability for a mating to be non-informative for all markers, as well as the probability for an individual to be homozygous at all marker loci, becomes smaller and smaller. Multiple markers will therefore help to identify nearly all gametes unequivocally. This is, however, not true for founder animals when gamete identification is done by parental origin. This problem can, however, be resolved by identifying founder gametes by markers and by arbitrarily denoting one of the two gametes of each founder animal as maternal and the other as paternal.

By applying this rule to the example pedigree [1] it can be shown that a third variant of the conditional gametic relationship matrix results, which differs from the two others demonstrated above (data not shown), but again transforms to the same Q_G matrix and, consequently, leads to the same MA-BLUP breeding values.

Gamete identification by markers may be of interest in applications where the intention is to test a certain polymorphism for linkage disequilibrium with a QTL. With two alleles, and using this polymorphism for gamete identification, all gametes with allele 1 are treated as the first and all gametes with allele 2 as the second gamete of heterozygous animals. The expectation is that, if the polymorphism is in strong linkage disequilibrium with the QTL, the differences between the first and the second gametic effect of heterozygotes will exhibit the same sign and roughly the same size, provided there is sufficient accuracy of both gametic estimates.

A reduction of the size of the conditional gametic relationship matrix has already been proposed by [10]: parents and offspring sharing the same marker haplotype were treated as sharing the same QTL allele, by assuming a zero probability of double recombinations. In these cases the same gametic QTL effect was assigned to the parent and the offspring. When the offspring has received a recombinant marker haplotype, a new gametic QTL effect was defined as the average of both gametes of the parent animal. Treating these two gametes as parents of the new gamete allows one to set up a pedigree of gametic effects and to compute the conditional gametic relationship matrix and its inverse simply by applying the Henderson rules [7, 8]. The condensing algorithm will, of course, lead to identical results, if desired: sire- and dam-blocks of progeny with non-recombinant parental haplotypes have to carry zeros and ones only, and in the recombinant case the corresponding sire- and dam-blocks are assembled from fifty-percent transition probabilities, as in the case without markers. The Meuwissen and Goddard proposal [10] therefore combines a special case of the condensing algorithm with gamete identification by parental origin. Assuming the QTL in the middle of a certain marker interval and rounding the QTL transition probabilities to either one or zero if they are closer to these values than a predefined threshold, e.g. 0.02, will give the same results as in [10] for those animals that are informative at the markers flanking this particular interval. For the other animals, information from markers more remote and asymmetrically distributed around the QTL's assumed home interval may be available. In these cases the probability of double recombinations may be too high to be neglected and, furthermore, the assumption of an equal transmission probability of the first and second parental allele may be unrealistic in the light of the markers transmitted. These animals can easily be combined with the former group by maintaining the original transition probabilities in the Q_i matrices without rounding and then applying the condensing algorithm to all pedigree members in the same way, regardless of any previous rounding.

In conclusion, the condensing algorithm is a generalization of the Abdel-Azim and Freeman algorithm [1] for computing the conditional gametic relationship matrix and its inverse. Although it has been suggested otherwise by other authors, computing this inverse cannot be avoided if estimates of gametic QTL effects are desired.
The condensing algorithm can be applied to different modes of gamete identification and to situations with and without markers, including X-linkage, clones and haplodiploid pedigrees. The treatment of haplotypes according to the proposals of [10] is covered as a special case and can be combined with the exact treatment of more remote marker information.
The Automated Bias-Corrected and Accelerated Bootstrap Confidence Intervals for Risk Measures

Different approaches to determining two-sided interval estimators for risk measures such as Value-at-Risk (VaR) and conditional tail expectation (CTE) when modeling loss data exist in the actuarial literature. Two contrasting methods can be distinguished: a nonparametric one not relying on distributional assumptions and a fully parametric one relying on standard asymptotic theory to apply. We complement these approaches and take advantage of currently available computer power to propose the bias-corrected and accelerated (BCA) confidence intervals for VaR and CTE. The BCA confidence intervals allow the use of a parametric model but do not require standard asymptotic theory to apply. We outline the details of determining interval estimators for these three different approaches using general computational tools, as well as with analytical formulas when assuming the truncated Lognormal distribution as a parametric model for insurance loss data. An extensive simulation study is performed to assess the performance of the proposed BCA method in comparison to the two alternative methods. A real dataset of left-truncated insurance losses is employed to illustrate the implementation of the BCA-VaR and BCA-CTE interval estimators in practice when using the truncated Lognormal distribution for modeling the loss data.

INTRODUCTION

Interval estimation, in addition to point estimation of the population parameters, represents an important foundation of statistical inference. Interval estimators are commonly referred to as "confidence intervals." The confidence interval expresses the uncertainty that exists between the estimate and the true population parameter given a certain confidence level. Many inferential statistical methods to construct confidence intervals in actuarial science applications build on asymptotic theory.

There are two main standard approaches to determine confidence intervals: (1) a nonparametric and (2) a parametric asymptotic approach. The nonparametric approach determines the confidence interval based on the empirical cumulative distribution function (ecdf), relying on the fact that the ecdf is guaranteed to converge to the underlying cdf (refer to the Glivenko-Cantelli theorem; Gnedenko 1950; Tucker 1959). In the finite sample case, the nonparametric approach thus relies on assuming that the ecdf approximates the underlying cdf of the independent and identically distributed (i.i.d.)
data well. In the parametric case, the cdf is specified by a parametric model where only a vector of parameters needs to be estimated from the data, and the fitted distribution can then be used to determine the quantity of interest; for example, the risk measure. Parameter estimation can be performed using general maximum likelihood (ML) theory, and for i.i.d. data the ML estimates are asymptotically normally distributed, with the uncertainty being captured by the Fisher information matrix. Point estimates as well as interval estimates of the risk measures are then determined conditionally on the fitted model; for example, using the quantile function or the conditional expectation, given the implied parametric distribution, as well as combining the results from standard ML theory with the Delta method to obtain uncertainty estimates. This approach heavily relies on the asymptotic normal distribution of the estimators; that is, the distribution obtained for a sample containing infinitely many observations. When determining confidence intervals of risk measures such as Value-at-Risk (VaR) and conditional tail expectation (CTE), it might be particularly questionable to rely on the assumption that the asymptotic distribution is suitable for samples of rather limited size, given the nature of insurance losses, which exhibit a large departure from normality.

Determining interval estimators for the CTE has previously been discussed in the actuarial literature. Manistre and Hancock (2005) focused on the nonparametric estimator of the CTE and determined a suitable variance estimate of the CTE estimator, which they then plugged into a confidence interval estimate of the form point estimate ± quantile of the normal distribution times the standard error estimate. Brazauskas et al. (2008) considered nonparametric and parametric approaches to estimate the CTE. In the nonparametric case, estimation of the confidence intervals was also based on estimates of the standard error, where they proposed to either use the bootstrap or base the estimate on order statistics of the sample. In addition, they considered a parametric approach where they focused on three specific parametric models. Each of these parametric models contained a single parameter only and constituted shifted versions of the Exponential, Pareto, and Lognormal distributions. The parameters were estimated using ML, and the standard error of the estimates was obtained using the Delta method to then derive confidence intervals. An alternative to ML estimation was introduced by Brazauskas, Jones, and Zitikis (2009) as the so-called method of trimmed moments (MTM). Using MTM, one removes a small predetermined proportion of extreme observations before parameter estimation to robustify parameter estimation. In the following we focus on ML estimation but would like to point out that in the presence of outliers, MTM could be a preferable parameter estimation method.
Focusing on one-parameter families only in the parametric approach seems overly restrictive. For example, in many actuarial applications, the two-parameter Lognormal distribution is needed to obtain a good fit for the loss data (see Blostein and Miljkovic 2019). When additional constraints are added to this model, such as left truncation or censoring (i.e., driven by the policy regulations), obtaining analytical solutions for the risk measures becomes extremely tedious and may not be attractive in practice. This may be one of the reasons why many studies in the area of risk measures focus on curve fitting with the point estimation of risk measures without looking at their uncertainty. Even though researchers recognize that the risk measures are subject to uncertainty, the interval estimation of risk measures seems to be a less popular research area. Thus, one of the goals of this study is to explore alternative computational solutions for obtaining parametric asymptotic confidence intervals for VaR and CTE. With the use of computer power and general computational tools in R (R Core Team 2021), we are able to substitute analytical work related to the Delta method with computational solutions to improve the efficiency of the implementation and increase awareness for practical use of these computational tools.

As a part of the computational effort, this study introduces a new approach to building the confidence intervals of risk measures based on the bias-corrected and accelerated (BCA) method previously introduced and refined in a sequence of publications by Efron (1979), Tibshirani (1988), Efron and Hastie (2016), and Efron and Narasimhan (2020a). This BCA method allows for the implementation of three corrections to the standard confidence interval. These corrections account for nonnormality, the bias, and the nonconstant standard error of the bootstrap distribution, which is often not symmetric in the case of risk measures. Thus, we propose the new BCA-VaR and BCA-CTE confidence intervals that build on the BCA method for the confidence interval estimation of risk measures by taking advantage of the R package bcaboot (see Efron and Narasimhan 2020b). The proposed BCA confidence intervals should perform better for small to moderate sample sizes or in the case of violations of the asymptotic normality assumptions, when the parametric asymptotic intervals as well as the nonparametric intervals are not fully reliable.

Considering BCA confidence intervals complements other approaches discussed in the extensive literature available for interval estimation of the risk measures VaR and CTE. We refer the reader to the books by Serfling (1980) and DasGupta (2008) for general topics and to additional publications by Hosking and Wallis (1987), Brazauskas and Kaiser (2004), Manistre and Hancock (2005), Kaiser and Brazauskas (2006), Kim and Hardy (2007), Brazauskas et al. (2008), Miljkovic, Causey, and Jovanović (2022), and many others. In comparison to these other approaches, BCA confidence intervals are appealing for their generic nature, allowing for straightforward application to different insurance loss models while requiring fewer assumptions than standard asymptotic theory.
In conclusion, our study focuses on the following aims: (1) introduce the BCA method in the computation of interval estimators for risk measures and evaluate its performance relative to the nonparametric and parametric asymptotic alternatives and (2) assess the performance of the confidence intervals determined based on the analytical formulas compared to those obtained using generic computational tools. For those approaches requiring a parametric model, we assume that the data come from a left-truncated Lognormal distribution when addressing these two aims. We hope that this model will serve as a point of reference for many other parametric models to be considered in future implementations.

We proceed as follows. In Section 2, we discuss the proposed BCA method for risk measures in general and its specific implementation assuming a left-truncated Lognormal distribution for the insurance loss data. Existing methods for nonparametric and parametric asymptotic interval estimators are reviewed in Section 3. This section also provides new analytical formulas to determine the parametric asymptotic interval estimators for the left-truncated Lognormal distribution. In Section 4, we show the application of the proposed BCA confidence intervals to the left-truncated automobile claims of the Secura Re losses. Section 5 assesses the performance of the three methods considered in the confidence interval estimation through several different simulation settings and two different implementation methods. Section 6 provides concluding remarks.

METHODOLOGY

2.1. Background

Statistical methods relying on asymptotic normality of the parameter estimate construct a standard confidence interval for the parameter of interest θ in the following way:

θ̂ ± z_{α/2} · σ̂,    (2.1)

where θ̂ is a point estimate, σ̂ is an estimate of the standard error of the point estimate, and z_{α/2} is the (α/2)th quantile of the standard normal distribution. For α = .05 we expect this interval to have approximately 95% coverage. However, in many applications this might not be the case, because the accuracy of Equation (2.1) can be of concern. Efron (1987) and DiCiccio and Efron (1996) showed that the standard intervals are first-order accurate, having the error in their claimed coverage probability going to zero at rate O(1/√n), where n denotes the sample size. Bootstrap confidence intervals have better performance because they are shown to be second-order accurate, having the error in their claimed coverage probability going to zero at rate O(1/n). Compared to the standard confidence intervals, suitable bootstrap-based confidence intervals are able to correct (1) for nonnormality of θ̂, (2) for bias of θ̂, and (3) for nonconstant standard error of θ̂. In particular, confidence intervals for risk measures might be suspected to suffer from these issues. Bootstrap confidence intervals offer an improvement from the first to the second order in accuracy, and in the estimation of the extreme quantiles, all three corrections may have a substantial effect on the results (see Kim and Hardy 2007; Brazauskas et al. 2008).
To include these three corrections in the confidence interval construction, Efron (1987) proposed the BCA level-α endpoints of the confidence intervals defined as

θ̂_BCA[α] = Ĝ^{−1}( Φ( ẑ_0 + (ẑ_0 + z_α) / (1 − â(ẑ_0 + z_α)) ) ),    (2.2)

where Ĝ(·) represents the bootstrap distribution of θ̂, which accounts for nonnormality of θ̂; ẑ_0 is the bias correction; and â is the acceleration that corrects for a nonconstant standard error. Here, Φ(·) denotes the cdf of the standard normal distribution. When ẑ_0 and â are equal to zero, the lower and upper bounds of the confidence interval, based on the bootstrap distribution, coincide with the values obtained from the percentile method.

In most cases, these three quantities are obtained based on simulations. When the assumptions required for applying the parametric approach are questionable (as they often are; see Micceri 1989), the BCA method constitutes an alternative approach for deriving confidence intervals that – in combination with the suitability of the parametric model assumed – only relies on the assumption that the available data represent a random and representative sample from the population of interest, allowing for a suitable estimation of the cdf (see Kelley 2005; Efron and Narasimhan 2020a).

Preliminaries

Consider a set of realizations of insurance severity claims, that is, x = {x_i : x_i > 0}, where the x_i are observations from i.i.d. random variables that come from an unknown probability distribution F on a space X,

X_i ~ F, for i = 1, 2, ..., N.    (2.3)

In general, this set of realizations is not observed because the claims are subject to some policy restriction; for example, all claims below a level b, for b > 0, are not observed because claims are only reported above the level b. This leads to the observed dataset consisting of the subset of x subject to the restriction b, denoted as x_b and defined as x_b = {x_i : x_i > b}. These observed data follow an unknown modified probability distribution F_b(ψ) on the space X, with n being the sample size of x_b. We refer to this data transformation process as truncation from below, because the range of claim values is restricted on the left side of the interval; that is, the support is given by [b, ∞), and the point of restriction b is referred to as the truncation point. The modified unknown distribution F_b is referred to as the left-truncated distribution. The subscript b is introduced to indicate the left truncation and to distinguish between the two distributions under discussion. The fact that X is a one-dimensional space with F_b(ψ) being the cdf of a truncated random variable from a parametric family with parameter ψ makes this a one-dimensional problem of suitably modeling univariate insurance claims.

In the following, we are interested in proposing the BCA methods for the automatic construction of confidence intervals for the risk measures (VaR and CTE) associated with specific upper quantiles and conditional expectations of F_b(ψ). The aim is to obtain reliable confidence intervals that efficiently handle bias correction and perform well for skewed and heavy-tailed insurance data, where the reliance on asymptotic theory assumptions might be questionable.
Proposed Algorithm

We are interested in determining a confidence interval for a real-valued parameter or measure θ. We have a function t(·) that estimates θ given the observed data x_b as follows:

θ̂ = t(x_b).

Any function t(·) that obtains suitable point estimates for θ given the data x_b might be considered. For example, a parametric model could be assumed, with the parameters estimated using ML and the parameter θ then determined based on the fitted parametric model. In this article, we focus on a set of real-valued statistics θ ∈ {π_p, g_p}, where π_p represents the VaR and g_p is equal to the CTE at a given security level p, subject to 0 < p < 1.

A nonparametric bootstrap sample x*_b is composed of n random draws with replacement from the set of original observations x_b. Bootstrap replications for π_p and g_p are obtained by drawing i.i.d. bootstrap samples and determining the risk measures B times, which results in π̂*_{pj} = t_1(x*_{bj}) and ĝ*_{pj} = t_2(x*_{bj}) for j = 1, 2, ..., B. The bootstrap estimates of the standard errors are the empirical standard deviations of the π̂*_{pj} and ĝ*_{pj}, where π̂*_{p·} and ĝ*_{p·} represent the means of the bootstrap estimates for π_p and g_p, respectively. The vectors of bootstrap replications form the basis for estimating G (based on the ecdf) and ẑ_0 (the quantile of the standard normal distribution for θ̂ evaluated at Ĝ). Both of these quantities are used in Equation (2.2).

To obtain an estimate for the acceleration a, the jackknife (or leave-one-out procedure) differences are obtained. For the two risk measures for which we are interested in developing the confidence intervals, they are given by

d_i = π̂_{p(·)} − π̂_{p(i)}  and  d_i = ĝ_{p(·)} − ĝ_{p(i)},    (2.4)

where π̂_{p(·)} and ĝ_{p(·)} denote the averages of the respective jackknife estimates. The jackknife estimates π̂_{p(i)} = t_1(x_{b(i)}) and ĝ_{p(i)} = t_2(x_{b(i)}) are obtained on the dataset x_{b(i)} of size n − 1 after removing x_i from x_b. The jackknife differences computed by Equation (2.4) provide an estimate of the acceleration rate a (refer to Efron and Hastie 2016, 194) as

â = Σ_{i=1}^n d_i³ / ( 6 [Σ_{i=1}^n d_i²]^{3/2} ),

with d_i computed for π_p or g_p for the respective confidence intervals, denoted as BCA-VaR and BCA-CTE. Efron (1987, section 10) shows that for one-parameter families, one-sixth of the skewness estimate is an excellent estimate for the acceleration a.

Implementation

The general algorithm to determine BCA confidence intervals as outlined in Section 2.2.2 and proposed in Efron and Narasimhan (2020a) is implemented in the R (R Core Team 2021) package bcaboot (Efron and Narasimhan 2020b). The package provides the function "bcajack()", which returns the endpoints of the BCA α-confidence intervals for a given vector of α values. The function requires as input the dataset x_b, the number of bootstrap replications B, as well as a function that, given the data, returns the point estimate of the risk measure; for example, t_1() for BCA-VaR and t_2() for BCA-CTE. An outline of the computations required for the BCA confidence intervals is given in Algorithm 1.

Algorithm 1: Computation of BCA-VaR and BCA-CTE confidence intervals.
Data: x_b. Input: the number of bootstrap replications B, a security level p, and a confidence level α.
for j = 1, ..., B do
  Sample with replacement from x_b to obtain x*_{bj}.
  Estimate the risk measure using t_l(x*_{bj}).
end
for i = 1, ..., n do
  Leave out observation x_i to obtain x_{b(i)}.
  Estimate the risk measure using t_l(x_{b(i)}).
end
Determine ẑ_0, â, and Ĝ(·) and apply (2.2) using α to obtain the bounds.
Result: Lower and upper bound of the (1−α) confidence interval for BCA-VaR and BCA-CTE.
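A minimal call for the BCA-VaR interval might look as follows. The simulated Lognormal sample and the purely empirical quantile used as t_1() are simplifications for illustration (the proposed t_1() instead fits the left-truncated Lognormal model first, as described next), and the returned component lims holds the interval endpoints.

```r
# Minimal BCA-VaR example with bcaboot; t1 here is a plain empirical quantile.
library(bcaboot)
set.seed(1)
x_b <- rlnorm(250, meanlog = 0, sdlog = 1)        # illustrative loss sample
t1  <- function(x) as.numeric(quantile(x, 0.95))  # point estimate of VaR at p = 0.95
res <- bcajack(x = x_b, B = 2000, func = t1)
res$lims                                          # BCA endpoints by alpha level
```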
In the following, we assume that the functions t_1(·) and t_2(·) implement the derivation of the point estimates of the risk measures in two steps: (1) fitting a parametric model to the data using ML estimation and (2) determining the risk measures given the parametric model together with the parameter estimates. For a sample x_b = (x_1, ..., x_n)', Step 1 consists of determining the maximum likelihood estimate (MLE) ψ̂:

ψ̂ = argmax_ψ Σ_{i=1}^n log f_b(x_i; ψ),

where f_b(·) is the probability density function for the cdf F_b(·). In Step 2, the risk measures are determined given the parametric model, the MLE ψ̂, and the security level p.

A generic approach to implementing these functions requires as input (1) the probability density function (pdf) of the parametric model and (2) a quantile function or a conditional expectation function for the parametric model to obtain the risk measures. The MLE for the parametric model can be obtained using a general-purpose optimizer (e.g., "optim()" in R) after defining the log-likelihood function using the pdf, with the dataset as well as initial parameter values as input. Alternatively, for some parametric models, closed-form analytical formulas might be available to obtain the MLE. We would like to note that Poudyal (2021) also derived the MLE and Fisher information for the mean and variance of the left-truncated Lognormal distribution. However, the author did not consider estimating the risk measures. Instead of providing a quantile function and a conditional expectation function, computational tools might also be exploited to determine the risk measures based on the parametric model and the MLE. In this case, only the pdf and cdf of the parametric model given the parameter values are required. The quantile can then be obtained using the cdf together with a root-finding algorithm; for example, using the function "uniroot()" in R. "uniroot()" requires the function as input as well as an interval of finite length, preferably containing the root. In case the specified interval does not contain the root, the limits can be extended automatically by "uniroot()" (via its extendInt argument). In this way, the VaR is obtained. The CTE may then be determined using numeric integration based on the pdf and using suitable integral limits implied by the VaR; for example, using the R function "integrate()". For a sketch of this generic implementation of t_l(·), l = 1, 2, see Algorithm 2.

Algorithm 2: Generic implementation of the functions t_1(·) and t_2(·).
Data: x_b. Input: pdf and cdf of the parametric model, initial parameter values ψ^(0), the security level p, and a confidence level α.
Step 1: Determine the MLE ψ̂ given the data x_b using a general-purpose optimizer with the initial parameter values ψ^(0) as starting values and the log-likelihood function based on the pdf.
Step 2: Determine the risk measures π̂_p and ĝ_p given the security level p and the parametric model together with the MLE ψ̂, using computational tools for root finding and numeric integration based on the pdf and cdf of the parametric model.
Result: π̂_p, ĝ_p.
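A possible R realization of Algorithm 2 for the left-truncated Lognormal model is sketched below; the function names and the log(σ) reparameterization are our own choices.

```r
b <- 1.2  # truncation point, here matching the Secura Re data (in millions)

dltlnorm <- function(x, mu, sigma)   # pdf of the left-truncated Lognormal
  dlnorm(x, mu, sigma) / (1 - plnorm(b, mu, sigma))
pltlnorm <- function(q, mu, sigma)   # cdf of the left-truncated Lognormal
  (plnorm(q, mu, sigma) - plnorm(b, mu, sigma)) / (1 - plnorm(b, mu, sigma))

mle_ltlnorm <- function(x) {         # Step 1: MLE via optim(), log-sigma scale
  nll <- function(par) -sum(log(dltlnorm(x, par[1], exp(par[2]))))
  fit <- optim(c(mean(log(x)), log(sd(log(x)))), nll)  # untruncated start values
  c(mu = fit$par[1], sigma = exp(fit$par[2]))
}

t1 <- function(x, p = 0.95) {        # Step 2a: VaR by root finding on the cdf
  ps <- mle_ltlnorm(x)
  uniroot(function(q) pltlnorm(q, ps[1], ps[2]) - p,
          interval = c(b, max(x)), extendInt = "upX")$root
}
t2 <- function(x, p = 0.95) {        # Step 2b: CTE by numeric integration
  ps <- mle_ltlnorm(x)
  v  <- t1(x, p)
  integrate(function(q) q * dltlnorm(q, ps[1], ps[2]),
            lower = v, upper = Inf)$value / (1 - p)
}
```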
Left-truncated Lognormal distribution.

In the following, we assume for the functions t_1(·) and t_2(·), which estimate VaR and CTE, respectively, that the data-generating process is given by a left-truncated Lognormal distribution F_b(ψ, x_b) with the parameter vector ψ = (μ, σ). The Lognormal distribution is known to provide a reasonable model for fitting insurance claims data. We outline the derivation of the log-likelihood function to obtain the MLE. We then also derive closed-form formulas to determine the risk measures.

The likelihood function for the left-truncated Lognormal sample, defined by Equation (2.3), has the following form:

L((μ, σ) | x_1, ..., x_n) = ∏_{i=1}^n φ((log(x_i) − μ)/σ) / ( x_i σ (1 − Φ(B)) ),

where B = (log(b) − μ)/σ. The corresponding log-likelihood function is defined as

l(μ, σ) = −Σ_{i=1}^n log(x_i) − n log(σ) − n log(1 − Φ(B)) + Σ_{i=1}^n log φ((log(x_i) − μ)/σ).

Setting (∂l(μ, σ)/∂μ, ∂l(μ, σ)/∂σ) = (0, 0) and considering that φ(·) and Φ(·) are the pdf and cdf of the standard normal distribution yields a system of two estimating equations. Due to the dependence of B on the parameter values μ and σ, no closed-form solution seems possible for this system. Numeric tools are required to obtain the solution (μ̂, σ̂). For example, the function "optim()" in R may be used together with a definition of the log-likelihood function to obtain the parameter estimates. A general-purpose optimizer usually requires starting values inside the feasible parameter range. In the case of the left-truncated Lognormal distribution, the closed-form formulas for the Lognormal distribution might be used as starting values; that is, ignoring the truncation at b. Given the MLE, VaR and CTE can be computed in closed form as a result of the following lemmas.

Lemma 1. Suppose that a random variable X follows a left-truncated Lognormal distribution with parameters μ and σ and the truncation point b. Then the 100p% quantile of X may be expressed as

π_p = exp( μ + σ Φ^{−1}( Φ(B) + p(1 − Φ(B)) ) ).

The proof is provided in Appendix A.

Lemma 2. Suppose that X follows a left-truncated Lognormal distribution with parameters μ and σ and the truncation point b. Then the conditional tail expectation of X for the security level p may be expressed as

g_p = exp(μ + σ²/2) Φ( σ − Φ^{−1}(Φ(B) + p(1 − Φ(B))) ) / ( (1 − p)(1 − Φ(B)) ).

The proof is provided in Appendix B.

It is worth mentioning that the computation of g_p is not impacted by the truncation point b once π_p is fixed: the unconditional sample x and the conditional sample x_b result in the same value of g_p once π_p is given.
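In R, the closed-form expressions of Lemmas 1 and 2, as stated above, translate directly into the following functions, which can serve as a cross-check for the numeric t_1() and t_2() sketched earlier.

```r
# Closed-form VaR and CTE for the left-truncated Lognormal model.
var_ltlnorm <- function(mu, sigma, b, p) {
  B  <- (log(b) - mu) / sigma
  zp <- qnorm(pnorm(B) + p * (1 - pnorm(B)))  # Phi^{-1}(Phi(B) + p (1 - Phi(B)))
  exp(mu + sigma * zp)                        # Lemma 1
}
cte_ltlnorm <- function(mu, sigma, b, p) {
  B  <- (log(b) - mu) / sigma
  zp <- qnorm(pnorm(B) + p * (1 - pnorm(B)))
  exp(mu + sigma^2 / 2) * pnorm(sigma - zp) / # Lemma 2
    ((1 - p) * (1 - pnorm(B)))
}
```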
COMPARISON TO EXISTING METHODS

In this section, we discuss existing alternative approaches proposed for confidence interval estimation of the risk measures VaR and CTE. In particular, we outline the nonparametric as well as the parametric asymptotic approaches. The standard nonparametric formulas for both VaR and CTE are readily available in the literature (see Serfling 1980; Manistre and Hancock 2005; Kaiser and Brazauskas 2006). For the parametric asymptotic approach, we specifically investigate the derivation for the left-truncated Lognormal distribution. The general parametric asymptotic approach based on the Delta method is already outlined in the literature (see Hogg, McKean, and Craig 2005), and Brazauskas et al. (2008) applied this approach specifically for the CTE. This article provides an additional contribution in the area of the parametric asymptotic interval estimation of the risk measures VaR and CTE by providing analytic results in Section 3.2 for the left-truncated Lognormal distribution.

Nonparametric Interval Estimators for VaR and CTE

Serfling (1980) derived the nonparametric formulas for confidence intervals related to the upper quantiles and the conditional tail expectation based on the order statistics (X_(1) ≤ X_(2) ≤ ... ≤ X_(n)). Kaiser and Brazauskas (2006) provided the formulas for the nonparametric interval of the CTE that build on the work done by Manistre and Hancock (2005). In the following we summarize all of these formulas in the context of the notation used in this article. Note that these formulas for the CTE are only valid in the finite variance case. An alternative nonparametric CTE estimator with suitable interval estimators would need to be considered otherwise (see Necir, Rassoul, and Zitikis 2010).

The Nonparametric 100(1−α)% Confidence Interval for VaR. The empirical sample quantile of VaR is defined as π̂_p(F̂_n, p) = X_(n−⌊n(1−p)⌋), where F̂_n denotes the empirical distribution and ⌊·⌋ denotes the "greatest integer part." The 100(1−α)% nonparametric confidence interval of the VaR is given by (X_(k_{1n}), X_(k_{2n})), where the sequences of integers k_{1n} and k_{2n} satisfy 1 ≤ k_{1n} < k_{2n} ≤ n together with the asymptotic coverage condition.

The Nonparametric 100(1−α)% Confidence Interval for CTE. The 100(1−α)% distribution-free confidence interval for the CTE is given by (3.2), based on the CTE estimate (3.1).

Asymptotic Parametric Interval Estimators for VaR and CTE

In the following we assume that a parametric distribution F_b(ψ, x_b) with parameter ψ is used to model the data and MLEs are obtained for ψ. Estimates for the risk measures VaR and CTE as well as their interval estimators are then based on this parametric model conditional on the MLE ψ̂. Brazauskas et al. (2008) developed inferential tools for estimating the 95% confidence interval for the CTE obtained based on a parametric model. More specifically, the authors considered the simplified case of a shifted Lognormal distribution with an unknown parameter μ as the parametric distribution. We use their approach based on ML inference in combination with the multivariate Delta method to derive the interval estimators for VaR as well as CTE using an asymptotic parametric approach for any parametric model.

The Parametric 100(1−α)% Asymptotic Confidence Interval for VaR. We derive an estimate of the variance function V(π_p) of the VaR estimator to develop the 100(1−α)% confidence interval for π_p in the form (3.3). The lower and upper bounds of this confidence interval are derived based on the asymptotic normal distribution of π̂_p = π_p(ψ̂), with ψ̂ the maximum likelihood parameter estimates of the parametric model. We combine ML estimation with the multivariate Delta method to determine the variance function

V(π̂_p) = ∇π_p(ψ̂)' J^{−1} ∇π_p(ψ̂),

with J the observed Fisher information matrix, which converges in probability to the expected Fisher information.

The Parametric 100(1−α)% Asymptotic Confidence Interval for CTE. Following the same approach based on ML inference and the multivariate Delta method, we derive the variance function of the CTE estimator to develop the 100(1−α)% confidence interval for g_p in the form (3.4).

Implementation

The implementation of the nonparametric approach requires sorting the dataset x_b and then selecting the observations with the indices given by (3.5), where ⌊·⌉ denotes rounding to the nearest integer. For details, see Algorithm 3.
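A compact sketch of this nonparametric machinery, summarized in Algorithm 3 below, is given next. The normal-approximation order-statistic indices for the VaR bounds and the n(1−p) scaling of the CTE variance are assumptions in the spirit of Manistre and Hancock (2005), standing in for the exact displays (3.1), (3.2), and (3.5).

```r
# Nonparametric VaR and CTE intervals; the index and variance formulas are
# approximations standing in for the exact displays (3.1), (3.2), (3.5).
np_intervals <- function(x, p = 0.95, alpha = 0.05) {
  n  <- length(x); xs <- sort(x); z <- qnorm(1 - alpha / 2)
  var_hat <- xs[n - floor(n * (1 - p))]                   # empirical VaR
  k1 <- max(1, round(n * p - z * sqrt(n * p * (1 - p))))  # lower order statistic
  k2 <- min(n, round(n * p + z * sqrt(n * p * (1 - p))))  # upper order statistic
  tail    <- xs[(floor(n * p) + 1):n]                     # upper-tail observations
  cte_hat <- mean(tail)                                   # CTE point estimate
  V_p     <- var(tail) + p * (cte_hat - var_hat)^2        # variance term
  se_cte  <- sqrt(V_p / length(tail))                     # ~ V_p / (n (1 - p))
  list(var_ci = c(xs[k1], xs[k2]),
       cte_ci = cte_hat + c(-1, 1) * z * se_cte)
}
```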
Algorithm 3: Computation of nonparametric (non-par) confidence intervals.
Data: $x_b = (x_1, \dots, x_n)^\top$. Input: The security level $p$ and a confidence level $\alpha$.
Step 1: Determine the indices of the sorted observations for the lower and upper bounds using (3.5) and select these observations for the confidence interval of the VaR.
Step 2: Calculate the empirical mean of the sorted observations with indices $\lfloor np \rfloor + 1$ and higher to obtain the CTE estimate using (3.1).
Step 3: Calculate $V_p$ as the sum of the empirical sample variance of the observations sorted in ascending order with indices $\lfloor np \rfloor + 1$ and higher and $p$ times the squared difference between the VaR and CTE estimates. Then determine the bounds of the confidence interval for the CTE with (3.2).
Result: Lower and upper bounds of the $(1-\alpha)$ confidence intervals.

For the parametric approach, the following quantities are required: (1) the MLE $\hat\psi$ of the parameters of the parametric model together with the Hessian of the log-likelihood function at the MLE and (2) the gradient of the VaR or CTE risk measures as functions of the parameters of the parametric model evaluated at the MLE $\hat\psi$. Determining the MLE was outlined in Section 2.2.3 for a general implementation regardless of the parametric model as well as for the left-truncated Lognormal distribution in particular. The Hessian might be obtained numerically as a by-product returned by the general-purpose optimizer (e.g., the R function "optim()" has a logical argument specifying whether a numerically differentiated Hessian matrix should be returned). Alternatively, given the log-likelihood function and the MLE, the Hessian can also be obtained using a numerical approximation in a separate step; for example, using function "hessian()" from the R package numDeriv (Gilbert and Varadhan 2019). Section 2.2.3 also discusses computational tools to determine the risk measures VaR and CTE based on the parametric model and the MLE, requiring only the specification of the pdf and cdf of the parametric model. Using these tools, the gradients required for the variance functions might be numerically approximated using function "grad()" available in the R package numDeriv. The stepwise procedure for determining the asymptotic parametric confidence intervals for the risk measures is outlined in Algorithm 4.

Algorithm 4: Computation of parametric asymptotic (par-comp) confidence intervals.
Data: $x_b = (x_1, \dots, x_n)^\top$. Input: pdf and cdf of the parametric model, initial parameter values $\psi^{(0)}$, the security level $p$, and a confidence level $\alpha$.
Step 1: Determine the MLE $\hat\psi$ given the data $x_b$ using a general-purpose optimizer with the initial parameter values $\psi^{(0)}$ as starting values and the log-likelihood function based on the pdf.
Step 2: Determine the gradients of the risk measure functions $\pi_p(\psi)$ and $\eta_p(\psi)$ and the Hessian of the log-likelihood at $\hat\psi$ using numerical differentiation.
Step 3: Calculate the variance functions $V(\hat\pi_p)$ and $V(\hat\eta_p)$ using matrix inversion and multiplication and determine the bounds of the $(1-\alpha)$ confidence intervals using (3.3) and (3.4).
Result: Lower and upper bounds of the $(1-\alpha)$ confidence intervals.
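A minimal sketch of Steps 2 and 3 of Algorithm 4 using the numDeriv functions named above; `negloglik` and `mle` are as in the earlier sketch, and `risk_fun` is a hypothetical placeholder for either risk measure as a function of $(\mu, \sigma)$.

```r
## Steps 2-3 of Algorithm 4: numerical Hessian/gradient and Delta method.
library(numDeriv)
J <- hessian(function(par) negloglik(par, x, b), mle)  # observed Fisher information
g <- grad(function(par) risk_fun(par, p, b), mle)      # gradient of the risk measure
V <- drop(t(g) %*% solve(J) %*% g)                     # Delta-method variance
est <- risk_fun(mle, p, b)
est + c(-1, 1) * qnorm(1 - alpha / 2) * sqrt(V)        # bounds as in (3.3)/(3.4)
```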
This implies that for any parametric model the asymptotic parametric confidence intervals for the risk measures VaR and CTE may be obtained using general computational tools for optimization, root finding, integration, and determining derivatives, in addition to the provision of the pdf and cdf of the parametric model as well as starting values for the ML estimation. Starting values need to be inside the feasible region of the parameter space. Alternatively, closed-form formulas might be available for specific parametric models to determine the MLE and to determine the quantile and the conditional expectation as well as the gradients of the risk measures and the observed Fisher information matrix of the MLE. In the following, we investigate this for the left-truncated Lognormal distribution.

Left-truncated Lognormal Distribution. Section 2.2.3 indicates that numeric methods are required for the left-truncated Lognormal model to determine the MLE. However, closed-form formulas are provided for determining the quantiles and conditional expectations. In the following, we derive closed-form formulas for the observed Fisher information matrix and the gradients of the risk measures VaR and CTE as functions of the parameters $\mu$ and $\sigma$. These can be used to determine the parametric asymptotic confidence intervals instead of relying on numerical differentiation.

The observed Fisher information matrix is available in closed form as a result of the following lemma.

Lemma 3. Let $J(\mu, \sigma) = \begin{pmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{pmatrix}$ represent the observed Fisher information matrix for a sample $x_b$ of size $n$ from the left-truncated Lognormal distribution with parameters $\mu$ and $\sigma$ and the truncation point $b$. Then the entries $J_{11}$, $J_{12} = J_{21}$, and $J_{22}$ are available in closed form; the proof is provided in Appendix C. The closed-form gradients of the risk measures $\pi_p$ and $\eta_p$ with respect to $(\mu, \sigma)$ follow from the remaining lemmas; the proof is provided in Appendix D.
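For concreteness, the closed-form risk measures can be coded directly; the following sketch is a derivation-based reconstruction consistent with Lemmas 1 and 2 (the paper's exact expressions are in Appendices A and B) and can serve as the `risk_fun` in the previous sketch.

```r
## Closed-form VaR and CTE of the left-truncated Lognormal (cf. Lemmas 1-2).
var_fun <- function(par, p, b) {
  B <- (log(b) - par[1]) / par[2]
  exp(par[1] + par[2] * qnorm(pnorm(B) + p * (1 - pnorm(B))))
}
cte_fun <- function(par, p, b) {
  mu <- par[1]; sigma <- par[2]
  z  <- (log(var_fun(par, p, b)) - mu) / sigma          # standardized log-VaR
  exp(mu + sigma^2 / 2) *
    pnorm(z - sigma, lower.tail = FALSE) / pnorm(z, lower.tail = FALSE)
}
```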
APPLICATION

In this section, we illustrate the calculations of the BCA-VaR and BCA-CTE on a real dataset of insurance automobile claims provided by Secura Re, a Belgian reinsurer. The raw dataset is available as part of the book published by Beirlant et al. (2004). It is also included in the R package ltmix, developed by Blostein and Miljkovic (2021) for modeling Secura Re using left-truncated mixture models (see Blostein and Miljkovic 2019). Secura Re contains 371 automobile claims in the amount of at least 1.2 million euros. Claim amounts less than 1.2 million were not reported to the reinsurer. Therefore, the data are left-truncated with a truncation point at 1.2 million. The smallest automobile claim is 1.21 million and the largest automobile claim is 7.9 million.

This dataset has been used by several researchers (Verbelen et al. 2015; Reynkens et al. 2017; Blostein and Miljkovic 2019) to illustrate methodologies for different application areas: estimation of the excess-of-loss insurance premium for different retention levels and estimation of risk measures. These authors first performed model calibration considering different classes of models for density estimation and then estimated the quantities of interest after having determined the best model. For both of these application areas, the researchers focused only on point estimation without performing a variability assessment of the estimates. Blostein and Miljkovic (2019) showed that not only is the left-truncated Lognormal distribution the most parsimonious model for calibrating Secura Re losses, but it also achieves the lowest Bayesian information criterion and Akaike information criterion among the other models under consideration. The authors also used quantile-quantile (Q-Q) plots as a model diagnostic tool to validate and assess model fit. For this reason, we adopt the left-truncated Lognormal distribution to illustrate the computations of the BCA confidence intervals based on the methodology presented in Section 2. Further, the BCA results will be compared to those generated using the nonparametric and parametric asymptotic methods presented in Section 3. For the parametric asymptotic approach, the left-truncated Lognormal distribution is also used. The computational implementations outlined in Algorithms 2 and 4 are used for the parametric asymptotic and BCA results presented. Using the analytical formulas developed in Lemmas 1-5 would lead to essentially the same results.

Figure 1 shows the two-sided confidence limits of the VaR generated using the three different methods with security levels $p = 0.95$ (left) and $p = 0.99$ (right) for $1-\alpha \in \{0.68, 0.8, 0.9, 0.95, 0.99\}$. The three methods included are BCA-VaR (determined using the computational tools), nonparametric, and parametric (using the computational tools). The values on the x-axis represent the wide range of $\alpha$ values considered to determine the $(1-\alpha)$ confidence intervals, and the values on the y-axis represent the bounds of the confidence intervals for the VaR. Several observations are drawn based on this data visualization. First, as expected based on results previously reported in the literature, the width of the confidence interval for VaR increases with a higher security level as well as with smaller $\alpha$ values or higher coverage levels (see Miljkovic, Causey, and Jovanović 2022). Second, there are some new observations: The nonparametric confidence intervals are particularly different when the security level is 0.99; we see a strong increase in the upper bound of the confidence interval. The same phenomenon is observed for $p = 0.95$ when the coverage levels are above 0.8 ($\alpha < 0.2$). The lower bounds of the confidence intervals obtained with BCA-VaR and par-comp are visually the same for both security levels. The upper bounds of the confidence intervals for BCA-VaR and par-comp differ slightly, with the BCA-VaR results providing higher upper bounds than those obtained with the par-comp method for both security levels and the discrepancy increasing with decreasing $\alpha$ values.
Similarly, we present the results for the CTE in Figure 2. For $p = 0.95$, both BCA-CTE and par-comp generate the same results for the lower limit of the confidence intervals regardless of the confidence level implied by $\alpha$. The results for the upper limits of both methods indicate that the BCA-CTE values are slightly above the par-comp values. The nonparametric confidence intervals for $p = 0.95$ are quite different from those obtained with the other two methods; in particular, the upper limits are considerably larger. Also, the lower limits for the nonparametric confidence intervals are above the lower limits of the other two methods except for very small values of $\alpha$. For $p = 0.99$, the results for BCA-CTE and par-comp again align rather closely for the lower limits, whereas the nonparametric estimates are much higher. The results are different for the upper limits, where BCA-CTE leads to the highest upper limits for small $\alpha$ values, followed by the nonparametric estimates, while for higher $\alpha$ values the nonparametric approach again results in the highest upper limits.

Complementing Figures 1 and 2, Table 1 provides the numerical summary of the confidence intervals for VaR and CTE for the Secura Re dataset when the left-truncated Lognormal model is fitted. Two security levels and three estimation methods are considered. The lower and upper bounds of the 95% ($\alpha = 0.05$) and 99% ($\alpha = 0.01$) confidence intervals are included for each estimation method.

Table 2 provides the summary of the computational parametric and BCA estimates associated with the computation of the confidence intervals for VaR and CTE at two different security levels (0.95 and 0.99). The standard deviation of the distribution developed for the par-comp method is provided in the column labeled $\hat\sigma_{\text{par}}$. The estimates of the standard deviation of the bootstrap distribution $G(\cdot)$, defined by Equation (2.2), are provided in the column labeled $\hat\sigma_{\text{boot}}$. The values of the standard deviations are similar, with consistently slightly higher values for the BCA approach with the security level fixed. The skewness estimate of $G(\cdot)$ is labeled "Skew" and reported in the last column of the table. It indicates that $G(\cdot)$ is slightly right-skewed. Bias correction and acceleration value estimates are reported in the columns labeled $\hat{z}_0$ and $\hat{a}$, respectively. These values are positive, indicating an upward correction to the standard confidence limits. According to Efron and Narasimhan (2020a, 616), if $G(\cdot)$ is right-skewed, the confidence limits should not be skewed to the left, at least for distributions in the exponential family. The differences observed between the parametric and BCA methods in the estimation of the upper confidence limits for VaR and CTE raise doubts about whether the underlying statistical assumptions imposed for the par-comp approach are suitable, in particular given that the Secura Re dataset is only of moderate size, with a skewness coefficient of 2.42. Rather, it is suggested that the results obtained from the BCA method provide better accuracy. In addition, we conjecture that the differences between the results of these two approaches may be even more pronounced for datasets exhibiting a stronger heavy tail, such as those modeled by Miljkovic and Grün (2016).
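The quantities $\hat{z}_0$ and $\hat{a}$ discussed above follow Efron's BCA construction; the sketch below shows the textbook computation (not necessarily the authors' exact implementation), assuming `theta_hat` is the point estimate, `theta_boot` the vector of bootstrap replicates, and `negloglik`, `mle`, and `var_fun` as in the earlier sketches.

```r
## Textbook BCA ingredients: bias correction, acceleration, adjusted quantiles.
z0 <- qnorm(mean(theta_boot < theta_hat))            # bias correction
theta_jack <- sapply(seq_along(x), function(i) {     # jackknife replicates
  fit_i <- optim(mle, negloglik, x = x[-i], b = b)   # refit without observation i
  var_fun(fit_i$par, p = p, b = b)
})
d <- mean(theta_jack) - theta_jack
a <- sum(d^3) / (6 * sum(d^2)^(3 / 2))               # acceleration
adj <- function(q) pnorm(z0 + (z0 + qnorm(q)) / (1 - a * (z0 + qnorm(q))))
ci <- quantile(theta_boot, c(adj(alpha / 2), adj(1 - alpha / 2)))
```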
SIMULATION

In this section, we use a simulation study with artificial data where the true data generating process, and hence the true values for VaR and CTE, are known. Our objective is to compare the performance of the proposed BCA confidence intervals (BCA-VaR and BCA-CTE) with confidence intervals obtained using the nonparametric (non-par) as well as parametric (par-comp) approaches discussed in Section 3. The performance of these three methods is evaluated based on their coverage probability and the width of the confidence intervals calculated for different simulation settings. Coverage probability is estimated using the proportion of the confidence intervals that contain the true value of VaR or CTE. The width of the confidence interval is computed as the difference between the upper and lower bound estimates. We draw random samples from the left-truncated Lognormal distribution, assuming that this distribution represents the true data generating process. Three sample sizes $n \in \{100, 500, 1000\}$ with two security levels $p \in \{0.95, 0.99\}$ and confidence coefficients $1-\alpha \in \{0.68, 0.8, 0.9, 0.95, 0.99\}$ are considered.

The goal of the simulation study is to answer the following questions related to the BCA, nonparametric, and parametric methods used in the construction of the confidence intervals: 1. How do the methods compare in terms of coverage probability, also taking into account the width of the confidence interval? 2. How do the results of the BCA and the parametric approach differ for the left-truncated Lognormal model depending on the implementation; that is, if the general computational tools or the analytical solutions from Lemmas 1-5 are used for the implementation?

The left-truncated Lognormal model was fitted to the Secura Re dataset used in Section 4, and parameter estimates were obtained using ML estimation. This model, together with the estimated parameters, was used as the data generating process in the simulation study. The datasets created in this way hence followed a distribution that mimics the data distribution observed for the Secura Re dataset. We generated 10,000 samples for each simulation setting. For the calculation of the BCA confidence interval, each sample was bootstrapped 2000 times (i.e., resampled with $B = 2000$). Figure 3 displays the multipanel plot of the simulation results related to the coverage probability and Figure 4 displays the multipanel plot of the simulation results related to the interval width. Each multipanel plot is arranged across several dimensions of the simulation study: we consider two risk measures, two security levels, and three sample sizes. For each of these 12 slices of data, we show how the coverage probability or interval width (y-axis) varies with the confidence coefficient (x-axis) across the three different estimation methods displayed with different line styles. The simulation results for the BCA method are shown with the black solid line, and the non-par and par-comp methods are denoted by dotted and dashed lines, respectively.
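A minimal sketch of one slice of this simulation: samples are drawn from the left-truncated Lognormal by inversion and the coverage of a generic interval method is estimated; `ci_fun` is a hypothetical placeholder for any of the three methods (returning lower and upper bounds), and `var_fun` is the closed-form quantile from above.

```r
## One simulation slice: draw, estimate interval, record coverage.
r_trunc_lnorm <- function(n, mu, sigma, b) {
  Fb <- plnorm(b, mu, sigma)
  qlnorm(Fb + runif(n) * (1 - Fb), mu, sigma)   # inverse-cdf sampling above b
}
true_var <- var_fun(c(mu, sigma), p, b)
hits <- replicate(10000, {
  xs <- r_trunc_lnorm(n, mu, sigma, b)
  ci <- ci_fun(xs, p = p, alpha = alpha)        # e.g., BCA with B = 2000 resamples
  ci[1] <= true_var && true_var <= ci[2]
})
mean(hits)                                      # estimated coverage probability
```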
The VaR results (shown in the two left columns of Figures 3 and 4) indicate that the par-comp as well as the BCA approach provide comparably good coverage and similar widths of the confidence intervals. The results of the nonparametric approach indicate that coverage is only reasonable if either the sample size is not too small or the security level is not too high (with the coverage results being poor for $p = 0.99$ and $n = 100$ in case of the nonparametric approach) and that the interval width is in general higher than for the other two approaches, except in the case where coverage was not satisfactory.

The CTE results (shown in the two right columns of Figures 3 and 4) indicate that coverage is again comparably good for the par-comp and the BCA approaches regardless of security level and sample size. Considering only the interval width, the par-comp approach leads to considerably shorter intervals for $p = 0.99$ and $n = 100$. The non-par method proves inferior in terms of coverage and interval width when $p = 0.99$ across all three sample sizes, with the coverage performance also being poor for the small sample size $n = 100$ when $p = 0.95$.

The results of the simulation study were obtained for a setting where the true data generating process is a truncated Lognormal distribution that is also used as the parametric model in the parametric and BCA approaches for determining the confidence intervals. In this case, the results obtained with the nonparametric approach are clearly inferior for sample sizes up to 1000. In the case of the VaR, the nonparametric approach at least provides reasonable coverage in case $n \ge 500$. But in all cases, the width of the confidence intervals is higher than for the other two approaches when coverage is satisfactory. The underlying assumptions of the parametric approach on the data generating process are met in this simulation study, and relying on the approximation based on asymptotic normality seems to be reasonable. The BCA approach requires fewer assumptions and provides similar results and hence might be preferable in practice, where the underlying data generating process is unknown.

To address the second question of the simulation study, we compare the results of the BCA approach as well as the parametric asymptotic approach depending on two different implementation methods: fully computational (method 1) and partially computational (method 2). Maximum likelihood estimation of the parameters given a dataset requires computational tools for both approaches regardless of the implementation. We also compare the coverage of method 1 and method 2 to the respective true coverages.

In the case of the BCA approach, the point estimates of the risk measures were determined for the fully computational method using a root-finding algorithm (for VaR) and numerical integration tools (for CTE) in R. When the partially computational method was used for the BCA approach, the point estimators of the risk measures were implemented using the analytical results obtained in Lemmas 1 and 2.
The relative differences between method 1 and method 2 in percent were calculated for the width and coverage of the confidence intervals of both risk measures across all simulation settings. The ranges of these results are summarized in the top portion of Table 3. The columns labeled $\Delta_{\text{width}}$ and $\Delta_{\text{cov}}$ show the ranges of relative differences in percent between the two implementation methods. We observed minor differences between method 1 and method 2, indicating that these two methods are similar in terms of their performance and either one can be used in practice. The range of relative differences between the coverage calculated by method 1 and the true coverage in percent is shown in the column labeled $\Delta_{\text{cov},1}$. Similarly, the range of relative differences between the coverage calculated by method 2 and the true coverage in percent is shown in the column labeled $\Delta_{\text{cov},2}$. Both the $\Delta_{\text{cov},1}$ and $\Delta_{\text{cov},2}$ results include zero within the given range, indicating that the calculated confidence intervals lead to over- as well as undercoverage compared to the true coverage.

Similarly, the implementations of the parametric asymptotic confidence intervals were compared when using either the analytical solutions (method 1) provided by Lemmas 1-5 or the general computational tools (method 2) available in R for finding the Hessian of the log-likelihood function and the gradients of the variance function numerically. The results are summarized in the bottom portion of Table 3. The columns labeled $\Delta_{\text{width}}$ and $\Delta_{\text{cov}}$ show the ranges of the relative differences in percent between the two implementation methods. Here, the results point to some systematic differences in both width and coverage between the two methods. We again investigate which of the two methods has better coverage compared to the true coverage. Similar to the BCA implementation comparison, we compute the range of relative differences between the coverage calculated by method 1 and the true coverage in percent, as shown in the column labeled $\Delta_{\text{cov},1}$. The range of relative differences between the coverage calculated by method 2 and the true coverage in percent is shown in the column labeled $\Delta_{\text{cov},2}$. Based on these results, we note that method 1, implemented based on the analytical solutions of Lemmas 1-5, consistently underestimates the true coverage probability for both risk measures, as observed in $\Delta_{\text{cov},1}$. By contrast, the results for $\Delta_{\text{cov},2}$ include zero within the given range. Overall, method 2, based on general computational tools, produces better coverage of the confidence intervals for both risk measures and should be adopted in practice.

Through this simulation study, we learned that the BCA approach can be used to validate the results of using the parametric asymptotic approach to determine the confidence intervals. For large sample sizes, both methods yield similar results. For small to moderate sample sizes, small differences between the two methods are observed when modeling the real dataset of Secura Re losses. In summary, the simulation study provided insightful information about the behavior of the three types of confidence intervals under consideration as well as the influence of the implementation used.
CONCLUSION

In this article, we advanced the study of interval estimation of risk measures in the actuarial literature. We proposed the automated bias-corrected and accelerated bootstrap confidence intervals, named BCA-VaR and BCA-CTE for VaR and CTE, respectively, using, in particular, the left-truncated Lognormal distribution. The performance of the proposed intervals was assessed in comparison to nonparametric and parametric asymptotic approaches based on the same parametric distribution. For the application of the parametric approach, we also obtained new analytical results required when using the Delta method and asymptotic theory to derive the confidence intervals, which were previously not available in the literature for the left-truncated Lognormal loss model.

Note (Table 3): $\Delta_{\text{width}}$ denotes the range of relative differences in percent between method 1 and method 2 for the width of the confidence intervals. $\Delta_{\text{cov}}$ denotes the range of relative differences in percent between method 1 and method 2 for the coverage of the confidence intervals. $\Delta_{\text{cov},1}$ denotes the range of relative differences in percent between method 1 and the true coverage of the confidence intervals. $\Delta_{\text{cov},2}$ denotes the range of relative differences in percent between method 2 and the true coverage of the confidence intervals.

Our results showed that the nonparametric approach to interval estimation is generally inferior to the other two methods. The asymptotic parametric approach relies on the asymptotic normality of the risk measure estimates as the sample size increases and the regularity conditions are met. However, when this assumption is violated, the BCA method will provide a more realistic assessment of the uncertainty of the risk measure estimates.

When testing these three methods on the real Secura Re dataset, we found that the BCA and parametric approaches generated similar results for the lower limit but slightly different results for the upper limit of the confidence interval. When the results of the parametric and the BCA methods agree, the results can be taken to be accurate and meaningful. But if these methods yield different results, such as those observed for the upper limit, researchers should ask themselves whether the statistical assumptions of the parametric approach are likely to be violated, hence supporting the validity of the BCA method.

In future applications involving computations of the confidence intervals for risk measures, we recommend using the BCA method to validate the parametric results. If the two methods produce different results, the insurer should refer to the BCA method, because its bias correction and acceleration procedures allow for a higher level of accuracy in the results.
We focused on the left-truncated Lognormal model to derive the empirical results and also provided analytical formulas for this parametric model. However, the computational tools outlined are rather generic and might easily be used for other nonparametric estimators where standard error estimates are available and for any kind of parametric model regardless of the estimation method used; for example, using the method of trimmed moments instead of ML. The comparison of the empirical results obtained with the computational and analytical implementations confirms the reliability of the computational tools. Hence, our study can be easily extended to include other parametric models, such as the composite models considered by Grün and Miljkovic (2019), with an efficient implementation of computational tools. The BCA approach would require that the user provide the loss data as well as the input functions $t_1(\cdot)$ and $t_2(\cdot)$ based on the considered parametric model to estimate the risk measures. For many composite models, these functions can be derived analytically. The BCA method can also be explored with the model averaging approach considered by Miljkovic and Grün (2021) to account for model uncertainty in the estimation of the confidence intervals.

FIGURE 3. Coverage Probability for the Left-Truncated Lognormal Distribution of the Three Estimation Methods: BCA (solid), non-par (dotted), par-comp (dashed). Note: Dashed gray lines denote the expected coverage probability.

FIGURE 4. Average Width of the Confidence Interval for the Left-Truncated Lognormal Distribution of the Three Estimation Methods: BCA (solid), non-par (dotted), par-comp (dashed).

TABLE 1. Summary of the Interval Estimates for VaR and CTE Based on the Secura Re Dataset.

TABLE 2. Summary of the Computational Parametric and BCA Estimates for VaR and CTE Based on the Secura Re Dataset.
Optimized Performance Parameters for Nighttime Multispectral Satellite Imagery to Analyze Lightings in Urban Areas

Contrary to its daytime counterpart, nighttime visible and near infrared (VIS/NIR) satellite imagery is limited in both spectral and spatial resolution. Nevertheless, the relevance of such systems is unquestioned, with applications to, e.g., examine urban areas, derive light pollution, and estimate energy consumption. To determine optimal spectral bands together with the required radiometric and spatial resolution, at-sensor radiances are simulated based on combinations of lamp spectra with typical luminances according to lighting standards, surface reflectances, and radiative transfers for the consideration of atmospheric effects. Various band combinations are evaluated for their ability to differentiate between lighting types and to estimate the important lighting parameters: the efficacy to produce visible light, the percentage of emissions attributable to the blue part of the spectrum, and the assessment of the perceived color of radiation sources. The selected bands are located in the green, blue, yellow-orange, near infrared, and red parts of the spectrum and include one panchromatic band. However, these nighttime bands tailored to artificial light emissions differ significantly from the typical daytime bands focusing on surface reflectances. Compared to existing or proposed nighttime or daytime satellites, the recommended characteristics improve, e.g., the classification of lighting types by >10%. The simulations illustrate the feasible improvements in nocturnal VIS/NIR remote sensing, which will lead to advanced applications.

Introduction

Nocturnal optical remote sensing in the visible and near infrared (VIS/NIR) of the electromagnetic (EM) spectrum is largely inferior to both its daytime counterpart and nighttime remote sensing in the thermal infrared. Even if there is a large gap in terms of the amount and the diversity of available missions and products, there exists demand for such nighttime products. The interest in such products is growing, as evident from the increasing number of applications [1]. These include the monitoring of human settlements and urban dynamics, the estimation of demographic and socio-economic information, light pollution and its influence on ecosystems, human health, and astronomical observations, energy consumption and demands, the detection of gas flares and forest fires, natural disaster assessment, and the evaluation of political crises and wars [2]. Most of these applications are derived from data linked to artificial lights, which emit mainly in the VIS/NIR. A stronger focus on optical nighttime remote sensing is, therefore, well-founded. However, the aim of the first satellite sensor imaging low-light data, namely DMSP-OLS in 1976 [3], was to collect global cloud cover data day and night; detecting nocturnal VIS/NIR emission sources was a widely used by-product. To derive essential socio-economic information, for example, the lighting type is a much stronger indicator of economic growth than solely the intensity of light as used in most studies [4]. In 2011, a considerable improvement in spatial resolution from 2700 to 750 m and in detection limits from 5 × 10⁻⁹ to 2 × 10⁻¹⁰ W m⁻² sr⁻¹ nm⁻¹ came with the arrival of its follow-on, the NPP-VIIRS-DNB, which moreover provides daily global coverage [5].
In addition to these panchromatic (500-900 nm, [5], for NPP) space-based nighttime images, trichromatic ones come in the form of photographs with a spatial resolution between 10 and 200 m, taken irregularly by astronauts aboard the ISS since 2003 [6]. Other panchromatic data are acquired frequently, but only over China, with a spatial resolution of 130 m by LJ1-01 since 2018 [7], and sporadically with a resolution of 0.7 m by EROS-B since 2013 [8]. Other data with multiple spectral bands are acquired only sporadically, with a spatial resolution of 120 m by AC-5 (AC-4 with similar spectral resolution) since 2013 [9] and of 0.9 m by JL1-3B (JL1-07/08 with similar spectral resolution) since 2017 [10]. Furthermore, sporadically acquired nighttime images of operational daytime missions reveal detection limits, e.g., for Landsat-8, of above 4 × 10⁻⁴ W m⁻² sr⁻¹ nm⁻¹ for the multiple spectral bands [11].

The need for finer spectral and spatial resolutions has been expressed many times. For example, a high-pressure sodium (HPS) lamp is indistinguishable from a light emitting diode (LED) lamp in panchromatic images, and a conversion from HPS to LED is even incorrectly observed as a decrease in radiant flux in typical panchromatic images ([12], for Milan, Italy). Likewise, a street with one lamp every 25 m is indistinguishable from a street with two lamps every 50 m at 100 m resolution, according to the Nyquist sampling theorem. Despite a proposal for a Nightsat mission in 2007 [13], however, there is still no space-based nighttime VIS/NIR mission with a spatial resolution of less than 100 m, multiple spectral bands, and global coverage. For daytime imaging, for example, four spectral bands, typically blue (457-523 nm), green (542-578 nm), red (650-680 nm), and NIR1 (784-900 nm), are recommended ([14], for Sentinel-2), together with a panchromatic band (450-800 nm). Furthermore, the spectral bands red edge (705-740 nm) and NIR2 (960-1040 nm) are suggested. For nighttime imaging, typically the same sensors are used.

In order to determine optimal spectral and radiometric characteristics for dedicated nighttime VIS/NIR imaging, it is important to note that the available data, for example based on airborne campaigns, provide only panchromatic imagery with high radiometric resolution ([15], for Berlin, Germany) or imagery with high spectral but only low radiometric resolution ([16], for Las Vegas, NV, USA). Hence, these sources cannot satisfactorily determine the optimal sensor parameters and performances; instead, an end-to-end sensor simulation with controlled environments is required to perform a realistic and precise examination. The objective of this article is to recommend spectral and radiometric nighttime sensor parameters that are needed to support the remote sensing community's requirements, as well as those of the lighting engineering community and the general public, with a main focus on urban environments and the detection and differentiation of artificial outdoor irradiance sources at the necessarily high spatial resolution. Therefore, Section 2 analyzes the elements affecting the sensor signal, namely the natural (e.g., moon) and artificial (e.g., street light) nighttime radiation sources, their interactions with the surface (M2 and L1) and atmosphere (M1, M3, and L2), and the satellite sensor itself, as illustrated in Figure 1. For daytime, similar considerations were widely investigated, e.g., by [14].
For nighttime, similar considerations have rarely been investigated; an exception is [17], based on spectrometer measurements of outdoor lighting spectra. However, as only light source spectra were taken into account, a large part of the complexity is ignored by neglecting, for example, the variability in surface reflectances, atmospheric composition, and sensor noise. For instance, two identical HPS or LED lamps will look different when illuminating a patch of grass compared to a stretch of asphalt. Similarly, they will look different under hazy conditions compared to a clear night. Additionally, the number of spectral band combinations that the authors considered was limited to eight and does not cover the full range of possibilities. Nevertheless, their recommendations serve as a reference for the present considerations. Here, radiances are constructed which combine spectra from different lighting types, different intensities, different surface types, and different atmospheric compositions. In other words, theoretical reference top-of-atmosphere (TOA) radiances are generated for different conditions to know which signals arrive at a space-based sensor at night.

Section 3 utilizes these data to answer the question of whether it is possible to discriminate between different radiation sources from space-based images, and at what spectral and radiometric resolutions. The complexity exceeds that of the traditional classification task, where the illumination source is known (e.g., sunlight) and the surface object types are unknown. In the nighttime case, the illumination source is also unknown, and furthermore, irradiances produced by artificial lights are sometimes mixed with moonlight. It is, therefore, important to know how different moon characteristics affect TOA radiances and the discrimination of lighting types. Knowing the type of radiation source sheds light on a number of important light characteristics. Some essential considerations of lighting, however, are not linked to lighting type on a one-to-one basis. As a consequence, it is necessary to consider the dominant criteria in the planning of nighttime lighting, and how far these lighting quality indices, namely the luminous efficacy of radiation (with radiant flux and luminous flux), the spectral G index, and the correlated color temperature, are derivable [18]. Hence, the results of the simulations performed for various spectral and radiometric parameters and performances are analyzed; namely, optimal spectral bands are derived (and typical TOA radiances are considered for the optimal radiometric resolution). While the radiances focus on homogeneous single-pixel environments, this approach does not take into account any spatial information, such as a lamp's intensity distribution pattern or the overlapping of different lights. Therefore, the spatial resolution is also discussed. Finally, Section 4 concludes with the findings and recommendations for sensor parameters as well as an overview of future research.

Natural Radiation Sources

There are in fact a number of natural light sources emitting light in the VIS/NIR of the EM spectrum during nighttime. The most prominent of these is the moon, reflecting sunlight arriving at its surface onto Earth. Hence, the moon is actually not a light source in itself, but instead acts as a reflecting object. The intensity of moonlight (0.1 lm/m², cloud-free full moon) is rather small in comparison to sunlight (100,000 lm/m², cloud-free full sun) or artificial lighting (10 lm/m², lighted parking lot).
In contrast to artificial lighting, however, moonlight is not focused, but instead spatially homogeneous across the surface. Therefore, depending on the moon phase angle and with increasing elevation, moonlight becomes significant, even though its intensity is relatively limited. For example, moonlight is crucial in the detection of clouds from DMSP-OLS or NPP-VIIRS-DNB imagery, which is their principal focus; this also explains the low detection limits of these sensors, achieved in part through their coarse spatial resolutions. Furthermore, moonlight facilitates the possibility to observe snow and ice features. Compared to the typical spectra of artificial lighting, moon spectral irradiances are relatively homogeneous across the VIS/NIR (Figure 2a). Most classifications and indices deal well with such offsets. For that reason, and because moonlight contributes relatively little illumination at the fine spatial resolutions considered here (the spectral radiances combine, in particular, one light source and one surface), the moon is not considered. Furthermore, the moon irradiance is modeled straightforwardly [19]. Approximating the surface reflectances using existing daytime imaging, e.g., Sentinel-2 or daytime acquisitions of the nighttime satellite (or, less adequately, assuming a constant average value), and approximating the atmospheric composition using existing operational services, e.g., ECMWF (or, less adequately, assuming a constant standard atmosphere), allows estimating and eliminating this mixture. Further natural nighttime radiation sources, such as auroras, nightglow, lightning, and bioluminescence, either occur rarely or have insufficient intensities to be reasonably detected here. For that reason, they are not considered.

Artificial Radiation Sources

The sources used by humans to produce lighting have changed drastically throughout history, going from open fires to candles and oil lamps, over natural gas, to electrical light. For example, [18] determined that, among artificial light sources, HPS lamps were responsible for about half of the artificial light in the European Union in 2015, although a trend towards the use of LED lamps is to be expected. We give an overview of the most common exterior lighting types used and a description of their principal emission peaks, i.e., those wavelengths at which a particular lighting type emits most of its light.

Fire is a relatively common source of nighttime radiation, either natural, e.g., forest and grass fires or lava, or artificial, e.g., candles and liquid or pressurized gas- and petroleum-based fuel lamps. Fire emission spectra (Figure 2b) are described using Planck's law for blackbodies, whereby other effects on the spectra that are minor here, e.g., the kind of fuel, the oxidation of fuel, or the amount of pressure, are not considered. For example, for liquid fuel, emission peaks at 1350 nm are obtained, whereas mantles for pressurized fuel typically contain rare Earth oxides absorbing infrared radiation to glow white in the visible. For typical fires, color temperatures range between 400 and 1100 K.

Incandescent (Inc.) lamps emit light by heating a tungsten filament inside a vacuum enclosed by a glass bulb. When electricity passes through the filament, it heats up, thereby producing a spectrum similar to that of a blackbody of the same temperature. However, for these bulbs most of the light falls in the infrared part of the spectrum, with an emission peak between 900 and 1050 nm (Figure 2c).
The next five lighting types are gas discharge lamps, which generate radiation by sending electricity through an ionized gas, thereby releasing energy in the form of photons. Different gases typically result in their own characteristic emission lines. The lines are broadened by hot vapor or high pressure due to physical broadening mechanisms.

High- and low-pressure sodium (HPS/LPS) lamps are a kind of gas discharge lamp which use sodium in an excited state. They typically emit a bright yellow-orange light. For HPS, the strongest emission peak is at 819 nm (Figure 2d). Further broadened lines lie between 560 and 620 nm. LPS exhibits a distinct narrow line at 589 nm, due to the absence of line broadening, and a weak emission peak at 819 nm (Figure 2e).

Mercury vapor (MV) lamps are another kind of gas discharge lamp, using mercury and providing a more blue-green color because of their peak emissions between 540 and 580 nm (Figure 2f). In contrast to other discharge lamps, their spectrum additionally resembles the curve of an incandescent lamp, with a peak at 1250 nm.

Metal halide (MH) lamps are similar to mercury vapor lamps, but with an additional mixture of metal halides added to the mercury. Metal halide lamps generally have strong emission peaks at 671 and 819 nm, with other peaks strongly depending on the composition of the halides (Figure 2g).

Fluorescent (Fluor.) lamps are gas discharge lamps at low pressure using fluorescence to produce radiation. Like mercury vapor lamps, they make use of mercury. However, the inner surface of the glass tube in which the gas resides contains a fluorescent coating of phosphors. This results, besides smaller infrared emissions, in two main emission peaks at 544 and 611 nm (Figure 2h).

Warm-white and cold-white light emitting diode (wLED/cLED) lamps consist of one or more LEDs, which are semiconductors releasing photons by radiative recombination when an electrical current is injected. Different kinds of semiconductors are used to create a wide range of colors. Therefore, there are no specific emission peaks for LEDs; however, they are identifiable by relatively symmetrically shaped emission bands and a lack of infrared emissions. White LEDs generally have two primary peaks, one in the blue and another one in the green to red range (Figure 2i for warm-white and Figure 2j for cold-white). If the correlated color temperature (CCT) is at most 4000 K, the lamp is warm-white; otherwise, it is cold-white. As a result of their long lifespan and high efficiency, LEDs are becoming more and more the standard for both indoor and outdoor lighting.

For each of the eight lighting types, we consider the spectra of [17] (by NOAA, the National Oceanic and Atmospheric Administration) and [20] (by GUAIX, the Universidad Complutense de Madrid Group of Extragalactic Astrophysics and Astronomical Instrumentation), interpolated to the range from 350 to 900 nm in steps of 1 nm, as this is the range common to both libraries and comprises the VIS/NIR range of interest with a focus on outdoor street lighting. For fires, the spectra of blackbodies with temperatures of 400 K, 700 K, 900 K, and 1100 K are considered. While it is possible to give an extensive overview of a number of artificial sources emitting radiation in the VIS/NIR part of the EM spectrum during nighttime, relative worldwide frequencies of lighting types are difficult to determine.

Luminance

To determine the intensity of lamps, the European standard (EN) for street lighting is used as reference [21].
Since it is more relevant to know how much reflected light is seen by the human eye than to know the radiant flux of a lamp, standard values for average street luminance are given instead. The required minimum luminance depends on the type of street and ranges between 0.3 and 2.0 cd m⁻². Together with measured data, a maximum luminance of 4.0 cd m⁻² is considered here. From these luminances the corresponding bottom-of-atmosphere (BOA) radiances are deduced. The BOA radiances have to equal the integral over all spectral radiances, and they are computed based on combined lamp spectra and surface reflectances.

Surface

Surface reflectance data are required to generate reflected lamp emissions, namely indirect radiation towards the sensor. For this reason, 18 representative surface types of [22] (by USGS, the United States Geological Survey) are used as the source for surface reflectances (Figure 3, top), including street asphalt, paved brick, road concrete, grass, snow, sand, wood, asphalt roof shingle, and a Spectralon with near-constant reflectance of 99%, which is included as it is, here, similar to direct radiation towards the sensor.

Atmosphere

Radiative transfer is the physical process, by which radiation interacts with the atmospheric constituents, that transforms BOA radiances to TOA radiances. To quantify atmospheric impacts, the code by [23] is used. Since the focus is on cloud-free conditions, only such atmospheres are considered. How to differentiate between cloud-covered and cloud-free conditions, even solely based on panchromatic images, is illustrated by [24], with high accuracy especially for urban areas. Since the focus is on urban areas, urban aerosols and the atmospheric profiles mid-latitude summer, mid-latitude winter, subarctic summer, subarctic winter, and tropical are included to cover a wide range of conditions. Together with visibilities between 10 and 100 km, which strongly determine transmittance, a range of 30-40% between highest and lowest atmospheric transmittance occurs (Figure 3, bottom). Without further assumptions, this also gives an indicator of the possible error range for atmospheric transmittance estimation, which is the largest error source in the estimation of radiant flux [25]. Hence, as illustrated in Figure 4, a TOA radiance is generated for each of the eight considered lighting types by sampling uniformly at random among the corresponding lamp spectra of NOAA and GUAIX, luminances between 0.3 and 4.0 cd m⁻² uniformly at random, as well as surface reflectances selected from USGS, and, for fire, by sampling temperatures uniformly at random between 400 and 1200 K. Finally, one of the five stated atmospheric profiles and a visibility between 10 and 100 km, selected uniformly at random, are applied.

Satellite Sensor

The purpose of the satellite sensor is to produce radiance images on the focal plane array of an optical imaging system, where different spectral bands are sensitive to particular ranges of wavelengths. To compute for a band $B$ the signal $L_B$ that arrives at the sensor by combining spectral radiances $L_\lambda$ of different wavelengths $\lambda$, the weighted average of the effective radiance over the normalized detector bandpass $R_{B,\lambda}$ is considered, namely the band-averaged spectral radiance $L_B = \int R_{B,\lambda}\, L_\lambda \, d\lambda \big/ \int R_{B,\lambda} \, d\lambda$. The detector bandpasses are difficult to synthesize accurately beforehand. As a general rule, however, such detector bandpasses are described by an analytical function which behaves as a combination of a rectangular function and a Gaussian function.
Commonly, a symmetric super-Gaussian function $R_{B,\lambda} = 2^{-\left|2(\lambda - B_{CW})/B_{FWHM}\right|^{k}}$ is used, where $B_{CW}$ and $B_{FWHM}$ represent the center wavelength (CW) and full width at half maximum (FWHM) of band $B$, and $k$ denotes a parameter which defines the shape of the function. For high values of $k$, the function resembles a rectangular function, while for $k$ close to 2 the Gaussian function is approximated. For optical remote sensing purposes, $k = 6$ usually results in realistic detector bandpass functions [14]. For the robustness of the optimization and the producibility of detector bandpasses, we consider CW in steps of 1 nm and FWHM in steps of 5 nm.

The signal that constitutes the at-sensor radiance image does not only contain radiances originating from the already mentioned radiation sources. Additionally, it might include stray light, in case the satellite is directly lit by sunlight, and high-energy particles. Moreover, noise can be introduced during the charge transfer process caused by detectors and electronic devices. Here, the focus lies on radiometric or optical imaging system noise, because other noise sources, such as stray light, are relatively straightforward to model. To compare the desired signal electron number $S_B$ to the noise electron number $N_B$, the signal-to-noise ratio $SNR_B = S_B / N_B$ and the noise-equivalent radiance $NER_B = L_B / SNR_B$ in W m⁻² sr⁻¹ nm⁻¹ are considered. The largest radiometric noise contribution is a result of the random incidence of photons, thereby randomly generating photo-generated electrons, the so-called photon shot noise. Assuming photon shot noise to be dominant and other contributions negligible, the noise electron number is rewritten as $N_B = \sqrt{S_B}$, thereby obeying a Poisson distribution. The signal electron number $S_B$ itself is retrieved by converting the incoming spectral radiance to electron content, $S_B = L_B \cdot A \cdot \pi \cdot \tau \cdot \eta \cdot t \cdot B_{CW} \cdot B_{FWHM} \,/\, \left((4(f/\#)^2 + 1) \cdot h \cdot c\right)$, with $A$ being the detector's effective area, $\tau$ the system's optical transmittance, $\eta$ the quantum efficiency, $t$ the effective integration time, $f/\#$ the f-number, $h$ Planck's constant, and $c$ the speed of light [26].

As a reference for the noise model, the recommendation by [13] on SNR is adopted, i.e., for a photopic spectral band $P$ with $P_{CW} = 560$ nm and $P_{FWHM} = 100$ nm, an $SNR_P = 10$ at a band-averaged spectral radiance $L_P = 2.5 \times 10^{-7}$ W m⁻² sr⁻¹ nm⁻¹. With the assumption of all other variables remaining identical, this results in $NER_B = L_B \cdot P_{CW} \cdot P_{FWHM} / (L_P \cdot B_{CW} \cdot B_{FWHM}) \cdot NER_P$ and allows a noise value taken from a Poisson distribution to be added to $L_B$. Thus, as illustrated in Figure 4, for any given spectral band $B$, the TOA band-averaged spectral radiances $L_B$ are computed from the TOA radiance spectra, and noise taken from a Poisson distribution with mean $NER_B$ is added to the signal in order to end up with a realistic sensor signal. Note that the conversion to a digital number is not considered at this point, because some sensor-specific qualities need to be known before detection limits and saturation values are determined.
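The sensor model above translates into a few lines of R; the following sketch (with hypothetical 1 nm spectra `wl` and `L` on 350-900 nm) implements the super-Gaussian bandpass with $k = 6$, the band-averaged radiance, and the reference-scaled noise-equivalent radiance exactly as given in the text.

```r
## Super-Gaussian bandpass, band-averaged radiance, and NER scaling.
bandpass <- function(wl, cw, fwhm, k = 6) 2^(-abs(2 * (wl - cw) / fwhm)^k)
band_radiance <- function(wl, L, cw, fwhm) {
  R <- bandpass(wl, cw, fwhm)
  sum(R * L) / sum(R)                       # discrete weighted mean, 1 nm steps
}
ner <- function(L_B, cw, fwhm,
                L_P = 2.5e-7, snr_P = 10, cw_P = 560, fwhm_P = 100) {
  (L_P / snr_P) * L_B * cw_P * fwhm_P / (L_P * cw * fwhm)
}
## per the text, shot noise drawn from a Poisson distribution with mean
## ner(...) would then be added to the band signal before quantization
```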
Performance Metrics

For indices, we consider the mean absolute error (MAE) to measure errors, namely $\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - x_i|$, where $y_i$ is the estimated value and $x_i$ the true value. For classifications, we consider the confusion matrix with true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). For example, the number of TP is defined by the number of correctly predicted positives, while the number of TN is defined by the number of correctly predicted negatives. With $\text{recall} = TP/(TP + FN)$ describing the ability to correctly predict all positive instances and $\text{precision} = TP/(TP + FP)$ describing the ratio of correct predictions among those instances that have been predicted positive, the score $F1 = 2 \cdot \text{recall} \cdot \text{precision}/(\text{recall} + \text{precision})$ is considered, as neither measure is reliable by itself. For multiple classes, the F1 is the mean over the F1 scores of all classes.

Results

As it is not effective to share full lamp spectral irradiances, technical descriptions of and management decisions on artificial lighting generally consist of only a limited number of performance parameters or indices. We investigate the most common spectral indices in lighting engineering, based on a report on road lighting and traffic signals of the European Commission [18,21]. These define how much light is emitted, how much of the emitted light is seen by the human eye, or how much light is emitted in the blue part of the EM spectrum. In addition to the discrimination of lighting types, the perceived color of the emitted light is also assessed. For optimization, the MAE concerning the lighting parameters is minimized and the F1 score concerning the lighting classification is maximized based on the at-sensor radiances only; that is, information on the radiation source, luminance, surface, or atmosphere is not considered while estimating the values.

Luminous Efficacy of Radiation

In designing artificial lighting, achieving a high luminous efficacy is crucial. Luminous efficacy rates the amount of visible light that is produced, in lumen, divided by the total amount of electrical power that is required, in watt. Therefore, it is a measure of the efficiency of a particular luminaire system. Not only does luminous efficacy take into account emissions outside of the visual spectrum, but also, for example, decreased luminous output as a result of dirt collecting on the luminaires or electrical power losses in control gears. As luminous efficacy is impossible to estimate without any ground-based information, it is often interchanged with the luminous efficacy of radiation (LER), which is computed as the ratio between luminous flux $\Phi_v = K_{\max} \int_0^\infty V(\lambda)\, \Phi_{e,\lambda}\, d\lambda$ and radiant flux $\Phi_e = \int_0^\infty \Phi_{e,\lambda}\, d\lambda$ [18], with $K_{\max}$ being the greatest luminous efficacy which can theoretically be achieved, at 555 nm, equaling 683 lm W⁻¹, and $V(\lambda)$ the CIE photopic spectral luminous efficiency, or the human eye's relative sensitivity under well-lit conditions.

Thus, a choice of two bands is physically required for the estimation of LER. A panchromatic band B0 is used to estimate the irradiance emitted across the full EM spectrum (i.e., the denominator of the equation) and a (near) luminance band B1 is used to estimate the amount of irradiation which is visible to the human eye (i.e., the numerator of the equation). Band B0 estimates the amount of emitted irradiance, namely the radiant flux, and is itself one of the most important lighting parameters corresponding to a certain luminaire system. The estimation of the radiant flux from band B0, however, is not straightforward, as it depends on a number of different parameters, e.g., surface reflection, atmospheric transmittance, and the ratio of emitted power within the measured spectrum.
As surface reflection and atmospheric transmittance are, in typical situations, essentially constant and affect bands B0 and B1 by similar factors, they basically cancel in the LER ratio. As band B0 covers the full EM spectrum, the signal is high and sensitive to all considered radiation types; it also serves as a panchromatic band and is used for normalization purposes. Whereas it is possible to estimate the radiant flux of lamps, uncertainties remain relatively high. Usually, rather than the radiant flux, it is the required electrical power that is of interest. However, estimating the latter is further complicated by the need for data on electrical power efficacy, which describes the ability to transform electrical power into optical power.

Band B1 estimates the amount of visible light, namely the luminous flux, and is also an important lighting parameter by itself; its detector bandpass function shall closely resemble the photopic spectral luminous efficiency curve $V(\lambda)$. However, the form of a detector bandpass function differs from the form of $V(\lambda)$. Theoretically, the optimal spectral band B1 to resemble $V(\lambda)$ is reached for $B1_{CW} = 561$ nm and $B1_{FWHM} = 121$ nm. Note that the CW differs slightly from the wavelength at which $V(\lambda)$ reaches its maximum, i.e., 555 nm. This is ascribed to the asymmetrical form of $V(\lambda)$, which has a slightly positive skew. It is expected that this shift to higher wavelengths will be less significant in practice, as most lamps have almost no emissions around the larger base of the function, i.e., 650 nm.

The estimated LER, namely $\text{LER} = a \cdot L_{B1}/L_{B0}$, is not obtained from the ratio $L_{B1}/L_{B0}$ alone, as these ratios do not yet represent LER values; they need to be adjusted by applying a multiplication factor $a$, which is theoretically approximated by $a \approx K_{\max} \cdot B1_{FWHM}/B0_{FWHM}$. A practical approximation of $a$, based on the ratios and the LER using least-squares estimation, is preferred. In order to select the optimal parameters for B0 and B1, uniformly distributed sampling is used for the CW and FWHM. Based on the discussions, for B0, CW ranges between 500 and 750 nm and FWHM ranges between 300 and 550 nm, and for B1, CW ranges between 500 and 600 nm and FWHM ranges between 80 and 150 nm. Considering all possible combinations, the optimum is reached for the combination $B0_{CW} = 619$ nm and $B0_{FWHM} = 490$ nm (panchromatic) as well as $B1_{CW} = 556$ nm and $B1_{FWHM} = 125$ nm (green). Note that, as expected, B1 deviates slightly from the values reached by approximating the photopic luminous efficiency function directly. An MAE for LER of 13 lm W⁻¹ is reached. For comparison, the mean LER for the test data equals 307 lm W⁻¹ with a standard deviation of 117 lm W⁻¹. It is important to note that these LER values are slightly higher than they will be in reality, since they are only based on the 350-900 nm range, with data outside this range missing for the spectra. The theoretical estimation of $a$ results in 174, while the practical estimation yields $a = 151$. Applying these analyses to the band B1 recommended by [13], namely $B1_{CW} = 560$ nm and $B1_{FWHM} = 80$ nm, results in an MAE for LER of 46 lm W⁻¹, indicating that the proposed bands offer a significant improvement for the estimation of the efficiency of artificial lighting.
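Assuming the optimized bands and hypothetical reference vectors `ratio_ref` and `ler_ref` (band ratios with known LER values from the simulated spectra), the practical calibration of the factor $a$ and the resulting LER estimate might look as follows, building on the `band_radiance()` helper defined earlier.

```r
## LER = a * L_B1 / L_B0 with a fitted by least squares through the origin.
ler_ratio <- function(wl, L) band_radiance(wl, L, cw = 556, fwhm = 125) /
                             band_radiance(wl, L, cw = 619, fwhm = 490)
a_hat <- coef(lm(ler_ref ~ 0 + ratio_ref))   # practical estimate, approx. 151
ler_est <- a_hat * ler_ratio(wl, L)          # estimated LER in lm W^-1
```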
The range 374-499 nm, however, is not covered by this typical panchromatic band, although it is relevant for more than half of the considered lighting types; e.g., LED lamps produce strong emissions there, whereas HPS lamps do not. Nevertheless, using either the optimized or the typical panchromatic band, estimates of the radiant flux can be derived when the lighting type is taken into account.

Spectral G Index

Light pollution, especially in the blue part of the spectrum, has gained substantial attention. Examples are the disruptive effects of artificial light on the nocturnal behavior of different species as well as on human health [27]. For a long time, the correlated color temperature has been the principal indicator for the amount of emitted blue light, despite its inability to sufficiently describe the spectrum of a lamp. However, the European Commission has published a report in which it recommends the use of the so-called spectral G index instead [18]. It is computed from the total luminous flux divided by the radiant power emitted between 380 and 500 nm, with high values corresponding to low blue light emissions, namely

G = 2.5 · log_10( Φ_v / ∫_380^500 Φ_{e,λ}(λ) dλ ).

Note the similarity between the numerator of this equation and that of LER. Thus, a choice of two bands is physically required for the estimation of G, with a focus on the amount of blue light. The numerator is estimated by the same spectral band B1. The denominator, on the other hand, comprises the sum of emissions between 380 and 500 nm and therefore needs an additional band B2. This sum equals applying a rectangular detector bandpass with a CW of 440 nm and an FWHM of 120 nm. The optimal band for a super-Gaussian detector bandpass function will not, in practice, differ much from these values. For the estimated G, namely G = 2.5 · log_10(a · L_B1/L_B2), the logarithmic form is first ignored in order to more accurately estimate the multiplication factor a. In order to select the optimal band B2, uniformly distributed sampling is used for the CW between 420 and 460 nm and the FWHM between 100 and 140 nm, based on the discussions. As expected, the optimal band B2 closely resembles the mentioned rectangular bandpass, with B2_CW = 443 nm and B2_FWHM = 120 nm (blue). For a factor a = 1.15, an MAE for G of 0.081 is obtained. Comparing this result to the mean of 1.875 and the standard deviation of 1.923 of the data, adding a band in the blue part of the spectrum proves to be beneficial. The bands recommended by [13], occasionally criticized for the lack of a dedicated blue band, only reach an MAE for G of 0.569 with their scotopic band (B2_CW = 502 nm and B2_FWHM = 95 nm). With common criteria suggesting G ≥ 1.5 [18], an error of 0.081 is acceptable in most cases. Note that these accuracy values additionally depend on the characteristics of the sensor, i.e., detection limits, saturation, and the number of bits used for radiometric sampling. Hence, the error will be slightly larger in reality, but remains acceptable for a sensible choice of sensor parameters. With most modern streetlights being non-Planckian radiators, more emphasis is placed on the estimation of the spectral G index than on estimating the correlated color temperature.

Classification

In order to classify different radiation sources into their respective types, sensor data are compared to a spectral library by means of a k-nearest-neighbor (KNN) classification.
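A minimal sketch of the G index computation, under the same illustrative assumptions as in the LER sketch above:

```r
# Sketch: spectral G index of a sampled spectrum (definitions as above;
# the lamp spectrum is again an illustrative stand-in).
lambda <- seq(350, 900, by = 1)
V    <- 1.019 * exp(-285.4 * ((lambda / 1000) - 0.559)^2)
spec <- exp(-0.5 * ((lambda - 600) / 80)^2)            # hypothetical lamp

phi_v    <- 683 * sum(V * spec)                        # luminous flux (relative)
phi_blue <- sum(spec[lambda >= 380 & lambda <= 500])   # 380-500 nm emissions
G <- 2.5 * log10(phi_v / phi_blue)
cat(sprintf("G = %.2f\n", G))                          # high G = little blue light
```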
Put simply, a particular radiation source is labeled with the same class as the majority of its k nearest neighbors in feature space. KNN is used because it is robust. The goal is not to find the best possible classifier; rather, the classification method serves as a comparison measure to judge the usefulness of a particular band combination. When applying KNN, adding features might deteriorate the overall classification performance, even if it improves the classification of one of the classes. For example, if adding a particular band improves the classification of one lighting type, it might generate large distances for other lighting types because of large variances in this band. To overcome these effects, with normalized bands considered as features (i.e., each band signal divided by the signal of the panchromatic band), KNN is applied to each possible combination of features individually. For each lighting type, the best feature combination is determined by retaining the combination with the highest classification performance. To combine the resulting binary one-versus-all classification results, weighted voting is performed, with the classification performances used as weights. In other words, if a particular lamp is classified as multiple types, the type with the best-performing classifier is decisive. If a lamp is not classified as any lighting type, it is labeled as no class. As illustrated in Figure 4, the spectral library includes two selected representative spectra for each of the eight considered lighting types, combined with eight typical surface reflectances and normalized to a luminance of 1 cd m−2. Furthermore, four fires with temperatures equally distributed between 400 and 1100 K are included. Finally, all these spectral radiances are combined with the five stated atmospheric profiles at 20 and 75 km visibility each. For each lighting type, KNN searches for the best possible combination of bands, always including the fixed bands B1 and B2. In order to select the optimal additional bands, uniformly distributed sampling is used for the CW between 350 and 900 nm and the FWHM between 5 and 200 nm. As a reference, the classification performance is given for the case where no additional bands are added, i.e., making use of only bands B0 (for normalization), B1, and B2. Here, a mean F1 score of 0.620 is reached. Using the four bands suggested by [13] instead of B1 and B2, however, a mean F1 score of 0.791 is reached. Thus, it is evident that improvements are possible, and required, by including more bands. Table 1 gives an overview of the F1 scores for the individual radiation source classes, for optimal band selections of 0, 1, and 2 additional bands. An examination of the optimal combination of 3 additional bands shows that the improvements are minimal, with a mean F1 score of 0.917 compared to 0.899 for 2 additional bands. One additional band results in a mean F1 score of 0.802. Hence, the addition of two bands is recommended in order to allow lighting type identification. In the case of one additional band B3, an optimum is reached for B3_CW = 578 nm and B3_FWHM = 15 nm (yellow-orange). The corresponding mean F1 score increases significantly, to a level similar to that of the bands suggested by [13], but with one band less. For example, the largest improvement is seen for mercury vapor, which is now clearly differentiated from other lighting types.
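The one-versus-all KNN scheme with F1-weighted voting can be sketched as follows. The feature matrix and labels are simulated stand-ins, knn() comes from the R 'class' package, and the per-type feature selection step described above is omitted for brevity.

```r
library(class)  # provides knn()

# Sketch: one-vs-all KNN with F1-weighted voting over normalized band features.
set.seed(42)
n <- 200
features <- matrix(runif(n * 4), ncol = 4,
                   dimnames = list(NULL, c("B1", "B2", "B3", "B4")))
labels <- factor(sample(c("HPS", "LED", "fluorescent", "fire"), n, TRUE))

types <- levels(labels)
votes <- matrix(0, nrow = n, ncol = length(types), dimnames = list(NULL, types))

for (t in types) {
  binary <- factor(labels == t)                    # one-vs-all target
  pred   <- knn(features, features, binary, k = 5) # resubstitution for brevity
  tp <- sum(pred == "TRUE" & binary == "TRUE")
  fp <- sum(pred == "TRUE" & binary == "FALSE")
  fn <- sum(pred == "FALSE" & binary == "TRUE")
  f1 <- 2 * tp / (2 * tp + fp + fn)                # classifier weight
  votes[pred == "TRUE", t] <- votes[pred == "TRUE", t] + f1
}

# A lamp positive for several binary classifiers gets the best-weighted type;
# rows with no positive vote are labeled "no class".
decided <- ifelse(rowSums(votes) == 0, "no class",
                  types[max.col(votes, ties.method = "first")])
table(decided)
```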
Although it is expected that the best combination of two additional bands includes a band identical, or at least similar, to the mentioned band B3, this is not necessarily the case. The reason for this is that some parts of the spectra might possess high correlations with other parts of the spectra. In other words, joining the two best-scoring spectral bands does not necessarily result in a better classification performance if they contain similar, correlated information. The determination of two additional bands, therefore, needs to start from the situation with B0, B1, and B2 fixed, and with the detector bandpasses for B3 and B4 being generated jointly. In the case of two additional bands B3 and B4, an optimum is reached for B3_CW = 576 nm and B3_FWHM = 15 nm (yellow-orange) as well as B4_CW = 815 nm and B4_FWHM = 35 nm (near infrared). As expected, B3 does not differ much from the optimal band in the case of one additional band. The largest improvements are seen for the HPS and LED classes, and only two classes generate F1 scores lower than 0.8, i.e., fire and fluorescent. A closer look at the confusion matrix in Table 2 reveals the reasons for these low values and details which types of misclassifications are to be expected. For example, fire is sometimes wrongly classified as an incandescent lamp or as a mercury vapor lamp. The confusion with mercury vapor lamps could probably be resolved by adding a narrow band around 545 nm, where mercury vapor lamps have one of their emission peaks. However, the relatively low occurrences of both classes in urban areas, and the relatively small improvement from introducing a band aimed solely at distinguishing between these two types, do not justify its consideration. Likewise, there is a high correlation between fluorescent and mercury vapor lamps. The reason for this lies in the manufacturing process of a fluorescent lamp: similarly to mercury vapor lamps, fluorescent lamps make use of mercury gas, resulting in nearly identical emission spectra. Adding a narrow band around 610 nm would probably resolve this issue; however, again, its consideration is not justified.

Correlated Color Temperature

In order to assess the perceived color of the light emitted by a particular lamp, its spectrum is compared to a range of blackbody radiators, which follow Planck's law. The absolute temperature of the blackbody that most closely resembles the spectrum of the lamp defines the so-called correlated color temperature (CCT) [18]. It needs to be noted that, while the computation of CCT values is relevant for lamps that closely resemble the spectrum of a Planckian source, e.g., incandescent lamps, it is no longer meaningful for other lighting technologies such as gas discharge or LED lamps. Despite its limited ability to describe a lamp spectrum, CCT remains a widely applied indicator, as it is relatively straightforward to grasp its meaning. Another frequently cited parameter to describe a light source spectrum is the color rendering index (CRI), which expresses a lamp's ability to faithfully reproduce different colors along the spectrum, compared to a blackbody radiator with the same CCT. Typically, incandescent lamps have high CRI values close to the maximum value of 100. LPS lamps, on the other hand, have only one narrow peak in their spectrum and, therefore, yield low CRI values, near 0. As estimating the CRI requires a very high spectral resolution, its estimation is not considered here.
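The per-class F1 scores discussed here follow directly from a confusion matrix; a small R sketch with illustrative counts:

```r
# Sketch: per-class F1 scores from a confusion matrix (rows = true classes,
# columns = predicted classes); the counts are illustrative.
cm <- matrix(c(40,  3,  2,
                5, 45,  0,
                4,  1, 50), nrow = 3, byrow = TRUE,
             dimnames = list(true = c("fire", "fluorescent", "HPS"),
                             pred = c("fire", "fluorescent", "HPS")))

f1_per_class <- sapply(seq_len(nrow(cm)), function(i) {
  tp <- cm[i, i]
  fp <- sum(cm[-i, i])        # predicted class i, but true class differs
  fn <- sum(cm[i, -i])        # true class i, but predicted otherwise
  2 * tp / (2 * tp + fp + fn)
})
names(f1_per_class) <- rownames(cm)
round(f1_per_class, 3); mean(f1_per_class)   # per-class and mean F1
```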
Although it will most likely lose its value as a lighting metric, as it is limited in properly describing a lamp's characteristics, CCT remains a valuable and frequently mentioned specification. For a proper estimation of CCT, a good distribution of spectral bands along the visible spectrum is required. Looking at the bands that are already fixed for LER, G, and the classification of lighting types, B1 and B2 are good candidates to cover the green and blue parts of the spectrum. The part of the spectrum that is not yet covered by the existing bands is the red part. It is, therefore, expected that adding a single band in that wavelength range will significantly decrease the estimation error of CCT. As expected, the optimum is reached for a band that covers the red part of the spectrum, namely B5_CW = 610 nm and B5_FWHM = 75 nm (red). By adding this band, the MAE for the estimated CCT, which is derived from the estimated tristimulus values (X̂, Ŷ, Ẑ) = Σ_{1≤i≤5} (x_i, y_i, z_i) · L_Bi/L_B0 [28], improves significantly from 994 K to 391 K. An additional advantage of including B5 is that it offers the possibility of generating true color imagery, with B2, B1, and B5 corresponding to the blue, green, and red bands.

Performance Analyses

The recommended spectral bands are illustrated in combination with some typical lamp spectra in Figure 5; the panchromatic band (374-864 nm) is omitted there for clarity. What is immediately seen is the ability of bands B3 and B4 to distinguish between different lighting types. Additionally, there is a good spread of the different bands across the VIS/NIR spectrum, except for the wavelengths 650-800 nm. This unsurprisingly coincides exactly with that part of the spectrum where lamps typically emit no light. Due to the nature of nocturnal radiation sources, the choice of spectral bands differs significantly from that of typical daytime optical sensors. This nighttime focus results in rather atypical bands, e.g., the narrow yellow-orange band B3 around 576 nm. A performance comparison of the selected bands with other available band combinations, i.e., the Nightsat proposal, the 10 m bands of Sentinel-2, AC-5, and JL1-3B, is given in Table 3; the bands are illustrated in Figure 6. As the characteristics of the green band of JL1-3B and of photographs taken by astronauts aboard the ISS are the same, the estimated LER are also the same. The blue bands are shifted by only 10 nm, and therefore the estimated spectral G indices are also similar. As the table reveals, a performance improvement is achieved for all relevant indices with the optimized bands. The Nightsat mission proposal, which is the standard reference with respect to nighttime VIS/NIR missions, scores relatively well in certain respects. For example, the classification of radiation source types reaches results similar to the proposal here with three multispectral bands. However, it does not succeed in estimating emissions in the blue part of the spectrum, as reflected by the large MAE for the spectral G index.

Radiometric Resolution

As the conversion from the sensor signal to a digital number, as illustrated in Figure 4, has not been considered so far, unlimited dynamic range and quantization have been assumed. Detectors, however, have a detection limit, saturation, and bit depth.
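A sketch of the CCT estimation from band radiances follows. The per-band weights (x_i, y_i, z_i) are placeholders for the color matching functions integrated over each band, and McCamy's chromaticity-based approximation is used here in place of the exact derivation of [28].

```r
# Sketch: CCT from estimated tristimulus values via band radiances.
L  <- c(B1 = 0.9, B2 = 0.3, B3 = 0.7, B4 = 0.1, B5 = 0.8)   # band radiances
L0 <- 1.0                                                    # panchromatic band
w  <- rbind(x = c(0.35, 0.15, 0.45, 0.00, 0.55),             # placeholder weights
            y = c(0.95, 0.05, 0.60, 0.00, 0.30),
            z = c(0.05, 1.60, 0.00, 0.00, 0.00))

XYZ <- as.vector(w %*% (L / L0))      # (X, Y, Z) = sum_i (x_i, y_i, z_i) * L_Bi/L_B0
xy  <- XYZ[1:2] / sum(XYZ)            # chromaticity coordinates (x, y)

# McCamy's approximation for CCT from chromaticity
n   <- (xy[1] - 0.3320) / (xy[2] - 0.1858)
CCT <- -449 * n^3 + 3525 * n^2 - 6823.3 * n + 5520.33
cat(sprintf("CCT = %.0f K\n", CCT))
```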
Assuming detectors have a linear response, the conversion for a band B from an incoming band-averaged spectral radiance L_B to a digital number DN_B is computed by DN_B = (L_B − offset_B)/gain_B, with offset_B = L_B,min and gain_B = (L_B,max − L_B,min)/DN_B,max, where L_B,min and L_B,max represent the detection limit and the saturation of the sensor, and DN_B,max is the maximum digital number that can be attained, e.g., 255 for 8-bit images. Recommendations for the detection limits and saturation of multispectral nighttime VIS/NIR sensors have been made by [13]. For a photopic spectral band covering 510-610 nm, a detection limit and saturation of 2.5 × 10−7 and 2.5 × 10−1 W m−2 sr−1 nm−1 are recommended. Here, band-averaged spectral radiances for the corresponding band B1 range between 6.3 × 10−8 and 4.4 × 10−5 W m−2 sr−1 nm−1 for the lamps. Note that the lower limit considers land environments; challenges with the limited reflected light in aquatic environments are not covered. Especially the upper limit is significantly lower than the saturation recommended by [13], where the Luxor Sky Beam in Las Vegas, NV, USA, is used as the reference. With a 42-billion-candela tunnel of light, it is the strongest light beam in the world. Assuming a linear response and ignoring less than 2% of all TOA radiances, a detection limit and saturation of 1 × 10−7 and 5 × 10−3 W m−2 sr−1 nm−1 are recommended for band B1 in the case of 16-bit sampling, and 1 × 10−7 and 3 × 10−4 W m−2 sr−1 nm−1 in the case fewer bits are available. The detection limit recommended by [13], on the other hand, more or less conforms to the values computed here. However, setting the detection limit to at most 5 × 10−8 W m−2 sr−1 nm−1 extends the area of operation to the lighting of pedestrian and cycle zones. For the other bands, the distribution of band-averaged spectral radiances follows that of band B1 for the lamps, with marginally lower values for the narrow band B3 and with the exception of two bands, i.e., the blue band B2 and the near infrared band B4. The rather low TOA radiances of band B2, between 1.4 × 10−10 and 2.7 × 10−5 W m−2 sr−1 nm−1, are explained by the fact that some lamps, e.g., high- and low-pressure sodium lamps, barely emit blue light. Ignoring less than 12% of the smallest TOA radiances, and with the importance of blue light emissions in mind even at relatively low TOA radiances, it is recommended that the blue band have a detection limit lower by a factor of 10. Another difference is seen in band B4, where again some lamps, e.g., low-pressure sodium, fluorescent, and LED lamps, barely emit near infrared light, but also rather high TOA radiances of up to 1.8 × 10−3 W m−2 sr−1 nm−1 are computed, belonging to some of the high-pressure sodium lamps. Therefore, ignoring less than 33% of the smallest and 2% of the largest TOA radiances, it is recommended that this band have a higher dynamic range, with a saturation of 5 × 10−3 W m−2 sr−1 nm−1 in the case of 16-bit sampling and 8 × 10−4 W m−2 sr−1 nm−1 in the case fewer bits are available. Increasing the saturation of band B4 is not only useful for high-pressure sodium lamps but additionally increases the detection rate of fire. Although the main focus is on urban areas, the detection of fire is an interesting by-product of a dedicated nighttime VIS/NIR sensor.
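The radiance-to-DN conversion defined above translates directly into code; the following sketch applies the B1 limits recommended for the 16-bit case.

```r
# Sketch: linear radiance-to-DN conversion with clipping, following
# DN_B = (L_B - offset_B) / gain_B; limits for B1 as recommended above.
radiance_to_dn <- function(L, L_min, L_max, bits) {
  dn_max <- 2^bits - 1
  gain   <- (L_max - L_min) / dn_max
  dn     <- round((L - L_min) / gain)
  pmin(pmax(dn, 0), dn_max)        # clip below detection limit / above saturation
}

# Band B1, 16-bit: detection limit 1e-7, saturation 5e-3 W m-2 sr-1 nm-1
L <- c(5e-8, 6.3e-8, 4.4e-5, 1e-2)  # example band-averaged radiances
radiance_to_dn(L, 1e-7, 5e-3, bits = 16)
```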
Performing a similar analysis for fire spectra with temperatures between 400 and 1100 K illustrates that the highest TOA band-averaged radiances are, not surprisingly, located in band B4, in the near infrared part of the spectrum. However, even this band is not able to detect all fires, given the detection limit and saturation recommended for lamps. For a standard atmosphere, a detection limit of 10−7 W m−2 sr−1 nm−1 for B4 roughly corresponds to fires of 550 K, while a saturation of 10−3 W m−2 sr−1 nm−1 for B4 roughly corresponds to a temperature of 750 K. However, the saturation is less of an issue, as TOA radiance values are lower in the panchromatic band B0, for example, thereby not exceeding the saturation threshold. As a consequence, most fires with temperatures exceeding 550 K are detectable by the recommended sensor. These temperatures cover most forest fires, meaning that the proposed spectral bands, with their detection limits and saturation, serve as an additional tool for fire detection programs. Although the VIS/NIR part of the spectrum does not cover the radiation peak of fires, as given by Wien's displacement law, there is an important difference with respect to daytime optical sensors: the low detection limits that are required for nighttime VIS/NIR sensors offer an opportunity to detect the low TOA radiances that are emitted by fires in the VIS/NIR region, typically not visible to daytime VIS/NIR sensors. For a standard atmosphere and a constant albedo of 20%, typical for road surfaces, TOA band-averaged spectral radiances under full moon conditions range between 1.5 × 10−7 W m−2 sr−1 nm−1 for band B2 and 2.5 × 10−7 W m−2 sr−1 nm−1 for band B5. As these values exceed the recommended detection limit, it is necessary to model out moonlight in most cases. With typical albedo values for snow and clouds of around 95% and 70%, their respective TOA radiances range between 5 × 10−7 and 10−6 W m−2 sr−1 nm−1. Thus, under full moon conditions, it is possible to detect both clouds and snow cover. As both effects are extended in size, spatial binning results in detection capabilities even with reduced moonlight. However, in comparison to, e.g., NPP-VIIRS-DNB, the ability to detect such phenomena is limited. The computed performance metrics for LER, spectral G index, classification, and CCT are based on sensor signals before conversion into DN. This means that the results will be slightly worse in a realistic setup, since certain small TOA radiance differences will be lost as a result of radiometric sampling. With the above-mentioned spectral bands and their recommended detection limits and saturation levels, additional analyses are carried out for different bit depths in Table 4.

Table 4. Performance comparison for different bit depths, detection limits, and saturation, considering B0-B5; for smaller bit depths, slightly different detection limits and saturations are considered.

Overall accuracy (OA) is the total number of correctly classified instances divided by the total number of instances. For most bit depths, classification results are more or less stable, with the exception of the 8-bit conversion, which produces significantly deteriorated mean F1 scores. While the conversion to 10-bit still succeeds at classifying most of the radiation sources, the ability to estimate the luminous efficacy of radiation and the spectral G index declines drastically with respect to larger bit depths, with values unacceptable for proper use.
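For orientation, Planck's law gives the steep temperature dependence of fire radiance at the B4 center wavelength. Note that the study's detectability thresholds refer to TOA band-averaged radiances including atmosphere and subpixel fill, which this sketch does not reproduce, so the absolute values are illustrative only.

```r
# Sketch: Planck spectral radiance of a blackbody "fire" at the B4 center
# wavelength; atmosphere, band integration, and subpixel fill are ignored.
planck <- function(lambda_m, T) {
  h <- 6.626e-34; c <- 2.998e8; k <- 1.381e-23
  2 * h * c^2 / lambda_m^5 / (exp(h * c / (lambda_m * k * T)) - 1)  # W m-2 sr-1 m-1
}

lambda_B4 <- 815e-9                      # B4 center wavelength [m]
T <- seq(400, 1100, by = 100)            # fire temperatures [K]
L <- planck(lambda_B4, T) * 1e-9         # convert to W m-2 sr-1 nm-1
data.frame(T_K = T, L_B4 = signif(L, 2)) # radiance rises steeply with temperature
```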
It is, therefore, recommended to apply radiometric sampling of at least 12 bits; higher bit depths do not yield considerably better results. For each of the bands, typical TOA band-averaged spectral radiance values are calculated based on the selected lamp spectra, surface types, luminance recommendations, and atmospheric conditions. It is important to note here that some of these parameters are uniformly distributed between a minimum and a maximum value. Therefore, rather than covering a realistic distribution of values, they reflect a range of possibilities that is evenly distributed.

Spatial Resolution

The light emitted by artificial lighting sources does not only vary with wavelength, but also depends on the direction in which the light is emitted. The spatial resolution that is required for a VIS/NIR nighttime sensor depends entirely on the objective of such a mission, especially on the scale of the objects that need to be detected. For example, if the focus of a mission is on the single-lamp level, a different spatial resolution is required than at the city block level, as for NPP-VIIRS-DNB. However, at such coarse spatial resolutions, the multispectral approach makes little sense, since the signal that arrives at the sensor consists of a multitude of lamp signals and lamp types, rendering the estimation of LER, spectral G index, and radiation type meaningless. Therefore, the focus here is on the single-lamp level. It is sufficient to consider the panchromatic band only, as reducing the spectral resolution as such is not likely to change the detectability of lamps. To arrive at recommendations concerning the spatial resolution of a nighttime VIS/NIR sensor that focuses on artificial lighting, the spacing between different lamps, in combination with their mounting height, plays an important role. Typical values for these variables were derived from lighting engineering standards [21]. This led to three different cases: a spacing of 25 m and mounting heights of 6 m for residential roads (Figure 7a); a spacing of 40 m and mounting heights of 10 m for roads with a mixed function (Figure 7b); and a spacing of 60 m and mounting heights of 18 m for major roads (Figure 7c). Spacing distances lower than 25 m do occur, but are not frequent. A typical luminous intensity distribution pattern is considered, which describes the intensity of emitted light for different directions. According to the Nyquist sampling theorem, the sampling frequency shall be at least twice the highest frequency contained in a signal. Applied here, this means that the required ground sampling distance should equal half of the spacing between neighboring lamps, or less. With a minimum spacing of 25 m, this results in 12.5 m or less. Moreover, neighboring lamps usually possess similar characteristics, which means that the detection of individual lamps is not necessarily required in all cases, and a coarser ground sampling distance is feasible for the multispectral bands than for the panchromatic band. For each of these cases, some sensible spatial resolution options are investigated in Figure 7. Notwithstanding the above-mentioned prediction, road lighting does not behave like a regular point source; the intensity distributions have a major influence on the estimation of lamp positions. Visual inspection, however, confirms the result predicted by the Nyquist theorem, namely that a spatial resolution of 10 m is feasible.
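The Nyquist-based ground sampling distances for the three road cases can be tabulated directly:

```r
# Sketch: required ground sampling distance (GSD) from lamp spacing,
# following the Nyquist criterion (GSD <= spacing / 2).
road_cases <- data.frame(
  road    = c("residential", "mixed function", "major"),
  spacing = c(25, 40, 60),       # lamp spacing [m]
  height  = c(6, 10, 18)         # mounting height [m]
)
road_cases$max_gsd <- road_cases$spacing / 2
road_cases
# The binding case is the 25 m residential spacing: GSD <= 12.5 m,
# consistent with the 10 m resolution found feasible by visual inspection.
```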
As a target of lighting engineering is to create homogeneous illumination patterns on the surface, and as the atmosphere blurs the shape, a spatial resolution of less than 10 m is required in this case to distinguish individual lamps and to separate public from private lighting. For larger roads, such as dual carriageways, not only the most common one-sided and single central arrangements occur, which are sufficiently covered by the investigated single-row arrangement, but also a twin central arrangement (Figure 7d), a two-sided opposite arrangement (Figure 7e), and a two-sided staggered arrangement (Figure 7f). Lamp spacings of 40 m are considered here; a spacing of 25 m is not relevant, as it corresponds to relatively narrow roads in residential areas, for which a single-row arrangement is the preferred option. For the twin central arrangement, a pattern similar to the single central arrangement, with double the number of lamps, is obtained, as neighboring twin lamps are so close to each other that they appear almost identical to single lamps. However, under certain conditions, a thorough pattern analysis at a sampling of at most half the distance between the centers of the light cones of the twin lamps allows the two lamps to be differentiated. Furthermore, as the light cones face each other, the task becomes more complicated. For the two-sided opposite arrangement, a similar situation occurs; however, the light cones are oppositely directed, which allows a slightly coarser spatial resolution while still enabling the discrimination of the lamps. Moreover, since such lamps exhibit identical characteristics, it is acceptable to classify them as a single lamp. Detecting single lamps is considerably easier for a two-sided staggered arrangement. Once more, as all lamps have a distance of more than 25 m from each other due to the carriageway width, a situation comparable to the single-row arrangement with a spacing of 25 m is present. Given the recommended spatial resolution of 10 m and the relatively low detection limits, it is also possible to combine two panchromatic bands. The first band combines a high spatial resolution of 10 m with relaxed detection limits, thereby focusing on lamp detection only, while the second band combines low detection limits with a relaxed spatial resolution, e.g., 20-25 m. This is also sufficient for the multispectral bands, for which a reduced spatial resolution of 40-50 m is still acceptable, as lighting parameters typically do not change from lamp to lamp, but potentially from street to street. However, in this case, a stronger mixture of the lamp spectra with the spectra of residential and industrial lighting as well as vehicle lights has to be considered, also in combination with skyglow [29].

Conclusions

With a focus on spectral characteristics, but also considering radiometric and spatial resolutions, this article performed and analyzed simulations to recommend performance parameters for nocturnal multispectral satellite imagery of urban areas, as summarized in Table 5. The simulations accounted for all major contributions to the signal, namely typical theoretical fire spectra, lamp spectral libraries, standard luminance values for road surfaces, a surface reflectance library, estimations of atmospheric effects, and the sensor. Future research shall generate and consider fire spectral libraries from the visible and near infrared (VIS/NIR) to the thermal infrared to exploit the capabilities of enhanced nighttime satellite imagery for users of fire products.
For urban areas, the most important lighting parameters are considered, namely the luminous efficacy of radiation (LER) (with radiant flux and luminous flux), the spectral G index (G), the classification of lighting types (fire, incandescent lamps, high-pressure sodium lamps, low-pressure sodium lamps, mercury vapor lamps, metal halide lamps, fluorescent lamps, warm-white LED lamps, cold-white LED lamps) (Classif.), and the correlated color temperature (CCT). Reference radiances represent the mean of all considered TOA radiances, and the corresponding signal-to-noise ratio is derived according to Section 2.6, whereas all other parameters are directly derived in Section 3. The next step in this process of simulation is to generate more complex imagery based on measured lamps, measured reflectance spectra, digital surface models, cadastral maps, and modeled cloud conditions, and to add factors such as the moon, residential and industrial lighting, as well as vehicle lights. For this purpose, further ground-, air-, and spaceborne measurements need to be integrated. Such more realistic data allow investigating how mixtures of different radiation sources and spatial resolutions influence, e.g., classification results and radiant flux estimations considering cloud parameters. Finally, the accuracies of the models and the parameters are to be covered in more detail, which is of major importance for applications, too. Nighttime images with high spectral and high spatial resolutions are a relatively unexplored field; the options for future research are, therefore, plentiful. One such area of interest is the estimation of radiant flux, also considering inter- and intra-night changes of emissions, which enables the recognition of changes in human activities; research has already scratched the surface of this topic, and time series of imagery at different scales are being investigated to cover the dynamics of the urban lightscape [30]. Another area of interest is the estimation of the cutoff of lamps, i.e., non-cutoff, semi-cutoff, cutoff, or full-cutoff, to rate the amount of wasted light and glare, a major lighting parameter not covered here; imagery acquired under different tilting angles is being investigated. With the mentioned areas of research only touching a part of the countless opportunities, simulating nocturnal imagery is a major step towards a deeper understanding of nighttime VIS/NIR remote sensing (missions) to reveal insights into the human activities shaping and changing the Earth.
13,916
2020-06-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Feasibility Study for a Solar-Energy Stand-Alone System (S.E.S.A.S.) The present study is aimed to serve a small community living on a Stand-Alone Solar-Energy (S.A.S.E.S.) system. As a basis for the study, 1 cubic meter of hydrogen is to be produced by electrolysis in 5 hrs, which requires an energy input of 5 kWh. The proposed system consists of the following main components: photovoltaic module, water electrolyzer, and fuel cell. Solar hydrogen production by water electrolysis is described and design parameters are specified. The economic feasibility of the proposed system is evaluated. The projected cost of hydrogen is calculated and found to be 5 cents/ft³.

Introduction

The sun blasts Earth with enough energy in one hour (4.3 × 10^20 joules) to provide all of humanity's energy needs for a year (4.1 × 10^20 joules). Solar energy provides electricity via photovoltaic cells. Sunlight reaching the land surface of our planet can produce the equivalent of 1600 times the total energy consumption of the world; the amount of solar energy derived from the sun's radiation on just one square kilometer is about 4000 megawatts, enough to light a small town. With the eventual exhaustion of conventional fuel resources and the aggravation of environmental damage caused by fuel combustion, the use of hydrogen as an energy source and the development of hydrogen energetics is increasingly a major focus of many research laboratories working in the energy sector. To have a Solar-Energy Stand-Alone System (S.E.S.A.S.) that provides power around the clock for a community, solar hydrogen is proposed. Solar hydrogen is simply produced by the electrolysis of water using solar energy (photo-electrolytical); it is one of the promising options among renewable energy sources, as illustrated in Figure 1.

Solar Hydrogen Production by Electrolysis

Hydrogen, the cleanest energy store in the universe, is most of the time associated with high costs, although it is extracted from water, which is the cheapest yet most precious element for life. Electrolysis is one of the acknowledged means of generating chemical products from their native state [1,2]. In other words, make hydrogen while the sun shines; once produced, the stored hydrogen will play a key role in the Solar-Energy Stand-Alone System. Depending on the fraction of hydrogen produced by electrolysis (values can be up to 85%), the amount of electricity required, based on an electrolysis efficiency of 100%, would be close to 40 kWh per kilogram of hydrogen, a number derived from the higher heating value of hydrogen, a physical property. However, today's systems have an efficiency of about 60%-70%, with the DOE's future target at 75%. This can boost the amount of energy required to produce one kilogram of hydrogen from 40 kWh to more than 50 kWh.
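The relation between electrolyzer efficiency and the electricity required per kilogram of hydrogen can be sketched as follows; the 40 kWh/kg anchor is the HHV-based figure quoted above, and the efficiency values are those mentioned in the text.

```r
# Sketch: electricity required per kg of hydrogen as a function of electrolyzer
# efficiency, anchored at ~40 kWh/kg for 100% efficiency (HHV basis, as above).
kwh_per_kg_ideal <- 40                    # approx. HHV of hydrogen [kWh/kg]
eff <- c(0.60, 0.70, 0.75, 1.00)          # today's range, DOE target, ideal
kwh_per_kg <- kwh_per_kg_ideal / eff
data.frame(efficiency = eff, kWh_per_kg = round(kwh_per_kg, 1))
# 60-70% efficiency gives roughly 57-67 kWh/kg, i.e., "more than 50 kWh"
```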
The cost of hydrogen production is an important issue. For comparison, hydrogen produced by steam reforming costs approximately three times the cost of natural gas per unit of energy produced. This means that if natural gas costs $6/million BTU, then hydrogen will be $18/million BTU. While more expensive than steam reforming of natural gas, electrolysis may play an important role in the transition to a hydrogen economy, because small facilities can be built at existing service stations. In addition, electrolysis is well matched to intermittent renewable technologies. Finally, electrolyzers can allow distributed power systems to manage power during peaks. Producing hydrogen by electrolysis with electricity at 5 cents/kWh will cost $28/million BTU, slightly less than two times the cost of hydrogen from natural gas. Note that the cost of hydrogen production from electricity is a linear function of electricity costs, so electricity at 10 cents/kWh means that hydrogen will cost $56/million BTU.

Description of Proposed System

The system is schematically outlined in Figure 2. The main components are: 1) photovoltaic modules; 2) water electrolyzer; 3) fuel cells.

The following features are typical of the proposed system: 1) Electricity provided by the P.V. cells is utilized during the daytime. 2) Electricity provided by the fuel cells is utilized during the nighttime. 3) Fuel cells produce water as a by-product. 4) The O2 produced in water electrolysis could be utilized, instead of air, in the chemical reaction taking place in the fuel cell, as illustrated in Figure 3.

The merits of our multi-purpose system are summarized as follows: 1) Supply of electricity for day use. 2) Supply of electricity for night use. 3) Availability of hydrogen gas for energy generation using fuel cells. 4) Supply of fresh water as a by-product from the fuel cell. 5) Supply of a heat source as a by-product from the fuel cell. 6) The use of the generated electricity to electrolyze seawater to produce hydrogen plus sodium hypochlorite.

The barrier to lowering the price of high-purity hydrogen is the fact that far more than 35 kWh of electricity must be used to generate one kg of hydrogen gas. It takes 60 kWh to make the hydrogen itself; that is a cost of $6.00 per kg if the electric power cost is 10 cents per kWh.

Design Parameters

The proposed study is aimed to serve a small community living on a Stand-Alone Solar-Energy System (S.A.S.E.S.) [3]. As a basis for the study, 1 cubic meter of hydrogen is produced by electrolysis in 5 hrs, which requires an energy input of 5 kWh. The following are the main parameters underlying the study: 1) Two photovoltaic modules, each of 1000 Watt (1 kW), are to be constructed. 2) One module will be used to supply electricity for day use, while the other will supply power to the hydrogen electrolyzer. 3) A 1000 Watt electrolyzer is used. 4) 1 cubic meter of hydrogen is equivalent to 3 kWh (thermally); for practical calculations, 5 kWh will be used instead of 3. 5) In one hour, the 1 kW electrolyzer receives energy of 1 kWh to produce 1/5 cubic meter of hydrogen and half this quantity of oxygen. 7) The average annual sunshine hours for countries in the Middle East is 2500 hrs, as reported by the authors [5].

Economic Feasibility of the Project

To judge the economic feasibility of a project, one has to estimate first the capital investment and the operating costs. Next, a lifetime of the equipment is assumed. Finally, the production cost in $/unit is figured out and compared with the current production cost of the product.
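A short sketch of the linearity argument, using only the figures quoted above (all other cost components are ignored):

```r
# Sketch: hydrogen production cost as a linear function of electricity price,
# using the figures quoted above (illustrative; capital costs ignored).
cost_per_mmbtu <- function(elec_price_cents_kwh) {
  # $28/MMBTU at 5 cents/kWh, scaling linearly with the electricity price
  28 * elec_price_cents_kwh / 5
}
cost_per_mmbtu(c(5, 10))   # -> 28, 56 $/MMBTU, matching the quoted values

# Per-kg view: 60 kWh/kg at 10 cents/kWh gives $6.00/kg, as stated above
60 * 0.10
```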
To come up with a preliminary cost for the produced hydrogen, we concern ourselves with the cost analysis for the electrolysis unit only. The following calculations are presented: 1) The annual production rate of hydrogen from the electrolyzer = 7 (ft³/hr) × 2500 (hr/y) = 17,500 ft³. The cost of electricity produced by the second photovoltaic module is figured out as follows: 1) A 1000 Watt module will produce, in one year, energy equivalent to 1000 Watt × 2500 hr = 2500 kWh.

Discussions and Conclusions

The system presented in this paper offers a practical and simple mode for harnessing the sun to provide energy for a small community. Solar energy is regarded by many as the only ideal energy source, especially for countries in the Middle East located around the so-called "solar belt". Coupling solar energy with hydrogen production along with fuel cells is the main feature of the S.E.S.A.S. Electrolysis, on the other hand, is presently the most practical generation method and offers the greatest promise of meeting the required capital and operating cost objectives without requiring a major technological breakthrough [6]. The cost analysis and feasibility study indicate that the system would be more attractive at scaled-up production.

Figure 3. Function of a typical fuel cell.
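The production figures can be checked in a few lines; the m³-to-ft³ conversion factor is standard, and everything else is taken from the stated design parameters.

```r
# Sketch: checking the annual production figures quoted above.
m3_per_hr  <- 1 / 5                 # electrolyzer output [m3 of H2 per hour]
ft3_per_m3 <- 35.31                 # cubic feet per cubic meter
sun_hr     <- 2500                  # annual sunshine hours (Middle East)

hourly_ft3 <- m3_per_hr * ft3_per_m3          # ~7 ft3/hr, as used in the paper
annual_ft3 <- round(hourly_ft3) * sun_hr      # 7 * 2500 = 17,500 ft3/year
annual_kwh <- 1 * sun_hr                      # 1 kW module over 2500 hr = 2500 kWh
c(hourly_ft3 = hourly_ft3, annual_ft3 = annual_ft3, annual_kwh = annual_kwh)
```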
1,734.6
2012-08-31T00:00:00.000
[ "Engineering", "Environmental Science" ]
Exploration of the investment patterns of potential retail banking customers using two-stage cluster analysis

Identifying investment patterns as part of customer segmentation is one of the most important tasks in retail banking. Clustering customers effectively is an important element of improving marketing policy and strategic planning. There are several methods for identifying similar groups of customers and describing their characteristics to offer them appropriate products. However, the use of machine learning methods is rare, and their application is limited for certain types of data. The aim of this study is to investigate the benefits of using a two-stage clustering method, using neural-network-based Kohonen self-organizing maps followed by hierarchical clustering, for identifying the investment patterns of potential retail banking customers. The unique benefit of this method is the ability to use both categorical and numerical variables at the same time. This research examined 1,542 responses received for an online investment survey, focusing on the questions that are related to the respondents' investment preferences and their current financial assets. The research utilizes descriptive statistics and multiple correspondence analysis (MCA) to understand the variables, and Kohonen self-organizing maps (SOMs), in combination with hierarchical clustering, to identify customer groups and describe the characteristics of these clusters. The analysis was able to identify clusters of potential customers with similar preferences and gained insights into their investment patterns related to their investment portfolio and investment behavior, including their savings profile, attitude to risk-taking, and preferences for investment advice. These findings were supported by additional insights through the application of multiple correspondence analysis (MCA) describing patterns of financial instruments and portfolios. The main contribution of the research is the combined application of the machine learning methods Kohonen SOM, hierarchical clustering, and MCA for investment pattern analysis in the retail banking business.

Investment patterns depend on the resources available to, and the constraints binding, investors. Personal circumstances could be related to the amount of savings, the personality traits of the individuals, or the amount of knowledge they have about investing. Investment patterns can be observed in various financial assets. These include the purchase of securities, monetary or paper (financial) assets, cryptocurrencies, and relatively liquid real assets such as gold, real estate, or art collections. In fact, an "investment pattern is the investment in different investment avenues with vital consideration of risk and return" [1]. Strong organizational relationships with customers are required to run a successful business [2]. Studying customer behavior in the management of investment affairs is in many respects almost a new issue [3]. Every day, many records about customers are created by various agencies in different parts of the world. Structured, semi-structured, and unstructured data are being created at a rapid pace from heterogeneous sources such as reviews, ratings, feedback, trading details, investment data, etc., leading to Big Data. This information generated about customers may hold many frequent patterns that can be filtered and analyzed to provide suggestions about the products, items, or offerings that customers might be interested in.
The internet also offers a vast amount of information about potential customers through a variety of investment websites. These sites attract customers by offering products and services online, tailored to different customer groups, and many of them also provide free consulting services. Online searches also result in a group of products and services with features that the customer can see and check before buying them [4]. In financial and investment companies, the customer relationship management system is a repository of customer information and experience, holding all customer profiles. With this information, it is possible to customize the product or service offerings for each unique customer, based on their personal needs, before they decide to invest. To better manage customer investments, companies need to categorize, cluster, and segment them based on their individual circumstances. "One of the most effective tools to understand consumers' motivation and behavior is segmentation" [5]. Customer segmentation could therefore help to optimize marketing policy and strategic planning to maximize profitability. How the customer uses a product and evaluates it is also important, because it affects reuse behavior. Barczak, Ellen, and Pilling [6] reflected on this by examining the consumption behavior of bank customers after purchase. These are the reasons why researchers are trying to help investment companies and their clients make the best investment decisions based on their characteristics. On the other hand, one of the problems investment companies face is the lack of sufficient information about customers. Therefore, experts try to discover hidden patterns in customer data. In this way, they can provide suitable investment options for customers. There are several approaches that allow investment companies to use all available opportunities, strengthening competition between companies and uncovering hidden knowledge about customers [7,8]. The latest research in artificial intelligence (AI), especially machine learning methods and neural networks, provides new opportunities for analyzing customers' behavior [9][10][11]. Future AI trends in business include forecasting customer behavior [12], predicting customers' responses to direct marketing [13], analyzing customer churn, and predicting market evolution [13,14]. This study aims to explore potential customers' investment patterns to provide them with appropriate products in retail banking. We applied a novel, two-stage clustering approach: combining Kohonen self-organizing maps [15,16] with hierarchical clustering as an unsupervised machine learning method for exploring the respondents' investment characteristics. This approach enables us to use both categorical and numerical variables, which are common in survey instruments. In addition, we used multiple correspondence analysis (MCA) to gain insights into the patterns potential customers use to think about financial instruments and portfolios. This study investigates customers along multiple dimensions that may influence their investment patterns: the respondents' current investment portfolio, the magnitude of their available savings, how they are making ends meet, and their views on appropriate investment products and risk profiles. This research utilizes the responses of customers (as potential investors) to an online investment questionnaire published by a leading Hungarian financial portal. The paper first discusses the theoretical background and reviews the published scientific works.
We performed two types of analysis: an analysis of the subjects of the published articles in relation to the factors influencing investment, and a semantic analysis in relation to the basic concepts related to the research problem. We used the Voyant tool [17] for term analysis and the Yewno tool [18] to find the semantic relations between the basic concepts in the research knowledge map. This is followed by an introduction of the investment questionnaire, the data collected, and the methodology applied. The data analysis and findings sections describe the data dimensions that are relevant from an investment point of view and discuss the customer clusters identified by applying the two-stage clustering method. The paper concludes by summarizing the results and discussing the applicability of the method. Edmondson and McManus [19] stated that theory is developed as an outcome of a study: new ideas that contest conventional wisdom, challenge prior assumptions, integrate prior streams of research to produce a new model, or refine the understanding of a phenomenon. A review of the scientific literature shows that related research studies focus on different aspects of this topic. Here, in addition to summarizing the key concepts of the research topic, we reviewed the past literature related to this study. The following is a general analysis of the subjects and concepts of published articles related to this research. Scientific databases and web analysis tools were used to perform this analysis.

Literature review

Retail banking is the direct provision of banking services to individuals. This area of banking includes a variety of services, such as cash cards, credit cards, debit cards, current and savings accounts, mortgages, and personal loans. These services are provided to customers through various service channels, such as branch chains, ATMs, internet banking, and telephone banking. Retail banking can also be defined as receiving deposits from people and lending them to individuals, firms, and companies. By this definition, banks act as direct financial intermediaries. Customer experience is influenced by the contributions of both the customers and the company. All the events encountered by customers before and after a transaction are part of the customer experience. Keiningham et al. [20] believe that business model innovation (BMI) is crucial for a company's ability to achieve long-term growth and sustainability. Creating innovations to improve the value of products or services, or delivering appropriate offers, helps customers to use products and services more effectively. What customers encounter is personal and may involve sensory, emotional, rational, and physical aspects that create a memorable experience. In retail banking, both investors' experiences and the qualities of investment funds play an important role in the success of the services and products offered by the company. Maklan and Klaus [21] presented the customer experience quality (EXQ) scale to measure the quality of the customer experience. They concluded that key features of the customer experience play an important role in assessing service quality or customer satisfaction in the market. Klaus [22] then explained the conceptualization and implementation of customer experience (CX) quality on the EXQ scale. Kuppelwieser and Klaus [23] developed this scale further and systematically examined the psychometric properties of EXQ.
They studied the nature of the relationships between these dimensions, as well as between the dimensions and their cases. The results of their research showed that customers evaluate and understand experience as a general evaluation and do not differentiate between the meanings of different stages or dimensions of experience. Wewege and Thomsett [24] have addressed this issue with the publication of the third edition of their book titled The Digital Banking Revolution. They described how fintech companies are transforming the retail banking industry through disruptive financial innovation. Innovation in customer segmentation is one of the important issues in the retail banking industry. Marco et al. [25] show that the use of cognitive analytics management is a valid tool to describe new technology implementations for businesses. They found that a self-organizing map better classifies the customer base of a retailer by pairing two machine learning algorithms. Fatima and Sharma [26] identified certain biases affecting investor decision-making and segmented investors accordingly. They used factor analysis, and the findings revealed eight extracted factors affecting investment decisions, with investors tending to fall into the following types: imitator, stereotypical, independent individualist, risk-tolerant, efficient planner, confident, passive, and competent confirmer. Jääskeläinen [27] analyzed customer data from a local retail bank using machine learning to detect the attributes of investors, applying different clustering algorithms. He found that a customer invests if he or she has an investor profile, a higher account balance, a job, and marketing permission. Some authors have examined the impact of various factors that influence customer decisions [9,[28][29][30][31]. Some studies have been conducted on customer clustering and segmentation based on customer behavioral perspectives, customer behavioral factors, demographic factors, and environmental objects [32][33][34][35][36][37][38]. Goncarovs [39] described a five-step customer segmentation method consisting of gathering quantitative information, creating specific microsegments, sorting microsegments, and creating final customer segments. Artificial intelligence and machine learning play an important role in identifying investment patterns for banking innovations, with the addition of risk capital and other emerging technologies also considered by scientists [40]. Boone and Roehm [41] examined the use of artificial neural networks (ANNs) as an alternative means for the segmentation of retail databases. They concluded that ANNs are useful for retailers in market segmentation because they offer more homogeneous segmentation solutions than the mixed model and K-means clustering algorithms and are less sensitive to initial start-up conditions. Ying Li and Feng Lin [42] used data mining methods to segment clients in the securities industry from the perspective of customer value and customer behavior. They believe that the clustering algorithm could be used as a customer segmentation method commonly used in data mining. Li, Wu, and Lin [43] proposed a two-stage clustering algorithm based on a self-organizing feature map, which uses the self-organizing feature map to cluster the raw data initially. Then, the behavioral and value features of the segmented groups of customers were filtered out using a data mining tool. They concluded that, in terms of behavioral features, securities customers are segmented into general, important, and silent types. Bigné et al.
[44] examined neural networks, specifically SOMs, as an alternative to traditional statistical segmentation methods and identified segments in a mature market. The results show the superiority of nonhierarchical clustering and SOM over hierarchical clustering and demonstrate their complementary nature. Mak, Ho, and Ting [45] presented a financial data mining model for extracting customer behavior. They aimed to increase the availability of decision support data and, hence, increase customer satisfaction. Their simulation experiments showed that the proposed method can improve the turnover of a financial company and deepen the understanding of investment behavior. Saluja and Shaikh [46] decoded the investment patterns of large players, such as foreign institutional investors (FIIs) and domestic institutional investors (DIIs), using the decision tree method of machine learning. Chen, Ho, and Liu [47] analyzed whether personality creates significant differences in financial performance. They used investor personalities rather than sentiment, which is difficult to predict because of noise. They applied statistical tests and machine learning algorithms to achieve their goal. Albert, Merunka, & Valette-Florence [48] and Lamprinopoulou & Tregear [49] used the MCA criterion to review and analyze the literature and to distinguish key factors in motivating networks to share knowledge, accelerate innovation, reduce transaction costs, improve reputation, and create new market opportunities. Clustering data with both categorical and numerical variables is a challenge, as many clustering algorithms expect either categorical or numerical data [50]. K-means clustering, which is a widely used method, expects only numerical data, while other methods [51,52] use only categorical variables. One way to overcome this problem is to first cluster the numerical variables, then combine the results with the categorical variables and apply a clustering algorithm that is designed for categorical variables [53]. Another approach to two-stage clustering is the combination of self-organizing maps with K-means clustering [54], which has been proposed for market segmentation. Artificial neural networks, such as self-organizing maps, have demonstrated their learning capabilities; they can process large amounts of data and can handle mixed sets of categorical and numerical variables. The result of the self-organizing map has well-defined distances between the nodes, to which hierarchical clustering can easily be applied to aggregate the nodes into the optimal number of clusters. We applied this method to our dataset. To obtain a general understanding of the research conducted in relation to the "factors affecting investment", we analyzed all English-language articles published in academic journals between 2010 and 2020. We searched for "factors affecting investment" in the "keyword" field using the "SuperSearch" tool available at the library of the Corvinus University of Budapest. We created a document from all the subjects of these articles and then analyzed this document using the Voyant web-based tool [17]. The document was classified into 10 groups based on the factors' relative frequencies. Table 1 shows the factor frequencies in the entire corpus; the factor is the subject in the corpus, while the count is the frequency of the factor in the corpus. To further analyze the relationship between the basic concepts related to our research, we prepared a knowledge map.
For this reason, we used the Yewno Semantic Search Tool [18] to draw a scientific map of the research concepts. The semantic relationship has four main concepts: investment, investment funds, savings, and retail banking, depicted in Fig. 1. There are overlapping concepts between investment and investment funds: investors, stock traders, investment companies, and several investment instruments, such as private equity funds or exchange-traded funds. The various types of funds and investment portfolios are the most important concepts for investment funds, while performance metrics (e.g., earnings per share, stock valuation) are the most important concepts for investment. Retail banking as a concept refers to several types of banks (references to specific companies have been removed from the semantic map), while savings refers to various economic theories around consumption and investment. Our model was formulated with reference to these concepts: the types of investment funds a person finds appropriate are analyzed in relation to the person's saving and consumption patterns, approach to performance and risk, and the types of assets their family holds to provide for the future. In summary, the research background shows that many methods have been used to classify and cluster customers and analyze these clusters; however, the use of AI, especially neural networks, is still rare. There are several research studies on customer experience and investment patterns examining customer experiences from different perspectives, including the impact of demographic factors. Having both categorical and numerical data as a source for clustering presents challenges. The practical use of Kohonen self-organizing maps to overcome these challenges is rare, and their practical application is not well documented for analyzing investment patterns. We propose here a novel approach using Kohonen self-organizing maps in combination with hierarchical clustering and describe the practical application of this method. Our research could help companies develop better investment proposals based on customers' investment patterns using this approach.

Research questions and methodology

This section introduces the methodological approach of the study: the research process, the data collection methods, and the data analysis techniques. It describes how the components are interconnected and form a logical sequence, and it evaluates and justifies the reasons for the proposed methodological options. Our main goal is to demonstrate the effectiveness of the two-stage clustering method using Kohonen self-organizing maps (SOMs) and hierarchical clustering by exploring investment patterns in potential retail banking customers. We explored this through answering three research questions: • Could we identify investment patterns using this clustering method? • Could we describe the important investment factors? • Could we recommend appropriate investment products for potential investors? Prior to selecting the variables for building the machine learning model, we analyzed the variables related to investment and financial awareness. Three variables were excluded from the model: • "Current investment portfolio consists of…" • "Influencing factors of investment decisions" • "Which (fintech) product have you heard of and have you used?"
The following five variables were selected to build the machine learning model and identify clusters of potential customers with similar preferences and investment patterns: • "Which product is appropriate for you based on your opinion?" • "What are the important factors for your long-term savings?" • "How much savings do you have?" • "How do you make ends meet?" • "What investments does your family have to provide financial stability?" Research process and data analysis techniques Our study is experimental and exploratory research. The data collection was both quantitative and qualitative; therefore, a mixed method was used in this study. Details of each research phase are shown in Fig. 2. We used descriptive statistics, principal component analysis, and multiple correspondence analysis to understand the variables that are relevant to investment. This is followed by an unsupervised machine learning method that combines Kohonen SOMs and hierarchical clustering to identify clusters of potential customers with similar investment patterns in the retail banking context. To implement the models, the calculations were carried out in the R environment [55]. Principal component analysis, multiple correspondence analysis, and hierarchical clustering were performed using the 'factoextra' package [56], while Kohonen self-organizing maps were created using the 'kohonen' package [57,58]. The research was conducted in four phases. In phase one, we present a review of the literature and the research background. In phase two, the data were explored using descriptive statistics, principal component analysis, and multiple correspondence analysis to understand the relevant investment factors. In phase three, unsupervised machine learning was used to identify clusters of potential investors. In the last phase, the characteristics of these clusters were analyzed further to describe similarities and differences. The confidence levels of our results are limited by the number of samples and variables available in our research; therefore, the results should be recognized as qualitative rather than quantitative. The identification of customer clusters began with the Kohonen SOM method, as the dataset describing customer behavior included both categorical (e.g., types of financial assets they hold) and numerical data (e.g., amount of discretionary savings they hold). The output of the Kohonen SOM is a map with a predefined number of nodes (in our case 36), holding differing numbers of customers. This initial step helps overcome the problem of having mixed categorical and numerical variables and creates a clustered representation of customers, where the similarity between the nodes (distances from neighbors) can be measured as Euclidean distance. The second step of our method is the hierarchical clustering of the nodes using node distances and the subsequent estimation of the ideal number of final clusters using well-known methods, such as the silhouette method [59], the elbow method, or gap statistics [60]. The combination of these two methods is proposed because it overcomes the problem of mixed categorical and numerical variables, and the Kohonen method can handle large datasets efficiently. Other methods, such as K-means or hierarchical clustering applied directly to the raw data, were discarded due to the nature of our dataset. Our two-step clustering process is illustrated in Fig. 3.
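A hedged sketch of this two-step pipeline, using the R packages named above ('kohonen' and 'factoextra'), might look as follows; object names such as `answers` are assumptions for illustration, and the study's exact training settings are not reproduced.

```r
# Two-step pipeline sketch: SOM compression, then hierarchical clustering of nodes.
# `answers` is an assumed data frame holding the five selected variables.
library(kohonen)    # self-organizing maps
library(factoextra) # silhouette / elbow / gap-statistic helpers

set.seed(1)
M <- scale(model.matrix(~ . - 1, data = answers))  # encode and scale the variables

# Step 1: train a 6 x 6 (36-node) SOM; rlen approximates the reported
# 100,000 training iterations (each rlen presents the full dataset once)
som_fit <- som(M, grid = somgrid(6, 6, topo = "hexagonal"), rlen = 100000)
codes   <- som_fit$codes[[1]]                      # codebook vectors, one per node

# Step 2: estimate the number of clusters on the codebook vectors,
# then cut the dendrogram accordingly
fviz_nbclust(codes, hcut, method = "silhouette")   # also try "wss", "gap_stat"
hc       <- hclust(dist(codes), method = "ward.D2")
node_cl  <- cutree(hc, k = 3)
final_cl <- node_cl[som_fit$unit.classif]          # final cluster per respondent
```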
Data collection, data quality This study uses data that were collected through an online investment questionnaire published by a leading Hungarian financial portal called "Portfolio". The questionnaire is accessible in a web-based format at https://www.portfolio.hu/befektetesi-kerdoiv/?page=1 in the Hungarian language. "Portfolio" is an online financial portal in Hungary with over 15 million user visits per month as of December 2020, ranking as the 28th most visited site in Hungary [61]. "Portfolio" has a distinct emphasis on business, financial, and economic news. In addition to its online media platforms, the enterprise also offers a trading platform and personal analyses of financial markets. The company also has activities in the field of commercial enterprises and organizes annual professional fora in the fields of agriculture, insurance, lending, asset management, corporate finance, capital markets, the car sector, financial IT, and real estate [62]. Our data consist of 1542 responses to the online questionnaire received through the portfolio.hu website in 2019. The investment questionnaire was designed in partnership with Corvinus University of Budapest and the Dorsum company, one of the region's leading providers of innovative investment software. It is the result of joint research aimed at determining how financially conscious their readers are. The web-based investment questionnaire has 74 variables (questions), grouped into seven main sections exploring the respondent's financial awareness and affinity to information technology and novel financial services, as well as collecting data about their demographics (Table 2). The questionnaire starts by asking about the respondent's affinity to information technology and their use of social media and online services, followed by questions about their financial awareness, investment portfolios, investment approaches, risk profiles, and spending and savings habits. The next section of the questionnaire is about the respondent's personality profile, followed by a single open-ended question about short- and long-term financial planning. The next, substantial section is about the respondent's relationship with banks and their understanding and experience with novel banking (fintech) products. The questionnaire concludes with questions about the respondents' demographics. There are categorical variables (questions), either single-choice or multiple-choice; numerical variables measured on a 5-point Likert scale; and unstructured, textual data for the open-ended questions. Data quality was good, and missing data were below 1%, except for some demographic data. A significant portion, 45%, of respondents did not provide information about their age; therefore, we could not use this variable in our models. However, among those who provided this information, 30-34 years was the most frequent age bracket. Interestingly, 7.8% of respondents stated that their age was 15-19 years. Similarly, a poor response rate was received for the highest level of education attained: 77% did not provide this information. There were better responses for other demographic data: most respondents were from the capital city (50%) or from larger cities and towns (20% + 17%). Only 8% were from rural areas. The most frequent current occupation (42%) is "graduate employee", meaning that they are employed by a company, that their occupation requires a graduate degree, and that they do not have management duties.
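A data-quality check of the kind described above could be run along these lines; `answers` is an assumed data frame of responses, not the study's actual object.

```r
# Share of missing answers per question, highest first (illustrative sketch)
missing_share <- sort(colMeans(is.na(answers)), decreasing = TRUE)
round(head(missing_share, 10) * 100, 1)  # ten most incomplete variables, in %
```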
Results and discussion This section provides an overview of the results of descriptive statistics and multiple correspondence analysis. We discuss, among other influencing factors of investment decisions, familiarity with novel financial products, important factors for long-term savings, risk appetite, appropriate investment products based on respondents' opinions, and means to provide financial stability. The confidence levels of the results described below are limited by the limited number of samples and should therefore be recognized as qualitative results. Understanding variables: descriptive statistics and multiple correspondence analysis (MCA) The following section introduces the data using descriptive statistics and MCA. Current investment portfolio Thirteen investment product options were presented to the respondents to select from, indicating what assets they currently hold (multiple-choice option). On average, they reported holding 3 different products (with a range of 1 to 9). Government bonds were the most popular choice, followed in popularity by cash, real estate, and bank accounts (Fig. 4). The multiple correspondence analysis did not reveal any obvious dimensions of preferences among the variables. Influencing factors of investment decisions Respondents were asked to rank their preferences of where they seek advice for investment decisions, with choices including relying on their own opinions or on advice from family and friends. Familiarity with novel financial (fintech) products Respondents were asked to indicate which novel financial (fintech) products they know about or have used. PayPal was the most well-known product; 96% of the respondents knew it, and 75% had used it. It was followed by Simple, a local fintech product used for online and mobile payments (Fig. 6). The MCA analysis revealed that Plus 500 and eToro are rather different products from the others based on the respondents' knowledge and prior experience. This is in line with the nature of the products, the former two being online trading platforms and the others predominantly online payment platforms (Fig. 7). Appropriate investment products based on respondents' opinion Respondents were asked to choose multiple investment products from a list that they think are appropriate for them. On average, they chose 2 out of the 5 options. The most popular was government bonds (74%), followed by individual shares (Fig. 8). The MCA analysis revealed that respondents thought there were two groups of products: government bonds, individual shares, and unit-linked investments on the one hand, and retirement funds and other investment products on the other (Fig. 9). Important factors for your long-term savings and risk appetite Respondents were asked to choose the factors that are important for long-term investment (multiple choice). The opportunity to achieve a high yield was chosen as the most important factor (almost 70%), followed by low risk (Fig. 10). Interestingly, 29% of respondents chose low risk and high yield simultaneously, even though these options are largely mutually exclusive. One explanation of this choice could be that they are seeking a balanced portfolio with multiple products and different risk profiles. The MCA analysis revealed that respondents look at investment cost and risk profile as two different factors: low cost and government subsidy (providing guaranteed income) form one set of factors, while low risk and high yield form the other (Fig. 11).
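An MCA of the kind reported in this section could be reproduced roughly as follows, using the 'FactoMineR' and 'factoextra' packages; the data frame `fintech` (categorical indicators of which products are known or used) is an assumption for illustration, not the study's actual object.

```r
# MCA sketch on categorical survey indicators (all columns must be factors)
library(FactoMineR)
library(factoextra)

mca_fit <- MCA(fintech, graph = FALSE)
fviz_mca_var(mca_fit, repel = TRUE)        # variable map, cf. Fig. 7 in the text
fviz_screeplot(mca_fit, addlabels = TRUE)  # variance explained per dimension
```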
Savings and making ends meet Respondents were asked to indicate how much savings they have and how they make ends meet. Most respondents have significant funds and can make discretionary savings at month's end, indicating that they could be targets for investment products (Fig. 12). Means to provide financial stability The next question was about different means to provide financial stability (multiple choice). Most respondents (64%) indicated having insurance for the real estate they own, followed by having sufficient cash (Fig. 13). The MCA analysis revealed two dimensions of the variables: the first being financial instruments (insurance products and retirement funds) and the second being more traditional support structures: family and cash (Fig. 14). Clustering customers using Kohonen SOMs Customer responses to the five selected questions were analyzed by building a 6 × 6 Kohonen self-organizing map with hexagonal topology. The neural network was trained using 100,000 iterations, followed by hierarchical clustering of the nodes. Using a neural network enables the use of both categorical and numerical variables, using Euclidean distance for the numerical variables and Hamming distance [63] for the categorical variables. As the output of the Kohonen map includes the distance matrix of the nodes, it enables hierarchical clustering. The optimal number of clusters was determined using the average silhouette approach [59,64], suggesting three clusters of customers (Fig. 15). The validity of the method was checked by training the Kohonen SOM repeatedly and checking the differences, while the optimal number of clusters was cross-checked using the gap statistics [60] and the elbow method, which suggested similar values for the optimal number of clusters. The hierarchically clustered nodes of the Kohonen SOM were therefore cut into 3 main clusters, as suggested by the silhouette approach. There was a large cluster containing most respondents (89%) and two smaller clusters (9% and 2%).
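The validation and per-variable inspection steps described here might look like the following sketch, reusing the assumed objects from the earlier pipeline sketch (`codes`, `final_cl`, `answers`); the `savings` column is hypothetical.

```r
# Cross-check the suggested number of clusters with the gap statistic
library(cluster)

gap <- clusGap(codes, hcut, K.max = 8, B = 50)
factoextra::fviz_gap_stat(gap)

# Inspect one input variable per final cluster, in the spirit of the
# box-and-whisker plots discussed below
boxplot(answers$savings ~ final_cl,
        xlab = "Cluster", ylab = "Discretionary savings")
```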
The Kohonen map also makes it possible to analyze the clusters by the individual variables used for creating the self-organizing map. In this way, we could gain insights about the groups to use when selecting products, offerings, or ways of communicating with the clusters of customers. The following section describes the pertinent features of the clusters. Figure 16 shows the box-and-whisker plots for customers' discretionary savings and the funds they are left with after making ends meet. Boxes represent the interquartile range, lines the minimum and maximum values without outliers, and dots the outliers. Respondents from Cluster 1 have significant funds and make ends meet with large discretionary savings. Cluster 2 respondents have little savings; however, they can still make ends meet and are left with sufficient funds; therefore, they could also be targets for investment products. Cluster 3 respondents, the smallest group, have significant savings but make ends meet with the smallest amount (Fig. 16). Insights gained from analyzing the clusters When thinking about financial stability for the future, the differences between the clusters can be described as follows: while Cluster 1 considers all instruments almost equally important, Cluster 2 underplays the importance of cash and liability insurance, and Cluster 3 deemphasizes private health insurance and retirement funds (Fig. 17). The approach toward investment risk and potential yield is rather similar for Clusters 1 and 2. Cluster 3 differs from these two clusters in that they underrate the value of government subsidies received on certain investment products and seem to be the least cost conscious (Fig. 18). Finally, our two-stage clustering method enables us to describe which investment products the three clusters think are appropriate for them. All three clusters regard government bonds as an important element of their portfolio. Cluster 1 would overweight individual shares in their portfolio compared to the other two groups of potential customers, in line with their balanced approach to investment risks and potential yields. The difference between Clusters 2 and 3 lies mainly in how they think about retirement funds, with Cluster 3 underplaying their importance in line with its thinking about providing financial stability (Fig. 19). Summary of insights about the clusters In summary, three clusters could be identified using the two-stage clustering method: • The largest group of the most affluent customers (Cluster 1), who have significant savings, make ends meet easily, and are left with sufficient discretionary savings, think about financial stability using a multitude of means and would be looking at a balanced portfolio of both low-risk government bonds and individual shares with potentially higher yield. • The second group reported having little savings but still making ends meet with sizeable savings potential (Cluster 2). This group of potential customers may overlook the importance of insurance products when thinking about financial stability in the future and would be looking at an investment portfolio that is overweight in government bonds. • The smallest group had significant savings but reported making ends meet with less disposable income than the other two groups (Cluster 3). This group underrates the importance of retirement funds and private health insurance when thinking about financial stability and wants to have the fewest individual shares and retirement funds in their portfolio. In addition to these insights, multiple correspondence analysis highlighted that retirement funds and other investment products are perceived to be rather different from government bonds, individual shares, and unit-linked investment products; therefore, their offering might require different communication strategies. Furthermore, risk and yield are perceived to be rather different from the cost of investment (including subsidies, which may be used to offset investment costs), which could also be used when designing communication protocols. Conclusion The main goal of this research was to demonstrate the effectiveness of the two-stage clustering method to explore and identify investment patterns in potential retail banking customers. The results confirmed that the method is effective in identifying distinct groups of customers and describing their investment patterns and investment factors. The unique feature of this research compared with previous work is the use of a new AI-related approach to targeting "potential customers". This was supported by the application of text analysis and a knowledge map in the literature review to reveal the characteristics of the research field. As a result of the literature review, we found that the ten important factors related to investments are financial, business decision-making, environmental, economic, market, foreign direct investment, management, risk, development, and investors.
Similar to the literature review, our experimental results obtained from the analysis of respondents' data showed that "risk" is an important factor for potential customers. The analysis of the semantic relationships of the basic concepts resulted in a knowledge map. The four main concepts of the knowledge map were investment, investment funds, savings, and retail banking (Fig. 1). The main contribution of this research is that MCA and Kohonen SOM have been combined for the clustering of potential customers. Other researchers have applied Kohonen SOM and MCA as well, but they targeted different problems than ours. Elsäßer and Wirtz [65] examined the success factors of branding in a business-to-business setting and analyzed their performance impact on customer satisfaction and brand loyalty. Lamprinopoulou and Tregear [49] investigated the structure and content of network relations among SME clusters and explored the link to marketing performance. Albert et al. [48] used MCA for respondents who expressed their love for a particular brand; they applied MCA to estimate the coordinates, in a multidimensional space, of the words that express the feeling of love. It is noteworthy that we did not examine existing customer information but rather the data of ordinary respondents, and on this basis, future customers were predicted. We could identify three clusters of respondents, which were described by their current investment patterns and by the investment factors most important to them over the long term. Lai et al. [30] studied seven major factors affecting the decision-making underlying the R&D investment process, along with R&D investment behavior. Hwang et al. [9] estimated the probability of customers' return using a machine learning approach on feedback comments and satisfaction ratings regarding previous usage of the service. Higuchi and Maehara [38] used a factor-cluster analysis to cluster customers. Our investigation also yielded appropriate investment patterns (funds and products) for potential investors. The analysis of the online investment questionnaire highlighted many important insights about the respondents as potential customers that could help to build better relationships with them and to offer more appropriate products to them. In our study, the experiences of respondents as potential customers were analyzed. Klaus and Maklan [21] provided the EXQ scale to measure customer experience quality. A year later, Klaus presented an updated customer experience quality (EXQ) scale that challenged the conceptualization and operationalization of customer experience. Kuppelwieser and Klaus [23] systematically explored the scale's psychometric properties and found that the EXQ scale comprises two or more dimensions rather than one. They explored the nature of the relationships between these dimensions and increased the understanding of the role that customer experience quality plays in different research settings. In our study, analyzing the responses to individual questions showed that government bonds seem to be the most popular assets among the respondents when thinking about investment choices. However, they may also think of a portfolio of assets, wherein selected individual shares, other investment products, and retirement funds would play a role.
Government bonds and individual shares or unit-linked investments would be one set of choices, while retirement funds or other investment products would be the second set of considerations when thinking about appropriate investment portfolios. Respondents consider the opportunity for high yield the most important factor for investment, followed by the low risk and/or low cost associated with this opportunity. Most of them think about more than one factor. Low-risk and high-yield opportunities are understood as potentially mutually exclusive factors, while low cost and government-provided subsidies are considered related factors. Regarding the respondents' current portfolio of investments, most of them claim to have 3 or more types of assets. Government bonds and individual shares are the most popular ones, followed by retirement funds and unit-linked investment products. When thinking about the measures to provide financial stability, insurance for the real estate they may own, sufficient cash savings, and family members they can rely on are considered the most important ones, in this order. Information about demographics was provided with many gaps and missing data, limiting its usability. However, those who provided information about themselves represented most age groups (with significant numbers from the 30-to-34-year-old group). It is also interesting that a significant portion of the respondents were employed as "knowledge workers" ("graduate employees"). Kohonen SOM-based clustering using the questions "Assets the family has to provide financial stability", "Savings the person has", "How to make ends meet", "Appropriate investment products", and "Important investment factors" resulted in three distinct groups of potential customers. When thinking about appropriate investment products, government bonds are always part of their preferred portfolio, with different mixes of other products varying by cluster. When thinking about risks and yield opportunities as factors of investment, the opportunity for high yield is always identified as an important one. The three clusters can be described as follows: The first group of respondents or potential customers, which is the largest group, had significant savings (Cluster 1). They easily make ends meet and maintain their financial stability using various tools. To that end, they seek a balanced set of low-risk government bonds and potentially higher-yield individual shares. The second group reported having little savings, but they still had the potential to make ends meet while saving (Cluster 2). This group of potential customers is likely to overlook the importance of insurance products and to think of an investment portfolio that is overweight in government bonds. The smallest group reported having significant savings (Cluster 3). This group probably discounts the importance of private pension and health insurance funds and is interested in holding the fewest stocks and pension funds. The unique feature of this research is the mixed methodological approach to the analysis of investment patterns. The limitation of the study is that the data were collected in Hungary and the respondents were financially aware and interested in financial issues. Future research includes the application of the combination of Kohonen SOM and MCA for investment pattern analysis. Designing a recommendation system based on the results could be a future research project.
9,178.4
2021-11-02T00:00:00.000
[ "Business", "Computer Science" ]
Personalized and adaptive learning: educational practice and technological impact Education technology advances many aspects of learning. More and more learning is taking place online. Learners' learning behaviors, styles, and performance can be easily profiled through learning analytics, which collects their online learning footprints. This enables and encourages educational research, learning software application development, and online education practices oriented towards personalized and adaptive learning. As we continue to see personalized and adaptive learning progress, we must also pay attention to the negative impacts that feed into our research. In this paper, we present our introspection on personalized and adaptive learning and argue that it is the social and moral responsibility of educators and institutions to apply personalized and adaptive learning wisely in their education practice. Educators and institutions should also recognize the realistic diversity of individual students' learning styles and variable learning progress, contextually dependent learning accessibility, and the corresponding support needs for fine-grained learning activities. We argue that strategically balanced practices and innovative learning technology are crucial for an optimized learning experience for learners. Introduction About 2500 years ago, the Chinese philosopher and educator Confucius implemented his philosophy of "the golden mean" (zhōng yōng) in mentoring his students, which is recognized as the origin of the well-known educational concept of "teaching to the talent" (yīn cái shī jiào). It holds that teaching has to be adjusted according to learners' capability, personality, and interest. This educational concept has been leading educational theory and guiding educational practice for more than two thousand years. However, with the advent of modern classroom teaching, it has become difficult and impractical to implement. Educational institutions have to meet educational mandates and regulations made by governments. Teachers have to face a large number of students in the classroom. Students all have to follow the same educational protocols and procedures and progress through courses designed under the same curriculum. In addition, the same evaluation rules, the same assessment standards, and the same course requirements create many stresses, barriers, and failures for students. Commonly, students have intense and stressful learning experiences in the classroom setting. Thanks to recently advanced educational technology, a new teaching perspective can be implemented. It empowers personalized and adaptive learning in this digital era; by implementing personalized and adaptive learning, it largely removes learning barriers and makes learning easier and friendlier for learners, fostering a lifelong learning process that is indispensable for working and living in society. With the development of education technology, online learning has advanced into a new era of personalized and adaptive learning. By applying emerging information and computing (IC) technologies to learning analytics and machine learning, we can now more precisely and accurately identify an individual learner's style, behavior, strengths, and weaknesses, as well as more effectively and efficiently assess performance and progress.
This is especially true in online education, because big data about learners and their learning are readily available, so personalized and adaptive learning can be easily implemented to accommodate an individual learner's style and behavior in order to largely remove learning barriers and reduce stress in learning. It is aimed at enhancing learners' performance and enabling learners to more effectively achieve the course objectives and program learning outcomes prescribed by the curriculum. Therefore, students could complete their degrees in a learner-friendly and stress-free educational environment. In this paper, we present our introspection on personalized and adaptive learning. In Section 2, we introduce definitions of personalized learning and adaptive learning. In Section 3, we review research and development in educational technology towards personalized and adaptive learning, providing evidence. In Section 4, we present the pros and cons of personalized and adaptive learning and our argument based on our viewpoints and stance. In Section 4.3, we discuss trends and opportunities of personalized and adaptive learning. Finally, we conclude the paper in Section 5. What are Personalized Learning and Adaptive Learning In January 2017, the U.S. Department of Education gave a definition of personalized learning in the 2017 National Education Technology Plan Update: it emphasized that personalized learning should focus on optimizing the pace of learning and the instructional approach for each learner. In the Update, personalized learning is considered a great opportunity empowered by educational technology development. Students' data can be collected and analysed to profile the students and then to provide them with personalized learning objectives, instructional approaches, and contents, as well as the manner and schedule of assessment. Students benefit individually from their own Personalized Learning Plan, which promotes student agency and motivation and makes students' lives much easier (U.S. DEPARTMENT OF EDUCATION, 2017). In this paper, we take the view that adaptive learning uses technology, especially smart technology such as artificial intelligence, machine learning, deep learning, sensor technology, and mobile technology, to develop learning tools or learning application systems that can learn from and adjust to learners' learning profiles, learning methods, learning styles, and learning behaviors in order to provide learners with personalized and adaptive learning contents, learning activities, and learning instructions. It is considered an effective way to give learners a personalized learning experience. Personalization in education The emergence of personalization in education derives from recent teaching needs in today's multicultural and diverse classrooms, with students with different learning styles and rhythms, and from a workplace with high professional demands. Society is developing in such a way that education has to make an effort to offer students adapted training so they can participate in society, live as active citizens, and develop themselves at work with the necessary abilities, skills, and capabilities. These ideas of how learning takes place and the elements necessary to reach it need the scientific grounding of Didactics and Psychology to better understand how learning emerges.
No doubt, the starting point refers to how new input is embedded in the student's mind and, consequently, how students are engaged in task development. This shows how personalization is assumed in teaching-learning processes; students, once engaged in their task, need input to make learning meaningful and significant. In the history of education, an endless number of didactic models have been created and applied to teaching and learning processes. From a diachronic perspective, the most remarkable author who studied how learning takes place is John Dewey in the U.S. context. He was a representative of pragmatism, a philosopher and educationalist, and promoted the so-called reflexive methods, emphasizing that the work of teachers should be to transmit content, but always based on specific and suitable learning methods. In this sense, as students are different and each of them has a different learning style, teachers must offer a personalized education in an attempt to make learning easier and promote a lifelong learning process, as the European Council of Education states. Dewey proposed that the teaching style may start from a selection of problematic situations related to the students' lives and the context in which they grow up, discussion of tasks in groups, formulation of hypotheses for their resolution, development of observations and experiments to collect data that may allow verification of the ideas or hypotheses, and transfer of the results found as part of the learning process. This implies that the teacher should interact with each student and with the groups created to develop simple investigations, avoiding any competition. The teacher, hence, must present new content always from a global perspective, engaging students in experiences to learn. The basis is that he conceived of learning not as something done to the student but rather as something the student creates based on individual experiences and interests, as all students are different (DEWEY, 1971; JOYCE; WEIL, 1986). Dewey followed a child-centred approach to education. Piaget (1970), a psychologist well known for his work on child development and Director of the International Bureau of Education, also followed the student-centered approach to education. He created the constructivist approach, based mainly on the idea that students create new learning from the prior knowledge they already have through new inputs. Students ask questions, do research, reflect on the environment, etc. He insisted on the idea that children learn when playing, through motivation and in a stress-free educational environment. Teachers focus on students' abilities and attitudes and support their curiosity towards new learning. It is also important to create a suitable learning environment for students to feel free and comfortable towards learning. The approaches presented in this section are samples of how education has developed, what its foundations were, and how Didactics was initially framed under specific methodological principles. Hence, the reflexive methods, the student-centered approach, the constructivist model, and experiential learning led to the emergence of the current concept of personalization in education. In fact, the ideas previously presented constitute the basis of this new approach to teaching. It is not a matter of introducing content into the classroom in isolation but rather of awakening students, inspiring them towards the conquest of new learning through motivation and relevant input in suitable learning environments.
These new teaching trends, learning methods, and strategies, thanks to the great and recent expansion of advanced educational technology, allow the personalization of learning and the promotion of adaptive learning to help students achieve a lifelong learning process. Even more, IC technologies contribute greatly to helping students in their learning achievements, as they provide the necessary tools to identify students' learning style, rhythm, and behaviour. Development of adaptive learning Nowadays, distance education is mainly conducted online through web-based Learning Management Systems (LMS), which enable distance education institutions to easily collect students' online footprints and, furthermore, analyse students' learning styles and learning behaviours to profile students. Athabasca University, the Canadian open university, is a pioneering public distance education institution where student learning happens mainly online. We have been conducting research on personalized and adaptive learning for more than ten years. We have developed an adaptive mobile learning research framework (TAN; KINSHUK, et al., 2010, p. 12). Following the framework, a location-based dynamic grouping algorithm and a mobile virtual campus (MVC) system were developed to create a virtual campus enabling online students to meet face to face with their peers, tackling the issue that online students lack peer-to-peer collaboration and inspiration; in this system, students' (users') information, such as learning profile, learning style, learning behaviour, and location information, is collected to group them (TAN; KINSHUK, et al., 2010, p. 55). Later, a location-based adaptive mobile learning management system, the 5R adaptive learning content generation platform, and an Augmented Reality (AR) adaptive learning application were developed based on the 5R adaptation framework (TAN; ZHANG, et al.), with many learning scenarios implementing the 5R adaptation, especially in teaching physical geography courses (AKO-NAI; TAN, et al., 2012). We also studied innovation and personalization issues from e-learning to u-learning and identified that the seamless immersion of formal and informal learning activities can contribute to personalized learning through adaptive learning (TAN; CHANG; KINSHUK, 2015). What are the outcomes of adaptive learning? It is a question that hardly has a concrete answer; major studies on adaptive learning remain inconclusive. However, they show great enthusiasm for and satisfaction with adaptive courseware. Researchers in Ireland have studied how effective adaptive learning is through the implementation of their Adaptemy system and measurement of its impact. They used four measures, pass rates, engagement, grade improvements, and enjoyment, to evaluate the results of adaptive learning. From their research, they found that 88.01% of students' engagement fell within the students' optimal learning experience zone, with only 5.7% of students feeling anxiety and 6.24% feeling bored (LYNCH; GHERGULESCU, 2017). The research provides evidence that a personalized and adaptive learning system does make the learning environment friendly to learners.
Further advanced adaptive learning based on AI and machine learning technologies, which more accurately replicates the one-on-one instructor experience, is called Adaptive Learning 3.0; it enhances personalized learning by strengthening the relationships between learning content, learning objectives, learning activities, learning evaluation, and the individual learner. It aims to achieve a more efficient, effective learning experience through real-time adaptation, data-driven personalized feedback and knowledge reinforcement, learning prediction, mastery of skills and knowledge, and reduction in learning times (WEIR, 2019). From adaptive learning application development to adaptive learning system implementation, the positive outcomes of adaptive learning and the perspective of future development of Adaptive Learning 3.0 suggest that adaptive learning is the way to conduct personalized learning and to give learners an effective, efficient, and friendly learning experience. Big Data and Learning Analytics in Personalized and Adaptive Learning One of the most important steps in implementing personalized and adaptive learning is to really know the person, that is, the learner. The things we need to know about the learner before implementing any effective personalized and adaptive learning may include the following: what the learner wants to learn and wants to be, which is very important in creating a personalized program, courses, specific learning content, and a study plan; the learning style of the learner (does the learner like to read, to watch, or to listen? Does the learner like to attend lectures in person or just learn on his or her own? Does the learner like self-paced learning, or prefer to follow a learning schedule set by teachers?); and the learning progress of the learner in a program and in a course (where is the learner in the program and in the course he or she is taking? How well has he or she performed in the course or program?). This information about the learner and his or her learning is very important for dynamically personalizing the learning content, for example to add further lessons to bridge gaps, or to add more quizzes and assignments to assess the learner or reinforce the learning. In the past, and even in today's classroom-based education, it is rather difficult to know any of the above about a learner. A professor might know a learner very well in his classroom, but it is unlikely that he could prepare and deliver personalized education to even just the few learners he knows well. In today's online education, however, data about every learner are readily available, either in databases or in the logs of the learning management systems the learners are using. Not all these data qualify as big data, but they may be enough to learn about each learner through learning analytics.
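As a rough illustration of how such learner information might be derived from LMS data, the sketch below computes a few profile features in R; the `lms_log` data frame and its columns are hypothetical, not drawn from any particular LMS.

```r
# Deriving simple learner-profile features from hypothetical LMS event logs
# (columns assumed: learner_id, event, timestamp, duration, score)
library(dplyr)

profile <- lms_log %>%
  group_by(learner_id) %>%
  summarise(
    active_days = n_distinct(as.Date(timestamp)),          # learning regularity
    video_time  = sum(duration[event == "video"]),         # preference for watching
    quiz_score  = mean(score[event == "quiz"], na.rm = TRUE)  # progress indicator
  )
```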
What can Personalized and Adaptive Learning offer us? Good training requires a good teaching process. Currently, higher education institutions are modifying their teaching guides and syllabus designs to offer updated training with the necessary pedagogical and competence knowledge. Undoubtedly, educational technology remains unquestioned in this digital era. The worldwide expansion of Information and Communication Technologies (ICT) heightens the need to introduce them to face up to the new socio-educational demands. Educational technology provides the framework to develop teaching that empowers personalized and adaptive learning. We need to bear in mind that classrooms are characterized by increasing multiculturalism and student diversity, and that schools and educational institutions have to be able to adapt to those needs and offer quality educational processes. On the one hand, the image of the teacher as a reflective professional (SCHÖN, 1991) is in full agreement with Dewey's proposals on reflective methods, because reflection, as a heuristic tool, must be present both in teaching and in learning. Dewey's approach to the resolution of problems, or the development of small research projects, can be considered the prelude to the cognitivist assumptions about learning. Strictly speaking, the future of education, especially at the university level, should be fully characterized by the principles of the process-product paradigm (PÉREZ, 1983). Personalized education allows the use of specific methodological principles to provide students with learning adapted to their needs. Adaptive learning complements teaching, as it offers the educational technologies to enhance motivation, complete a competence-based education, and achieve learning outcomes. In the framework of the European Higher Education Area, teaching tends to be much less academic. It is focused on the need to offer students competence-based teaching in face-to-face or blended learning situations in order to train learners for this global and technological world. Teachers must give students the role of active learners, active protagonists of their learning processes, appealing to their intense cognitive activity, facilitated by the logical gradation of the complexity of the proposed activities. If we want meaningful education, we need to provide students with the necessary input and strategies to integrate their prior knowledge with that which is newly acquired, so that the latter becomes meaningful. Thus, we will enhance learning by reception within the constructivist paradigm. In order to transform content into learning, we may pay attention to two paradigms, two models that Didaxis offers us. On the one hand, the mediational teacher-centered paradigm conceives the teacher as a reflective "planner" of his teaching. This implies abandoning the standard models typical of the tradition of the process-product paradigm and instead assuming planning as a process that attempts to assess and prepare adequate attention to the students' needs. No doubt, as teachers, we must plan and program from a prior analysis of the educational needs of the class group (initial assessment). This will offer us the possibility to anticipate the students' responses. Hence, the syllabus design must be contextualized and flexible (SOLER, 2013). In this process of offering students input to build knowledge and acquire skills, the role of guidance is necessary insofar as we must start from a program based on knowledge of the students' level of cognitive development. In this sense, the "formal behaviours" included in the general taxonomies by Bloom (1972), for the field of knowledge, and Krathwohl (1973), for the affective domain, covering the cognitive and affective levels, are basic reference points for the acquisition of basic skills. On the other hand, the constructivist approach to learning (note that Educational Psychology is the epistemological position of the mediational paradigm) contributes to the acquisition of knowledge in such a way that it allows students to build new knowledge and learn by association.
Within this learning theory, we may not consider errors as such when acquiring new knowledge but rather as part of the reception process. The new teaching era requires competence-based teaching in an attempt to offer students quality, up-to-date education so they can engage in the working world with the necessary knowledge and skills. Competence-based teaching integrates cognitive, attitudinal, and axiological processes based on the mediational paradigm. Competences and skills must lead to global and interdisciplinary thinking processes. To be able to acquire knowledge and apply it to different situations, a set of competences is required (SOLER, 2013). As can be seen, teaching models and paradigms have greatly helped the development of learning. The way teachers choose among the paradigms conditions the teaching-learning process and the learner's achievements (DE MIGUEL, 2006). A combination of paradigms with the support and resources offered by adaptive learning really helps in the process. Lifelong learning demands competence-based teaching that enables students to achieve the necessary knowledge, attitudes, values, and competences to participate in society. Educational technology contributes greatly, as it provides technological resources to adapt learning to each student's needs and interests. All these issues call for a suitable learning environment where students learn through a learner-friendly and stress-free education. Benefits offered by personalized and adaptive learning As literally implied by the term, personalized learning means personalizing the learning curriculum, learning contents, learning format, and learning process for each individual learner, adapting to his or her capability, capacity, educational and career goals, interests, learning style, and learning progress in an education program or a course, so that learning can be more effective and efficient for the individual learner. Both society and the individual can benefit from implementing personalized and adaptive learning. As we all agree that people are different, and are often born with different talents, gifted skills, and abilities, it would be in the best interest of society to let each individual grow to be the best he or she can be through personalized and adaptive learning, and to do the best he or she can when working for society. Personalized and adaptive learning, if properly and effectively implemented, would be the best way to maximize the collective human resources of society, and society would be best advanced and developed with this maximized collective manpower. That is how society would benefit greatly from implementing personalized and adaptive learning. For individuals, because personalized and adaptive learning allows each individual to learn what he or she is interested in learning and what he or she is good at, in a way that best fits his or her capacity, capability, learning style, and progress, the learning would be more enjoyable, more efficient, and more effective, and learners would achieve their best. Negative effects of personalized and adaptive learning While embracing personalized learning and taking advantage of adaptive learning, we should also see some negative effects of personalized and adaptive learning. We argue that it is the social and moral responsibility of educators and institutions to apply personalized and adaptive learning responsibly in their education practice.
Here we present our views and arguments on the negative side of personalized and adaptive learning. Stress Management: Learning is not entertainment. Learning under stress is the norm and is even necessary. Stressful events in learning settings happen commonly or frequently because learners usually have to follow a generalized learning protocol, learning process, and learning schedule, and the same evaluation of learning achievement, especially in the classroom setting. Much research in physiology, psychology, and neuroscience has been done to understand how stress affects learning and memory. Researchers have identified "stress and the hormones and neurotransmitters released during and after a stressful event as major modulators of human learning and memory processes, with critical implications for educational contexts" (CHANG; TAN, 2010, p. 25). Facing stressful situations during learning, learners need to learn to manage the stress in order to achieve the course objectives. Hence, we view experiencing stress in learning as important training for learners, even if it is outside the curriculum. Personalized and adaptive learning could largely reduce learners' learning stress, which unfortunately downgrades learners' need to learn to manage stress. When students come out of their learner-friendly educational environment, they will have to face many real-life challenges. Their future occupations are unlikely to be personalized for them, and society will not necessarily adapt to their specific needs. We argue that experiencing stress and difficulties during the training period is an important part of preparation for the real world. Nowadays, personalized and adaptive learning is the fastest-growing field in education. We think it is our social and moral responsibility to address the shortfalls of personalized and adaptive learning and to call for its responsible implementation in education practice. Success Under Constraints: success comes from effort under constraints. Personalized and adaptive learning tends to cater to learners' individual learning behaviours, learning styles, and learning methods with adapted learning contents, which probably makes learning easier and might make learners feel successful in their learning assessments and get higher grades on their learning reports. We argue that forcing or encouraging learners to adapt to whatever learning setting they encounter is training that prepares them to face the realities of life, teaching them to adapt to the environment and to be successful. Time Management and Discipline: following schedules and meeting deadlines are commonly required in people's daily work and life. During the learning process, learners usually need to follow the course schedule, submit assignments on time, and take and pass exams at the same time as the class, which can be stressful and intensive. To be successful in learning, learners must manage time well and have good self-discipline. Through the learning process, learners can be trained to gain these important skills and characteristics of any successful person. Personalized learning allows learners to make their own personalized learning plan (PLP) (CHANG; …). Adaptive learning can fit the learner's schedule and provide a flexible and adaptive learning process for each individual learner. Collaboration and Cooperation: personalized and adaptive learning is relatively easier to implement in online learning settings, such as at distance education institutions.
While technology enables students to learn at any time and anywhere, the drawback is that students lack peer inspiration and opportunities for collaboration (TAN; KINSHUK, et al., 2010). At Athabasca University, we strive to encourage students to collaborate and cooperate with each other through course-designed, required group work. The personalized learning experience largely depends on technology-based adaptive courses. When students are offered adaptive learning courses, they have less motivation and interest in collaborating with others. The students would take advantage of the independence and personalization in their learning but lose the opportunity to be trained as team players. Personal Development: pushing oneself out of one's comfort zone is training that can be done during learning, which helps learners to recognize their potential, face challenges, and strive to be better. If learning always happens within learners' comfort zones, they may lose the opportunity to taste failure and to learn how to handle it. Personalized learning shields learners from competitive learning environments, which may help their self-esteem but leaves them removed from reality. It puts learners into their own bubbles and creates an illusion of self-satisfaction in which they do not see their own weaknesses. We view this as a potentially great negative effect of personalized and adaptive learning. Furthermore, we can also point out other negative effects of personalized and adaptive learning from different perspectives. Fairness in Evaluation: personalized and adaptive learning could make it difficult to give a fair evaluation of students' performance. How can we differentiate the excellent from the weak, and then reward the hardworking and correct laziness, when learners remain in their comfort zones? We argue that it is hard to maintain fairness in learning assessment without a generic measurement and a common evaluation standard, which is likely the case in a personalized and adaptive learning setting. Resource Limitation and Financial Cost: implementing personalized learning requires creating a massive amount of learning materials and learning activities to meet or adapt to many learners' personalized needs, which could consume considerable resources and might not be financially affordable for educational institutions. Therefore, we view personalized and adaptive learning as a costly approach. The cost will eventually be reflected in learners' tuition or require a higher education budget from the government. Personal Privacy and Information Security: to implement personalized and adaptive learning, the fundamental requirement is the ability to collect learners' online footprints and learning information. Personal online learning data will be turned into personally identifiable information in order to profile each individual learner. With AI- and machine-learning-empowered big data analytics, even data collected only from learners' online learning footprints can easily yield a learner's private information, which could violate learners' personal privacy. We suggest that learners' privacy protection and information security will add extra responsibility and cost to institutions and generate great risk and legal liability. The perspective of personalized and adaptive learning Technically, personalized and adaptive learning merits further research and development to improve learners' learning experience.
In the specific implementation of a learning system, however, depending on realistic capacity and needs, a strategic balance between practices and innovative learning technology must be considered to diminish the potential negative effects addressed in Section 4. Orthogonal Architecture for Personalized and Adaptive Features: In computer science, separation of concerns is a design principle for separating a computer program into distinct sections such that each section addresses a separate concern. A concern is a set of information that affects the program. The personalized and adaptive features of a learning system can be organized as well-separated concerns in the learning components by hiding the implementation details of the learning component modules behind personalized interfaces and interactions. The separation of concerns results in more degrees of freedom for program design, deployment, usage, security and access control, etc.
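As a toy illustration of this separation-of-concerns idea (our sketch, not a design proposed by any cited system), the profiling concern below is hidden behind a small interface, so that content selection depends only on that interface and not on the analytics details; all names are hypothetical.

```r
# Profiling concern: internal analytics details stay hidden inside this function
make_profiler <- function(log) {
  style <- if (mean(log$video_time) > mean(log$reading_time)) "visual" else "textual"
  list(learning_style = function() style)  # the only exposed interface
}

# Content-selection concern: depends only on the interface above,
# so the profiling implementation can change without touching this code
select_content <- function(profiler, lessons) {
  lessons[lessons$format == profiler$learning_style(), ]
}
```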
Learning Analytics Enhanced Personalized Learning: Recent advances in big data and smart learning analytics technology promise potential for data-driven cognitive technologies at fine-grained levels of learners' learning and cognitive processes, where personalized learning behaviour and characteristics typically appear. Lodge and Lewis (2012) suggested that behavioural data alone cannot be used to determine the quality of learning. The use of behavioural data to understand student learning is far from a novel approach. Even if learning analytics may only provide a limited answer to improving learning, the technology may help bridge some gaps between education, psychology, and neuroscience by providing deeper insight into student behaviour as they learn in real educational settings (LODGE; CORRIN, 2017). Learning analytics helps identify and enhance every aspect of each student's learning and development. Big-data-facilitated learning analytics helps reveal latent interrelations among personalized learning activities, along with variance in the time, place, and pace of learning for each student. Blockchains and Smart Contracts: The blockchain and the smart contract are disruptive technologies that could transform information handling within a decentralized mechanism for personalized and disruptive learning. Blockchain may help support decentralised information systems for educational records, reputation and rewards, especially where personalized learning processes demand critical access control and ownership of the daily information generated through personal learning activities. Smart contracts can be defined as computer protocols that digitally facilitate, verify, and enforce contracts made between two or more parties on a blockchain. In the context of a learning environment, smart contracts may help implement various assignments, assigned personalized learning actions, and automatic marking and learning assessment. Conclusions The evolution of society, economy, politics… in this global world conditions the way we must educate students. In this 21st-century society, students are born with a tablet under their arms. The use of technology is so widespread that schools necessarily need to offer digital training. Always understood as educative tools, education technology provides the classroom with the necessary strategies to develop learning. It also offers support, as there is a wide range of students, and each one has their own learning rhythm, style, interests, etc. Particularly in this new teaching scenario, where the need to focus teaching on technology is constant and allows progression and learning achievements, a set of methodological principles and methods is also required. In order to offer students suitable training, methodology has to be planned, selected and developed in advance. On the other hand, the growing multiculturalism and diversity among students also requires new teaching styles to offer quality in the didactic processes. In this paper, we have provided introspection on how personalized education and adaptive learning help to reach efficient learning in the current teaching context. On the one hand, personalized education makes it possible to meet students' needs and interests, making teaching something already created for each one of them. It allows offering them the input they need to be able to keep on learning; that is to say, it helps to promote lifelong learning. On the other hand, adaptive learning provides the necessary teaching framework to allow different learning profiles, learning styles and learning behaviours to access information and face-to-face interaction. The learning scenarios that can be offered with adaptive learning are endless and can highly contribute to the personalization of education, enhancing e-learning, u-learning… In reference countries such as Canada or Ireland, adaptive learning is highly recommended. The main goals to be achieved through the implementation of personalization in education and adaptive learning are to offer students better learning scenarios with updated strategies, techniques and procedures. All the elements that constitute the didactic process contribute to students' outcomes and achievements, providing a learner-friendly and stress-free education. They also help to offer them competency-based training, enabling them to transfer knowledge to the skills required in the workplace. The ultimate goal of education is to train students to be able to live in society and develop as citizens. Competences contribute to reaching the necessary lifelong learning.
7,522.2
2021-09-14T00:00:00.000
[ "Education", "Computer Science" ]
Foresight as a Tool for the Planning and Implementation of Visions for Smart City Development: Global change, including population growth, economic development and climate change, constitutes an urgent challenge for the smart cities of the 21st century. Cities need to effectively manage their development and meet challenges that have a significant impact on their economic activity, as well as the health and quality of life of their citizens. In the context of continuous change, city decision-makers are constantly looking for new smart tools to tackle it. This article addresses this gap, indicating foresight as an effective tool that anticipates the future of a smart city. Its aim is to develop a methodology for planning and implementing a vision of smart city development based on foresight research. The proposed methodology consists of five stages and was developed with the use of a methodology for designing hybrid systems. It is an organised, transparent and flexible process which can facilitate the development of sustainable and smart future visions of smart city development by virtue of the involvement, knowledge and experience of a large number of urban stakeholders at all stages of its creation. The article discusses in detail the operationalisation of each stage of the methodology, in which the following main methods were used: megatrend analysis; the analysis of social (S), technological (T), economic (E), ecological (E), political (P), values-related (V) and legal (L) factors (STEEPVL); structural analysis; Delphi; creative visioning; scenarios; and the identification of actions related to the development of a smart city, divided into four categories: new, so far not undertaken (N); implemented so far, to be continued (C); redundant, to be discontinued (R); and actions that have been implemented in the past and are to be restored (R) (NCRR). The summary enumerates the benefits that foresight implementation can bring to the smart city. Introduction Together with the growing popularity of the smart city concept, numerous new projects have appeared based on innovative technologies helping cities to solve emerging problems of technical infrastructure [1,2], pollution and environmental protection [3,4], waste management [5,6], spatial development [7,8], logistics and urban transport [9,10], an ageing society [11,12], poverty [13,14] and low levels of involvement of residents in public affairs [15,16]. However, many of them fail to satisfy their initial objectives due to the fact that they are not adapted to the complexity, diversity and uncertainty characterising contemporary cities [17]. They are excessively focused on investing in advanced technologies, often without real recognition of the problems in cities and without the involvement of the local community in city co-management [18][19][20][21]. A smart city should ultimately aim to become a creative, sustainable area, providing a high standard of living, a friendly environment and broad economic development prospects [22]. Cities can be defined as smart if they include the following elements: smart economy, smart mobility, smart environment, smart people, smart living and smart governance [23,24]. The literature, however, provides a number of definitions that are mainly focused on the technological aspect [25][26][27][28][29][30]. It should be noted that only by taking due account of other areas and their mutual relations will smart cities be able to achieve effective implementation and long-term success [31,32].
A city cannot be considered smart if it is based solely on technology [33]. The holistic approach to a smart city focuses on people and their needs, with technology playing a supporting role. Its most important aspect involves determining whether new technologies constitute better solutions to problems related to city development than previously available technologies [34]. The needs and preferences of citizens, social interactions and cooperation should be at the heart of designing smart solutions for cities. Technical solutions and infrastructure should serve the interests of the people who live and work there. A modern city is based on the citizen and his or her specific characteristics and abilities [35,36]. The success of a smart city is largely based on the adoption and use of smart solutions by citizens, supporting decision-making and encouraging behavioural change [37]. Its residents will be the end users of these solutions, hence they must have a clear positive impact on their daily life [38]. In the new generation of cities ('Smart City 2.0'), infrastructure and technologies are no longer the focus of development. It is the wisdom of citizens, visitors and businesses in interconnected ecosystems that engage in social inclusion activities that plays a significant role [39]. A city is smart when it is managed in a smart, efficient and sustainable way [40]. Through this perspective, the distinguishing features of future cities should be the following [41][42][43][44]:
• Sustainable development-through the achievement of social and economic stability and effective, multifaceted management of the urban environment;
• Orientation towards citizens-planning should be focused on the needs of citizens, including elements such as life satisfaction, physical and mental health, level of independence, education, social relations, and cultural diversity;
• Effectiveness, attractiveness, and dynamics-these are necessary to attract investment and stimulate entrepreneurship;
• Accessibility-to enable local communities to participate in all aspects of city life;
• Resilience to crises and shocks, flexibility and competitiveness-the city should have the potential to adapt to changing social, economic and cultural conditions;
• Good governance-the city should make optimal use of its resources in order to effectively implement short- and long-term development programmes;
• Responsiveness-it should have a rationally developed digital infrastructure to respond to emerging problems and make appropriate decisions in real time;
• Future-oriented-through appropriate city resource planning and management.
Therefore, decision-makers are challenged with a search for new, often unconventional tools that enable effective management of the sustainable development of a smart city and provide an opportunity to avoid threats resulting from the increasing complexity and uncertainty of the environment. According to the literature review, the development of new smart city solutions involves a growing use of tools that involve many stakeholders, including end users, in working together to create mutual and shared benefits [45,46]. Foresight may provide an answer to the existing need to create a vision for the development of a smart city which will enable sustainable development. Among the many definitions of foresight, the best-known are those proposed by Martin and Georghiou [47].
According to Martin, foresight is a process involving systematic attempts to envisage the long-term future of science, technology, the economy and society. This process aims to identify strategic areas of science and technology that should provide maximum economic and social benefits [48]. Georghiou, on the other hand, presents foresight as a systematic way to assess the development of science and technology, which can clearly affect industrial competitiveness, generate wealth and boost quality of life [49]. Foresight was first applied by the U.S. Army during World War II in order to enhance preparations for the "unpredictable" moves of the enemy [50]. However, since the second half of the 1960s, foresight methods have been used and improved to predict the development of technology in large industrial corporations in the United States, including the energy sector [47]. As a result of its observed range of benefits, foresight was also used at the national level in the public sector. It was recognized as a process enabling the construction of development strategies, addressing global challenges and shaping the long-term policies of many large and small countries, e.g., the USA, Japan, Great Britain, Germany, France, Austria, Poland, Hungary, the Czech Republic, Ukraine, Spain, Mexico and Peru [49][50][51][52]. At present, it is gaining popularity at the regional [50,51] as well as the urban level, in such countries as the USA [53], Great Britain [54], Spain [55] and Poland [56]. Despite the number of benefits that arise from the use of foresight in relation to the development of a country, region or city, thus far no foresight project focused on smart city development has been initiated. Smart city foresight can be defined as a systematic process based on the participation of a wide range of stakeholders. It is a process that co-creates coherent visions of the city in order to effectively manage future long-term changes and create opportunities for sustainable development of the city in the following aspects: economy, mobility, environment, people, living and governance. Among the advantages of using this tool, the following deserve special consideration:
• It allows us to build a long-term development perspective for the smart city;
• It enables the detection of problems related to contemporary challenges of the smart city before they appear, and enables us to take preventive measures;
• It allows for an assessment of the consequences of current actions and decisions taken in the smart city;
• It enables a broadly understood creation of a smart city, anticipating the future and making the most effective solutions;
• It involves a large number of stakeholders in the development of the smart city, by means of which it will support the development of a vision of the future and development priorities that can be implemented;
• It creates an open ground for discussing the future of smart city development and allows a consensus to be reached in cases of divergent opinions and expectations among stakeholders;
• Due to the application of various research methods, it enables the confrontation of the views of many stakeholders of smart city development;
• It strives to establish cooperation between stakeholders and arouses their sense of responsibility for the implementation of the results achieved.
In connection with the above-mentioned reasons for using foresight in the smart city concept, the article incorporates the author's research methodology.
Its aim is to develop a methodology for planning and implementing a vision of smart city development based on foresight research. Foresight is intended to be an organised, transparent and flexible process which will facilitate the development of sustainable and intelligent future visions for the development of a smart city by means of the engagement, knowledge and experience of a wide range of urban stakeholders at all stages of its development. Ultimately, all stakeholders will be able to understand and use it efficiently and subsequently implement its results. The article has the following structure. Section 2 presents issues related to foresight-its definitions and evolution, as well as the methods used in the research process. Section 3 describes the methodology of designing hybrid systems used to develop the methodology of planning and implementation of the vision of smart city development. Section 4 is dedicated to the detailed characteristics of the developed methodology of planning and implementation of the vision of smart city development. Section 5 presents conclusions on the benefits of using foresight research in planning smart city development, and enumerates the limitations of the research, as well as directions for future scientific work. Fundamentals of Foresight In 1985, Coates made an attempt to define foresight and described it as a process offering a full understanding of the forces shaping the distant future, which should be taken into account in formulating policy, planning and decision making. This process monitors signals of emerging trends that have considerable implications for policy. This makes the implementation of policies more appropriate, flexible and effective in the context of time and changing operating conditions [57]. Miles defines it as the equivalent of a stream of systematic efforts aimed at looking towards the future and making the most effective choices. However, foresight assumes that there is no single future. Depending on present-day action or inaction, many variants of the future are possible, but only one of them will come into existence [58]. Saritas and Loveridge point to the need to involve new stakeholder groups in research, going beyond traditional research area experts [59]. According to the definition containing an element of social anticipation of future changes (created within the Foresight for Regional Development Network (FOREN) project), foresight is a systematic, participatory process, focused on gathering knowledge about the future and building a medium-term and long-term vision, oriented towards today's decisions and mobilising joint activities [60,61]. In the context of this definition, five main elements of foresight were also identified: anticipation and future design, participation, social networks, strategic vision and current decisions, as well as action [61]. Cassingena Harper also attaches great importance to participation and consensus-building. This distinguishes foresight from other future-oriented approaches. The foresight process is based on intensive iterative periods of open reflection, networking, consultation and discussion, which are necessary to create a common vision for the future and a sense of ownership of the developed strategy. It discovers common space for open thinking about the future and the incubation of strategy [47]. Anderson expresses a similar view, pointing out that foresight involves shaping the future through the concerted actions of self-sustaining networks of interested groups [62].
Foresight is a set of tools that facilitate the construction of scenarios for development over a relatively long period of time (usually 10-30 years), as well as in cases of development that are difficult to predict [63]. It is an attempt to collectively anticipate important factors as well as threats that may affect the future of society [64]. Foresight should not be dominated by science and technology alone-socio-economic factors also play an important role here, having a significant impact on innovation, wealth creation and quality of life [47][48][49][65][66]. It is a deliberately structured process that combines the expectations of different actors in order to formulate strategies for the future [66]. It fosters a dialogue between process participants and provides a framework for communication and sharing opinions on possible future scenarios [67]. It establishes a language of social debate and creates a culture that involves society's considerations towards the future [50]. In the course of the preparation and implementation of foresight projects, entrepreneurs, scientists, representatives of public administration, non-governmental and social organisations, as well as politicians, participate in the conducted analyses and evaluations. These participants directly deal with science, as well as the economy and its regulations, thus ensuring a substantively accurate description of problems and indicating the possibilities of solving them. Foresight should be carried out through iterative, gradual and even experimental tasks, as a result of which stakeholders will become more aware of future opportunities and, at the same time, commit themselves to take actions that reflect their better understanding [68]. In summary, the foresight process can be synthetically characterised by means of a description proposed by Martin, referred to as '5K' [69]:
• Communication-a platform enabling communication between partners involved in the process, as well as the flow of information between organisers and the public concerned by the results of the undertaken initiative;
• Focus on the long-term perspective-focusing on the development of the future and thinking systematically about long-term processes;
• Coordination-partnership-based management of the knowledge generated by the project, as well as, at the organisational level, managing activities within the implemented process;
• Consensus-reaching an agreement in cases of diverging opinions and expectations among the participants of the process, which may result from a loss of compatibility of objectives or visions for the development of the research area, as well as in the context of the obtained results requiring implementation after completion;
• Consistency-systematic involvement of stakeholders in the long-term process in order to generate the expertise necessary for executing different stages of the process, while eliminating problems associated with obstacles to participation in the initiative related to the performance of professional duties.
A typical downside of foresight and future studies is a disconnection or mismatch between the outcome of the process and its application in the studied area (country, region, industry, company). There is a 'linear' (1.0) model of foresight within a deterministic frame. An 'evolutionary' (2.0) model of foresight is more focused on adaptive innovation.
In contrast, there is a 'co-evolutionary' 3.0 model of synergistic foresight: focused on co-learning, co-creation and co-intelligence, not only within the foresight programme but also across a wider city and its economies, governances and technologies [18]. The term 'synergistic' can be ascribed to science and art, understanding and working in synergy, which literally means 'working together' [70]. Synergies exist between people, organisations, communities, economies, political systems and technological systems. Synergistic foresight focuses on synergic features in four main dimensions-subject, process, agenda and object. Synergistic foresight presents a more open, multifaceted, interconnected and co-intelligent way of working with many fields, with equal priorities for social, technical, political or cultural areas, worldviews and systems of values. At the same time, the process itself is based on a practical, gradual method of research and deliberation, adapted to the challenge of cognitive complexity. For this purpose, synergistic foresight operates in the '4S' cycle, at four main stages. In the co-evolutionary process, each of the four stages involves placing a stress on the principles of synergistic co-intelligence. The 'system mapping' stage focuses on collaborative 'co-learning'. The 'scenario mapping' stage induces collaborative thinking (i.e., 'co-thinking') to look beyond immediate trends, towards bigger pictures of change and uncertainty. The third stage, 'synergy mapping', focuses on co-creation and co-innovation for system transformation. Lastly, the fourth stage of 'strategy/road-mapping' reflects on the implications for action in co-production that leads towards co-intelligence [71]. Ravetz and Miles point out that the 3.0 model is not 'better' or more advanced than 1.0-2.0 type foresight. The 3.0 model may be suited to different kinds of problems, less linear and bounded as well as more co-evolutionary and transformational. The 3.0 model does not replace the 1.0-2.0 versions, but it can work better as a parallel and complementary layer [71]. In the context of city development, foresight focuses on the need to create a coherent vision of the city in order to plan and manage future long-term changes and create opportunities for new investments in the local urban economy [72]. The first foresight project at the city level was launched in the 1980s in the city-state of Singapore [73], followed by Atlanta in the USA [53]; Birmingham, Bristol, Cambridge, Lancaster, Liverpool, Manchester, Milton Keynes, Newcastle, Reading and Rochdale in the United Kingdom [54,72]; Spain [55]; Konin, Lublin and Wrocław in Poland [56,74,75]; Rustavi in Georgia [76]; and Bulungan in Borneo [7]. These projects have so far involved attempts to combine research on the future, urban research and ecological thinking with parallel analyses of complex systems and innovations, as well as technology assessment [71]. Urban foresight mainly focuses on creating a coherent vision of the city to plan and manage future long-term changes [7,53,54,56,72,74-76]. Foresight in relation to cities has also been applied in such specific research areas as urban spatial planning [55], demographic change [77], climate change and energy innovations [78,79], as well as the ageing society [80]. Unfortunately, an overview of foresight projects conducted in cities reveals that so far none has focused on smart city development and its elements: economy, mobility, environment, people, living and governance.
Cities striving for smart development, apart from the implementation of technological solutions, need strategies that will make it possible, among other things, to socialise the vision of development or to identify trends affecting the city's activity as well as its social and economic conditions. These visions can be successfully designed with the use of foresight, which allows for identifying changes in the micro- and macro-environment, interpreting their impact on the city and formulating visions and solutions that will ensure the long-term development of a smart city. This approach is in line with the latest trends in research, which diverge from the traditional perception of foresight as a set of research methods and techniques focused on detecting changes in the environment, and shift towards socialisation, focusing on co-learning, co-creation and co-intelligence. Methods Used in Foresight Research Foresight is implemented with the use of a variety of methods, both strictly scientific and heuristic, based on expert intuition [81]. The catalogue of methods used in foresight research is wide and diversified and, due to the continuous development of foresight, still open [47,82]. Nazarko classifies the research instruments as systemic, analytical, algorithmic, heuristic, quantitative and qualitative methods [82]. Popper [81] proposed the most popular classification of methods in the literature, and Magruk [82] prepared the most comprehensive and multifaceted typology. Among foresight practitioners, the classification proposed by Popper is known as the 'foresight methodological diamond'. It is composed of four dimensions [83,84]:
• Creativity-methods using a combination of original thinking with creative invention (e.g., Wild Cards, Scenarios, Brainstorming, strengths, weaknesses, opportunities, and threats (SWOT) analysis);
• Expert knowledge-methods using the skills and knowledge of experts in a given field (e.g., Expert Panel, Key Technologies, Multi-Criteria Analysis, Impact Analysis);
• Interaction-methods based on creating new knowledge and building a vision for development with the involvement of a wide range of stakeholders (e.g., Surveys, Conferences, Workshops, Citizen Panels, Stakeholder Analysis);
• Facts-methods supporting the understanding of the current state of the research area (e.g., Literature Reviews, Weak Signals, Scanning, Bibliometrics).
Quantitative methods use numerical parameters which characterise a studied phenomenon or research object. Qualitative methods are used to describe complex and difficult-to-quantify phenomena. The use of indirect methods makes it possible to present complex phenomena based on numerical data [82]. According to Popper, in order to design an effective research methodology, methods should be selected from each tip of the foresight methodical diamond [51]. Magruk noted that classifications often do not take into account many foresight research methods and cover only a few groups based on general characteristics. He therefore developed a classification based on phenetic analysis, which makes it possible to indicate a common semantic plane for methods belonging to a given group and based on a similar research workshop. This type of classification, based on clusters, makes it possible to clearly identify the characteristics of individual groups, which should be considered in the process of formulating a research methodology [85]. Table 1 presents the classification developed by Magruk.
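Popper's rule that methods should be drawn from every tip of the diamond can be expressed as a simple check. Below is a minimal sketch under that reading; the dimension tags follow the four dimensions listed above, while the method catalogue is only an illustrative fragment, not Popper's full inventory.

```python
# Sketch: checking that a selected method set covers all four dimensions
# of Popper's foresight diamond (catalogue fragment for illustration only).
DIAMOND = {
    "scenarios": "creativity",
    "brainstorming": "creativity",
    "expert panel": "expert knowledge",
    "multi-criteria analysis": "expert knowledge",
    "citizen panel": "interaction",
    "workshop": "interaction",
    "literature review": "facts",
    "bibliometrics": "facts",
}

def covers_diamond(selected: list[str]) -> bool:
    """True if the selection touches every tip of the diamond."""
    touched = {DIAMOND[m] for m in selected}
    return touched == {"creativity", "expert knowledge", "interaction", "facts"}

print(covers_diamond(["scenarios", "expert panel", "citizen panel", "literature review"]))  # True
print(covers_diamond(["scenarios", "brainstorming", "workshop", "citizen panel"]))          # False
```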
Based on the analysis of descriptions of urban foresight projects available in the literature and the author's research experience in the implementation of foresight projects, Table 1 indicates methods that can be useful in smart city foresight. Miles divided the foresight process into phases concluding with action and re-edition [87]. Along with the development of foresight research, Bishop, Hines and Collins proposed another modification, taking into account six phases of foresight: construction, scanning, forecasting, vision building, planning and action, but this approach failed to take into account the re-edition phase indicated by Miles [91]. Magruk provided another addition, indicating seven stages: initial, scanning, recruitment, main, planning, action and resumption [92]. Lastly, Nazarko added the evaluation phase to the process. In his synthetic approach, Nazarko distinguished eight research phases of the foresight process: initial, scanning, recruitment, knowledge generation, anticipation, action, evaluation and resumption [82]. In order to develop a methodology for planning and implementing the vision of smart city development, a hybrid systems design methodology was used. According to Magruk, the hybrid systems design methodology should involve four stages [92]:
• Identifying factors influencing foresight research methodology;
• Selecting foresight research methods in line with the classification, research context and stages of the foresight process;
• Selecting methodological hybrids;
• Constructing a hybrid system.
The first stage involves the identification of factors influencing research methodology. The factors that play a key role in the selection of appropriate research methods are: access to quantitative and qualitative data, methodological competence, key attributes of the methods, relevance of the combination with other methods and cognitive nature [93]. At the second stage of designing the hybrid research methodology, foresight methods are selected according to the classification, research context and stages of the foresight process. Table 1 includes 116 methods, divided into 10 classes (Figure 1). It also illustrates the strength of connection between the 10 classes of methods and the eight phases of foresight research, simultaneously assigned to four research contexts: cognitive, social, technological and economic [82]. The selection of the research context and the choice of methods are strongly interdependent and directly related to the individual stages of foresight research. Each class forms groups of methods that are mutually substitutable and complementary to those in other classes. Methods from only one class should not be used.
Such an approach may cause an undesirable effect, in which methods based on similar information resources generate results in a similar way. Such a situation will make it impossible to obtain a synergistic effect [92]. A proper methodology will be ensured by selecting methods from different classes in each phase. At the same time, it is important to provide a strong reference to all research contexts. Putting too weak an emphasis on all contexts or placing too strong a stress on one of them may lead to the undesirable dominance of a specific domain [82,92]. The third stage should focus on the selection of methodical hybrids. Magruk distinguishes hybrids with the following structures:
• Sequential-output values from one method become input values in the next method. It is used when the results of a method from one stage of foresight constitute input to the next stage;
• Loosely related-information is exchanged between individual methods, even though each method works separately;
• Nested-a high degree of integration. There is frequent interweaving and exchange of information between the applied methods (multiple feedbacks). This structure allows for major and auxiliary methods. Information flow takes place in both directions;
• Supporting-characterised by a clear division into basic and supporting methods. An auxiliary method (not always active) may use the same input as the basic method. However, the results of the auxiliary method must be processed by means of the main method [92].
The fourth design stage leads to the development of a final hybrid system, where appropriately selected methods can achieve a synergistic effect [92]. Methodology of Planning and Implementation of the Vision of Smart City Development The methodology of planning and implementation of the vision of smart city development was constructed with regard to three key areas: the stages of the foresight process, the research context and the classification of methods. The developed methodology ensures a balance between references to the four contexts. The methods used belong to six different classes and refer to different contexts, thus they remain complementary. The methodology uses three types of hybrids: sequential, nested and supporting. For the needs of the methodology for the planning and implementation of a vision of smart city development (Figure 2), 15 methods belonging to six different classes were chosen. These methods were selected in such a way as to maintain a balance between references to the contexts (economic, technological, social and cognitive) related to the research area-the city and its individual aspects-and the eight phases of foresight. The cognitive context is linked to desk research, web research, Delphi, survey and conference. The social context was expressed by megatrend analysis, citizen panels, brainstorming, workshops, conference and voting.
The technological context refers to megatrend analysis, structural analysis, creative visioning and the identification of actions related to the development of a smart city, divided into four categories: new, so far not undertaken (N); implemented so far, to be continued (C); redundant, to be discontinued (R); and actions that have been implemented in the past and are to be restored (R) (NCRR) [94]. The economic context refers to the analysis of social (S), technological (T), economic (E), ecological (E), political (P), values-related (V) and legal (L) factors (STEEPVL) [95], scenarios and the NCRR. Megatrend analysis carried out on the basis of desk and web research allows us to gather the necessary theoretical knowledge about the conditions of smart city development (initial phase). Within the framework of sequential hybrids, megatrend analysis provides input to the work of the citizen panel (scanning phase) in the form of the STEEPVL analysis. The STEEPVL analysis, within a sequential hybrid with structural analysis, allows for the indication of the key factors of smart city development. These analyses are a primary tool that facilitates the identification of the driving forces of smart city development scenarios. Another method is Delphi. The respondents of the Delphi survey, city stakeholders, are identified in the course of the work of the citizen panel (recruitment phase). The Delphi method (knowledge generation phase), within the sequential hybrid, provides input to the scenario method (anticipation phase) and the NCRR method (action phase). The creative visioning method is part of another sequential hybrid, providing information for both the scenario and NCRR methods. The scenario method, by indicating assumptions and conditions for building the vision of the future, then provides input to the NCRR method. The analysis is concluded by a conference dedicated to presenting the research results to a large number of smart city stakeholders. Feedback received during the conference, if necessary, can be taken into account as part of the supplementation or correction of the results obtained from the NCRR method. The last two phases of the process-evaluation and resumption-should be initiated at a certain interval to examine the effectiveness and efficiency of the implementation of the research results, and subsequently to resume the process. The sequential hybrid is based on an evaluation survey, providing input to be presented at the conference. The aim of the conference is to take measures that resume the foresight research process and contribute to the development of an updated vision of a smart city. Within the framework of nested, highly integrated hybrids, information is interwoven and exchanged between the citizen panel and survey research methods. In this structure, there are also relations in which desk research, web research, citizen panels, brainstorming and survey research act as auxiliary methods to megatrend analysis, the STEEPVL analysis, the Delphi method, the scenario method, creative visioning and the NCRR method. The methodology also uses a supporting hybrid based on basic and auxiliary methods. The supporting method, i.e., voting (which is not always active), supports structural analysis and the scenario method when structural analysis fails to provide information about the two driving forces of smart city development scenarios. The methods from the consultative class are applied in seven out of eight foresight phases and relate mainly to the social context.
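To make the sequential hybrid structure concrete, here is a minimal sketch in which each method's output becomes the next method's input, following the megatrends → STEEPVL → structural analysis → scenarios chain described above. The function bodies are stand-ins with invented content, not part of the methodology itself.

```python
# Sketch: a sequential methodical hybrid, where the output of one foresight
# method is the input of the next (stand-in functions, invented content).
def megatrend_analysis() -> list[str]:
    return ["ageing society", "climate change", "digitalisation"]

def steepvl_analysis(megatrends: list[str]) -> list[str]:
    # Derive STEEPVL factors from the megatrends (placeholder mapping).
    return [f"factor derived from '{t}'" for t in megatrends]

def structural_analysis(factors: list[str]) -> list[str]:
    # Select the two driving forces (here simply the first two factors).
    return factors[:2]

def scenario_method(driving_forces: list[str]) -> list[str]:
    return [f"scenarios on axes '{driving_forces[0]}' x '{driving_forces[1]}'"]

# The sequential chain: each stage consumes the previous stage's results.
print(scenario_method(structural_analysis(steepvl_analysis(megatrend_analysis()))))
```

A nested or supporting hybrid would differ only in the wiring: auxiliary methods would feed into, or exchange intermediate data with, the main functions rather than forming a single one-way chain.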
Particular emphasis was placed on methods from the consultative class due to the fact that one of the key elements of foresight is social participation, and methods from this class require the involvement of a wide range of stakeholders from the research area. The methodology also includes three methods from the strategic class, two from the overview and analytical class and one from the creative and diagnostic class. Each of the methods indicated a high, strong or very strong strength of connection with particular phases of the foresight process. Operationalisation of the methodology for planning and implementation of the vision of smart city development
In the individual sequences of the research process, methods closely related to social participation, which are an indispensable element of foresight research, were nested. These methods include desk research, web research, citizen panels, brainstorming, surveys, workshops and conferences. Additionally, voting was indicated as an optional method, to be used in case of such a need. The research process should be initiated by a team consisting of representatives of city authorities, the business and science sectors, NGOs (non-governmental organizations) and the media. It should engage a foresight expert as an advisory body, who should supervise the correctness of the process and assist in methodological issues. The team's composition should remain unchanged at all stages of the research process. In the course of preliminary works, the team should form a group of stakeholders in the city's development, constantly participating in the works of the citizen panel. It should be composed of representatives of the city administration, business (companies differentiated by type of activity), science, field specialists (e.g., spatial planning, infrastructure, environmental protection, energy, transport, social policy, etc.), NGOs, the media and residents belonging to different age groups (youths, students, adults, seniors). The group should be diversified in terms of the represented professional sphere, education, gender and age. The size and structure of the implementation team and stakeholder groups should be adapted to the specific conditions of each city (depending on, e.g., city size, human capital, etc.). Figure 4 illustrates the operationalisation scheme of Stage 1 of the research process.
Six research steps are presented in relation to nine research methods: megatrend analysis, desk research, web research, STEEPVL analysis, structural analysis, civic panel, brainstorming and survey research. As part of the implementation of Step 1A, the implementation team should identify megatrends, i.e., fundamental phenomena that are very likely to occur in the long run. In the context of the impact of megatrends on the development of a given city, this analysis should be carried out with the use of desk research and web research methods. Megatrends are a good tool to be used on a city scale, for each of its areas of operation-for instance, in the energy sector, where the life cycle of large electricity or heat generation installations usually exceeds thirty years. In this way, when deciding to build a power plant these days, it is important to bear in mind conditions that will arise in the perspective of, e.g., 2050 [96]. Step 1B is aimed at identifying internal and external factors determining smart city development, based on the results of the megatrend analysis as well as further use of desk research and web research in order to identify conditions for the development of the city at the local, regional and national level. The result of the work of the implementation team may be individual cards of identified factors from the seven areas of the STEEPVL analysis: social (S), technological (T), economic (E), ecological (E), political (P), relating to values (V) and legal (L) [95,97]. In the next step, the implementation team, using the brainstorming method, should complete the final list of STEEPVL analysis factors, which is the starting material for the next step. Step 1C should focus on identifying the key factors in each of the identified groups of STEEPVL analysis factors. Their selection should be made by city development stakeholders who are part of the citizen panel. During the panel meeting, each member should select a specific number of factors in each of the groups which, in their opinion, are most important from the point of view of smart city development. The number of the most important factors may be limited to between three and five in each group. Step 1D should be aimed at the evaluation of the list of key factors of the STEEPVL analysis in terms of their importance and predictability for smart city development. This approach is primarily focused on identifying the most relevant factors that are potential driving forces of the resulting scenarios. The assessment should be made taking into account a specific time horizon (e.g., 10, 20 or 30 years). The members of the citizen panel should assess the importance and predictability of the factors by means of a seven-level Likert scale, using a research form constituting a PAPI (Paper and Pen Personal Interview) or CAWI (Computer-Assisted Web Interview) survey.
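A minimal sketch of how the Step 1D ratings could be aggregated is shown below. The factor names, ratings and selection thresholds are hypothetical; the methodology itself prescribes only the seven-level Likert scale and the importance/predictability criteria, not any particular tooling.

```python
# Sketch: aggregating Step 1D ratings of STEEPVL factors (hypothetical data).
# Each panel member rates every factor's importance and predictability on a
# seven-level Likert scale (1-7); factors with high importance and low
# predictability are candidate driving forces for the development scenarios.
from statistics import mean

# ratings[factor] = list of (importance, predictability) pairs, one per panellist
ratings = {
    "S1: ageing population":       [(7, 3), (6, 2), (7, 3)],
    "T1: smart grid deployment":   [(6, 5), (5, 6), (6, 5)],
    "E1: municipal budget growth": [(4, 4), (5, 5), (4, 4)],
}

for factor, scores in ratings.items():
    importance = mean(s[0] for s in scores)
    predictability = mean(s[1] for s in scores)
    # Thresholds are an assumption made for this sketch only.
    candidate = importance >= 5.5 and predictability <= 3.5
    print(f"{factor}: importance={importance:.1f}, "
          f"predictability={predictability:.1f}, "
          f"scenario-axis candidate={candidate}")
```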
The aim of Step 1E is to determine the interrelationships between the major factors of the STEEPVL analysis. This assessment should also be made by the members of the citizen panel, with the use of an electronic research form constituting a CAWI or PAPI survey. Stakeholders should determine whether and to what extent individual factors affect other factors. The assessment should be made on a scale from zero to three (0, no impact; 1, weak impact; 2, medium impact; 3, strong impact) for each pair of factors. In Step 1F, the implementation team should prepare a resultant matrix of the mutual influence of the STEEPVL factors, which can be analysed with the use of MIC-MAC software [98]. This analysis will lead to the distinction of groups of factors influencing smart city development, divided into: key, determining, regulating, auxiliary, independent, goals and results, and external. The aim of the structural analysis is to identify the key dependencies of the STEEPVL factors with regard to their strength of impact. Their comparison with the results obtained during the assessment of the importance and predictability of the factors should be an important stage in the development of smart city development scenarios. Key factors identified at this stage, with a high level of importance, a low level of predictability and the highest degree of influence on other factors and dependence on them, may become the axes of the smart city scenarios. A regional project relating to the development of nanotechnology [97,99] and universities [100] in Poland constitutes an example of a description of research with the use of the STEEPVL and structural analyses. In order to identify the factors constituting the scenario axes (driving forces), it is advisable to organise a workshop with the participation of members of the citizen panel. The results obtained so far should be presented over the course of the meeting. During discussions, stakeholders should be asked to express their opinions on the factors that should become the driving forces of the development scenario axes. If the number of factors is higher than three, stakeholders should vote in order to make a clear choice between two of them.
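The Step 1E-1F cross-impact assessment lends itself to a simple computation. The sketch below assumes the 0-3 scores are stored in a square matrix with invented values; MIC-MAC itself is dedicated software, so this only mimics its basic influence/dependence sums and one common reading of the resulting chart.

```python
# Sketch: influence/dependence sums from a Step 1E cross-impact matrix.
# m[i][j] holds the 0-3 impact of factor i on factor j (hypothetical values);
# MIC-MAC-style structural analysis starts from exactly such row/column sums.
factors = ["S1", "T1", "E1", "P1"]
m = [
    [0, 3, 2, 1],
    [1, 0, 3, 0],
    [2, 1, 0, 2],
    [0, 2, 1, 0],
]

influence  = [sum(row) for row in m]              # how strongly a factor drives others
dependence = [sum(m[i][j] for i in range(len(m)))  # how strongly it is driven
              for j in range(len(m))]

# Treating factors with both sums above the mean as 'key' factors follows the
# usual reading of an influence/dependence chart (an assumption of this sketch).
inf_mean = sum(influence) / len(influence)
dep_mean = sum(dependence) / len(dependence)
for f, inf, dep in zip(factors, influence, dependence):
    key = inf > inf_mean and dep > dep_mean
    print(f"{f}: influence={inf}, dependence={dep}, key={key}")
```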
Figure 5 presents a diagram of the operationalisation of Stage 2. Six research steps are presented in connection with the research methods: Delphi, citizen panel and brainstorming. The main research method is Delphi, with the aim of providing material for the development scenarios and collecting data for the NCRR method. In Step 2A, the citizen panel should formulate a set of theses and supporting questions for the Delphi method. The Delphi technique is a forward-looking description of the relationship between issues arising from the specificity of the study and the context determined by the study objective. It is a research question related to the future, presented in the form of a thesis. Supporting questions should include such elements as the time of thesis implementation, the probability of its occurrence, factors supporting thesis implementation, barriers to thesis implementation and the expected effects of thesis implementation [101][102][103][104]. It is important that the members of the citizen panel are involved in the development of the Delphi theses. They can be divided into groups and formulate theses in relation to a specific area, e.g., infrastructure, environment, energy, security, transport, social policy, etc. The work of the implementation team should lead to the elaboration of questions that are auxiliary to the Delphi theses. Step 2B should focus on preparing a research questionnaire for the first round of the Delphi survey. Research material obtained in Step 2A should be used in the preparation process. It should be selected and analysed by the implementation team with regard to its methodical and factual correctness. The team should also identify potential participants in the pilot and proper study from a wide range of stakeholders in city development. When selecting stakeholders for the Delphi survey, targeted recruitment and snowballing are recommended. The group should be diversified in terms of the represented professional sphere, education, gender and age. Step 2C is aimed at conducting a pilot study using the CAWI survey within a group of several respondents. This measure allows for the verification of the questionnaire in terms of its comprehensibility, as well as the elimination of errors from its final version. Steps 2D-2F are related to the Delphi method. In its first round, the CAWI questionnaire should be dispatched to a specific group of city stakeholders. Once the results have been obtained, the form should be developed for round 2 of the survey, including summary statements and selected comments from round 1. In the second round of the survey, respondents will be able to change their opinion based on the knowledge of others [104]. The developed survey form should be sent only to those respondents who participated in the first round of the survey. After obtaining the results from the second round of the survey, the implementation team should make a final analysis and interpretation of the results.
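A sketch of how the round-1 responses in Steps 2D-2F could be summarised for feedback in round 2 is given below. The theses, probability estimates and choice of statistics are invented for illustration; the methodology specifies the two-round feedback mechanism, not the particular summary measures.

```python
# Sketch: summarising Delphi round-1 answers for feedback in round 2
# (hypothetical theses and responses).
from statistics import quantiles

# Respondents estimate the probability (%) that a thesis will be realised.
round1 = {
    "Thesis 1: by 2050 the city runs on 80% renewable energy": [60, 70, 40, 75, 55],
    "Thesis 2: by 2050 autonomous public transport dominates": [30, 45, 35, 50, 20],
}

for thesis, answers in round1.items():
    q1, q2, q3 = quantiles(answers, n=4)  # quartiles of the group's estimates
    print(f"{thesis}\n  median={q2:.0f}%, interquartile range={q1:.0f}-{q3:.0f}%")
    # The round-2 questionnaire would present these figures (plus selected
    # comments) to the same respondents, who may then revise their answers.
```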
The obtained data will constitute the input to the smart city development scenarios. By means of the Delphi survey, it is possible to verify the correctness of the definition of the scenario axes. The obtained results illustrate the conditions of implementing individual scenarios and the probability of their occurrence. Research with the use of the Delphi method is best described by the foresight conducted for the city of Newcastle [54] and the regional foresight related to the development of tourism in Poland [105]. Figure 6 presents an operationalisation scheme for Stage 3 of the research process. It consists of three steps related to four research methods: creative visioning, the scenario method, the citizen panel and brainstorming.
Figure 6 presents an operationalisation scheme for Stage 3 of the research process. It consists of three steps related to four research methods: creative visioning, scenario method, citizen panel and brainstorming. Step 3A should incorporate the creative visioning method in order to generate ideas for measures aimed at smart city development as part of the work of the citizen panel. The members of the citizen panel can work individually, as well as merge into groups. The method can be used only after developing tools that constitute a research form with questions about the future. They can be formulated in the form of unfinished sentences referring to a specific time perspective, for example, 2050 [54]: • Where I live in 2050 is . . . • What I wish was different is . . . • In my free time, I . . . • The main thing I worry about is . . . • What I love about my city is . . . During the meeting, the members of the citizen panel may provide answers in writing, but a graphic form (e.g., drawings) may also be an interesting solution. Foresight projects conducted for the cities of Newcastle and Cambridge in the United Kingdom provide examples of the description of research with the use of the creative visioning method [54]. Step 3B should lead to the formulation of smart city development scenarios based on the results of the STEEPVL analysis, structural analysis, the Delphi method and creative visioning. These scenarios can be built on the basis of two key factors identified as a result of the first stage of the research process. The identified factors should be applied on two axes, leading to the creation of a matrix. The upper right field will have a positive meaning and the lower left field a negative meaning. The other two fields will take the positive and negative values of the first or second factor, respectively. This will result in four scenarios showing different visions of how the future of a smart city may develop. Alternative states of the future developed with the use of the scenario method should create a coherent, reliable picture of smart city development [106][107][108]. In this way, it is possible to show not only the most probable or desired developments, but also alternative versions. The three Ps, i.e., the division into the predictable, the possible and the preferred visions, is the most frequently applied method [109]. Foresight projects conducted for the cities of Rochdale in the United Kingdom [54], Lublin in Poland [75] and Rustavi in Georgia [76] extensively describe research with the use of the scenario method. In Step 3C, the citizen panel should focus on assessing the status of the other factors in the STEEPVL analysis in each scenario. It is also advisable to develop the characteristics of individual smart city development scenarios in a specific time perspective.
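To make the Step 3B construction concrete, the following Python sketch builds the 2 x 2 scenario matrix from two key factors; the factor names and scenario fields are hypothetical placeholders, not results from the paper.

```python
# Minimal sketch of the Step 3B scenario matrix, assuming two key factors
# (names invented for illustration) whose positive/negative states span the axes.
from itertools import product

factor_x = "social acceptance of smart technologies"   # hypothetical axis 1
factor_y = "availability of public funding"            # hypothetical axis 2

scenarios = {}
for x_state, y_state in product(("positive", "negative"), repeat=2):
    scenarios[(x_state, y_state)] = f"{factor_x}: {x_state} / {factor_y}: {y_state}"

# Upper right field: both factors positive; lower left field: both negative.
print(scenarios[("positive", "positive")])
print(scenarios[("negative", "negative")])
```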
Calculations can be made on the basis of the results of the Delphi study in relation to two theses closely related to the axes of smart city development scenarios. Figure 7 illustrates the operationalisation scheme for Stage 4 of the research process. It consists of three steps related to five research methods: the NCRR, citizen panel, brainstorming, surveys and conferences. In Step 4A, the main research method is the NCRR. It involves identifying actions related to smart city development divided into four categories: new, so far not undertaken (N); implemented so far, to be continued (C); redundant, to be discontinued (R); and actions that have been implemented in the past and are to be restored (R) [94]. Actions may also be grouped in such specific areas as infrastructure, environment, energy, security, transport, social policy, etc. As part of Step 4B, the identified catalogue of actions divided into individual categories (according to the NCRR method) should be distributed, in the form of the CAWI or PAPI survey, to a wide range of stakeholders (primarily those participating in the Delphi survey). Each of these respondents should indicate the three most important activities which, in their opinion, will have an impact on the implementation of the selected scenario (depending on the choice, the most probable or the most desirable) in a specific time perspective. In the case of grouping activities into narrower research areas, the three most important ones should be selected from each of them. The result of the NCRR survey will be a set of priority actions allowing for the achievement of a specific vision of the future. Step 4C should involve a conference for a large number of stakeholders. It should focus on the discussion of development scenarios and a proposed catalogue of priority actions. This discussion and its findings may lead to the completion or modification of the final catalogue of priority actions for smart city development. Research with the use of the NCRR method can be exemplified by a regional foresight project relating to the development of tourism in Poland [94].
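The Step 4B prioritisation reduces to a simple tally over the NCRR catalogue. The Python sketch below counts how often each catalogued action appears among respondents' top-three choices; the action names, category assignments and ballots are invented examples, not data from the study.

```python
# Minimal sketch of Step 4B: tallying top-three votes over an NCRR action
# catalogue. Categories follow the NCRR scheme (N, C, R-discontinue, R-restore);
# the actions and ballots are hypothetical placeholders.
from collections import Counter

catalogue = {
    "deploy smart street lighting": "N",
    "continue bike-sharing programme": "C",
    "discontinue paper-based permits": "R-discontinue",
    "restore neighbourhood councils": "R-restore",
}

ballots = [  # each respondent names their three most important actions
    ["deploy smart street lighting", "continue bike-sharing programme", "restore neighbourhood councils"],
    ["deploy smart street lighting", "discontinue paper-based permits", "restore neighbourhood councils"],
]

votes = Counter(action for ballot in ballots for action in ballot)
for action, n in votes.most_common():
    print(f"{n:2d}  [{catalogue[action]}] {action}")
```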
Figure 8 presents the operationalisation scheme for Stage 5 of the research process. It illustrates four steps in connection with the following research methods: citizen panels, conferences and surveys. Step 5A is based on the regular organisation of conferences reminding stakeholders about the vision of smart city development. The conference should primarily focus on making participants implement the developed activities and solutions in the course of their professional and social activity. Such meetings should be organised in various places, districts and institutions in the city. They should be accompanied by other initiatives, interesting from the point of view of the development of the city and its inhabitants of different ages, including, for example, lectures, training and workshops. Step 5B concerns the evaluation of the implementation of the vision and the formulated priority actions. This step should be carried out by the implementation team with the involvement of the members of the citizen panel, who have detailed knowledge of the research process and the results achieved. Evaluation should be carried out with the use of, e.g., the CAWI and PAPI surveys. The implementation of Step 5C should be aimed at taking action to resume the research process. Reactivation of the foresight research cycle should start after a certain time (e.g., after several years). This step should also involve a conference during which the results of the evaluation study will be presented and confronted with the stakeholders. Discussions may provide information on the need (or lack of it) to resume the process. Conclusions The application of foresight in the process of planning and implementation of the vision of smart city development can bring many benefits. First of all, it enables undertaking intelligent planning activities, setting sustainable development priorities as well as building pro-social and pro-innovative policies in the city.
It fosters the development of forward-looking, long-term scenarios of the smart city development vision, taking into account a wide range of social, economic, technological, political, legal and environmental conditions [110]. It also helps to assess the consequences of current actions and detect problems before they occur, thus allowing for their avoidance [111,112]. Foresight is a process in which one of the indispensable elements is social participation. The idea of foresight initiatives is to involve a wide range of stakeholders in the process of creating a vision of smart city development. These stakeholders come from various spheres: entrepreneurs, scientists, representatives of administration, non-governmental organisations, the media and inhabitants of various ages with diverse knowledge and experience in the field of city development, who, through their bottom-up view of the research problem, can make the formulation of development visions much more realistic. These visions should take into account the complexity of the urban system and skilfully use its potential [113] to better respond to social, economic and environmental change. Foresight initiatives foster a dialogue between research participants and provide a framework for communication and sharing of insights into possible future developments in a smart city. The use of foresight as a process to identify future events and make participants aware of their ability to influence changes in the city through a pro-active attitude is more useful in practice than previously used activities aimed at anticipating and forecasting the future. Foresight does not assume the existence of a single, strictly defined future. Depending on the level of stakeholders' activity and the
range of activities to be undertaken in the present, many versions of the future are possible. However, it should be remembered that only one of them will come into existence. The involvement of a large number of stakeholders is also a key element in the process aimed at the implementation of the obtained results and practical use of the generated knowledge. Foresight, through wide-scope involvement of its participants, facilitates their mutual learning and, as a result of many discussions, leads to social acceptance of the designated directions of development. The participation of stakeholders at each stage of obtaining results shapes their awareness of being co-creators, which also triggers the assumption of responsibility for the implementation of the results into their everyday professional and private activities. The success of the process does not depend only on a properly designed sequence of research activities that allow for achieving a synergistic effect, but, above all, also on the involvement of stakeholders in smart city development, whose knowledge and experience will make it possible to implement priority actions. The use of foresight also creates conditions for the creation of partnership networks through which joint initiatives in specific areas of the smart city can be undertaken.
Such an activity also allows for the formation of a culture of thinking oriented towards co-learning, co-creation and co-intelligence. Foresight research must be carried out systematically in order to bring measurable benefits. Iterative periods of open discussion, consultation and networking can lead to measurable results. Successive repetition of such research allows for updating the developed priorities and visions of the future in the context of ongoing changes, as well as monitoring the effects obtained by means of, e.g., the implementation of the developed foresight results by decision-makers and stakeholders. The repetitiveness of the process will contribute to its acknowledgement as a permanent approach in thinking about the future of the smart city and a useful tool for managing it. The results obtained within the foresight research can definitely be used as a set of data and proposals for socially acceptable solutions in creating long-term strategies for smart city development, preparing/improving smart city spatial development plans, developing environmental protection programmes, social integration plans and many other important documents from the point of view of smart city functioning. Urban decision-makers may, on the basis of the results obtained within foresight research, divide tasks to be performed by particular organisational units within city authorities. They may also create a separate organisational unit for smart city foresight or for the implementation of future visions of a smart city. This unit will be responsible for dividing tasks between organisational units within city authorities, monitoring the progress of implementation and regularly resuming the foresight research process in specific time perspectives. Planning tools connected with the classic model of public city management are oriented around the top-down decisions of city decision-makers or the engagement of experts in developing strategic documents or assumptions of specific projects. Foresight, on the other hand, makes it possible to gather opinions and gain benefits from the expert knowledge of a wide range of stakeholders. This tool allows for an examination of which solutions are important and necessary from the point of view of society and which, despite their potential (e.g., technological), will not be used by society due to a variety of factors (e.g., social, economic, environmental or value-related factors). For example, advanced technological solutions, considered by constructors and engineers as responding to social needs, may be of little or no use to residents and their environment. This can be exemplified by a measure related to the installation of innovative LED lighting in city streets. Its benefits will definitely include, e.g., saving electricity and increasing safety. The indicated benefits, however, represent only one side of the issue. Foresight, through the involvement of a wide range of stakeholders, provides an opportunity for a multidimensional view of the issue under consideration, open to the emergence of different opinions that concern a given solution. For instance, high expenses related to the installation of innovative LED lighting may result in the failure to meet other more important needs of the residents (particularly in the context of an aging society), e.g., the construction and operation of a city centre for senior citizens.
Such problems can be avoided by collecting opinions from a wide range of stakeholders representing different social groups (of different ages and professions) and by using various foresight research methods (qualitative and quantitative, scientific and heuristic). Unfortunately, when designing solutions with the sole participation of the representatives of public administration or field experts, there is a high probability of narrowing down the view on smart city development. A skilfully prepared strategy of a coherent, logical and long-term character, responding to signals coming from the environment, will enable a smart city to take actions aimed at the necessary, thorough transformations that condition sustainable development of the area. At the same time, it will constitute a tool counteracting accidental and chaotic decisions. The presented methodology was developed on the basis of recommendations and good practices available in the literature, as well as the author's personal experience gained in the course of executing foresight projects. The article takes into account the fact that, apart from determining the methodology of the study, there are many other issues that need to be addressed in detail. First of all, they concern the principles and methods of selecting implementation team members and stakeholders, determining the time horizon of the research, methods of maintaining the activity of stakeholders at each stage of the research process, expected results, the evaluation process and the designation of entities responsible for implementation. These issues, due to their relevance, require separate considerations for each city where the process is to be implemented. The paper takes into account the limitations of this research, i.e., that the proposed methodology of planning and implementing the vision of smart city development has not yet been tested in practice. Its validation via its implementation in selected cities (diversified in terms of size and socio-economic development) is a further step in the planned research process. The methodology of planning and implementing the vision of smart city development presented in the article is not final or unchangeable. In the course of the research into its application, if necessary, it can be flexibly adapted to the specific needs of individual cities and their inhabitants.
16,707.6
2020-04-07T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Monoindole Alkaloids from a Marine Sponge Spongosorites sp. Seven (1–7) monoindole derivatives were isolated from the MeOH extract of a marine sponge Spongosorites sp. by bioactivity-guided fractionation. The planar structures were established on the basis of NMR and MS spectroscopic analyses. Compounds 1–5 are unique indole pyruvic acid derivatives. Compounds 1–2 and 4–6 are isolated for the first time from a natural source although they were previously reported as synthetic intermediates. Compound 3 was defined as a new compound. Co-occurring bisindoles such as hamacanthins and topsentins might be biosynthesized by condensation of two units of these compounds. The compounds were tested for cytotoxicity against a panel of five human solid tumor cell lines, and compound 7 displayed weak activity. In our previous study on cytotoxic compounds from the marine sponge Spongosorites sp., we isolated a series of bisindole alkaloids [11,12]. In our continuing search for cytotoxic metabolites from the same sponge, seven monoindole alkaloids were isolated. Compounds 1−2 and 4−6 were isolated for the first time from a natural source although they were previously reported as synthetic intermediates (Figure 1). Compound 3 was defined as a new compound. Herein we describe the structure elucidation and the biological evaluation of these compounds. Results and discussion Compound 1 was isolated as a yellow, amorphous powder. The molecular formula was established as C11H8BrNO3 on the basis of the EIMS and NMR data. In the LREIMS of 1, an [M]+ ion cluster was observed at m/z 281/283 in a 1:1 ratio, which is characteristic of a monobrominated compound. The NMR spectra of 1 were reminiscent of reported indole alkaloids [11,12]. Analysis of the 1H, 13C, COSY, HMBC, and HSQC data, along with comparison of chemical shift values with those of known indole alkaloids, allowed us to establish a 6-bromoindol-3-yl residue as a partial structure of 1. The singlet at δH 8.45 (H-2) and a spin system comprising signals at δH 8.07 (1H, d, J = 8.0 Hz, H-4), 7.40 (1H, dd, J = 8.0, 2.0 Hz, H-5), and 7.73 (1H, d, J = 2.0 Hz, H-7) indicated the presence of a 6-bromoindol-3-yl moiety (Table 1). Long-range correlations from H-4 (δH 8.07) to C-3 (δC 112.5) and C-6 (δC 116.2), along with the COSY correlation between H-4 and H-5, and the long-range correlations from H-5 (δH 7.40) to C-3a (δC 124.8) and C-7 (δC 115.5) strongly suggested the presence of a 6-bromoindol-3-yl moiety. The NMR signals at δC 178.2 (C-8), δC 164.0 (C-9), and δH 3.89 (-OCH3, 3H), along with the HMBC correlation -OCH3/C-9, suggested an oxoacetic acid methyl ester moiety. The EIMS fragments at m/z 194/196, corresponding to C8H5BrN, corroborated the presence of a bromoindole group. These fragments, along with the fragments at m/z 222/224, revealed the presence of a 3-carbonyl-bromoindole group and established the connectivity between the 6-bromoindole moiety and the oxoacetic acid methyl ester moiety (Figure 2). Therefore, compound 1 was defined as (6-bromo-1H-indol-3-yl) oxoacetic acid methyl ester. Compound 1 was known as an intermediate in the synthesis of some marine natural products, such as didemnimides A and B [13], whereas it has not been reported from a natural source. Pyruvic acid derivatives are unusual natural products, and most indole pyruvic acid derivatives have been isolated from marine sponges [14−16] and ascidians [6].
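As a quick arithmetic check on this kind of reasoning, a monobrominated formula can be verified against the observed [M]+ cluster: 79Br and 81Br are nearly equally abundant, so M and M+2 appear in a roughly 1:1 ratio, and the monoisotopic mass of C11H8BrNO3 lands at m/z ≈ 281. The following Python sketch reproduces both checks using standard (rounded) isotope masses and abundances:

```python
# Monoisotopic mass of C11H8BrNO3 and the expected Br isotope cluster ratio.
# Masses are standard monoisotopic values (rounded); abundances from IUPAC tables.
MASS = {"C": 12.0000, "H": 1.00783, "Br": 78.91834, "N": 14.00307, "O": 15.99491}

formula = {"C": 11, "H": 8, "Br": 1, "N": 1, "O": 3}
mono = sum(MASS[el] * n for el, n in formula.items())
print(f"monoisotopic mass: {mono:.2f}")        # ~280.97, i.e. [M]+ at m/z 281

# One bromine: 79Br (50.7%) vs 81Br (49.3%) gives an ~1:1 M/M+2 cluster.
print(f"M : M+2 ratio = {50.7/49.3:.2f} : 1")
```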
Compound 2 was isolated as a yellow, amorphous powder. The molecular formula was established as C11H9NO3 on the basis of the FABMS and NMR data. In the LRFABMS of 2, an [M + H]+ ion was observed at m/z 204. The main difference from compound 1 was the lack of the bromine atom on the indole ring. Therefore, compound 2 was defined as (1H-indol-3-yl) oxoacetic acid methyl ester. Compound 2 was known as an intermediate in the synthesis of natural products, such as didemnimides A and B [13], rebeccamycin, and 11-dechlororebeccamycin [17], whereas it has not been reported as a natural product. Compound 3 was isolated as a yellow, amorphous powder. The molecular formula was established as C11H9NO4 on the basis of the EIMS and NMR data. In the LREIMS of 3, an [M]+ ion was observed at m/z 219. The main difference from compound 2 was an additional hydroxyl group on the indole ring. Long-range correlations from H-7 (δH 6.87) to C-3a (δC 118.5), from H-2 to C-3 (δC 112.5) and C-7a (δC 138.5), and from H-5 (δH 6.74) to C-6 (δC 154.4) indicated the presence of a 6-hydroxyindol-3-yl moiety. The EIMS fragments at m/z 132 and 160 corroborated the proposed structure (Figure 2). Therefore, compound 3 was defined as (6-hydroxy-1H-indol-3-yl) oxoacetic acid methyl ester. To the best of our knowledge, compound 3 has not been reported previously either from a natural source or as a synthetic product. Compound 4 was isolated as a white, amorphous powder. The molecular formula was established as C10H8N2O2 on the basis of the EIMS and NMR data. In the LREIMS of 4, an [M]+ ion was observed at m/z 188. The main difference from compound 2 was the presence of an oxoacetamide moiety instead of the oxoacetic acid methyl ester moiety. The 13C signals at δC 182.9 (C-8) and δC 165.9 (C-9) and the 1H singlets at δH 8.06 and δH 7.69 (each 1H, -NH2) (Tables 1 and 2), along with the long-range correlation between -NH2 (δH 7.69) and C-8 (δC 182.9), established an oxoacetamide moiety. The EIMS fragments at m/z 116 and 144 revealed the presence of a 3-carbonylindole group and established the connectivity between the oxoacetamide moiety and the indole moiety (Figure 2). Thus, compound 4 was defined as (1H-indol-3-yl) oxoacetamide, which was also known as an intermediate in the synthesis of some marine natural products, such as arborescidines [18] and dihydrohamacanthins [19], but has not been isolated previously from a natural source. Compound 5 was isolated as a yellow, amorphous powder. The molecular formula was established as C10H7BrN2O2 on the basis of the EIMS and NMR data. In the EIMS data of 5, an [M]+ ion cluster was observed at m/z 266/268. The main difference from compound 4 was an additional bromine atom on the indole ring. The fragments at m/z 194/196 and 222/224 revealed the presence of a 3-carbonyl-bromoindole group (Figure 2). Therefore, compound 5 was defined as (6-bromo-1H-indol-3-yl) oxoacetamide, which was also reported as an intermediate in the synthesis of some natural products, such as arborescidines [18] and dihydrohamacanthins [19], but has not been isolated from a natural source. Compound 6 was isolated as a colorless oil. The molecular formula was established as C10H9NO3 on the basis of the EIMS and NMR data. In the LREIMS of 6, an [M]+ ion was observed at m/z 191. Analysis of the 1H, 13C, COSY, HMBC, and HSQC data allowed us to establish a 6-hydroxyindole residue as a partial structure of 6.
The long-range correlations from H-2 (δH 7.86, 1H, s) and -OCH3 (δH 3.76, 3H, s) to C-8 (δC 164.8) established the presence of a carboxylic acid methyl ester and the connectivity between the 6-hydroxyindole moiety and the carboxylic acid methyl ester. The EIMS fragments at m/z 132 and 160 corroborated the proposed structure (Figure 1). Therefore, compound 6 was defined as (6-hydroxy-1H-indol-3-yl) carboxylic acid methyl ester, which was known as an intermediate in the organic synthesis of a 5-HT4 receptor antagonist [20], but has not been reported from a natural source. Compound 7 was also isolated as a yellow, amorphous powder. According to the MS and NMR data of 7, the main difference from 6 was the lack of a hydroxyl group in the indole moiety. The MS and NMR data of 7 matched well with reported data [8], and it was identified as (1H-indol-3-yl) carboxylic acid methyl ester, which was previously reported from marine-derived bacteria [8] and fungi [21], and a red alga [22], with cytotoxicity against K562 human chronic leukemia cells (MIC 14.0 μg/mL) [21]. It is expected that (1H-indol-3-yl) oxoacetamide derivatives serve as intermediates in the biogenesis of the co-occurring bisindole alkaloids, topsentins and hamacanthins [11,12] (Scheme 1). Schiff base formation between amino and carbonyl groups may lead (either via a or b) to the genesis of the hamacanthin A (I) and topsentin (II) skeletons. Cleavage of the C−N bond (c) in the topsentin skeleton and successive Schiff base formation between the newly generated amino group and the intact carbonyl group may lead to the genesis of the hamacanthin B skeleton (III). Animal Material The sponges were collected by hand using SCUBA (20 m depth) in October 2002, off the coast of Jeju Island, Korea. The collected sample was a loose association of two sponges, Spongosorites sp. and Halichondria sp. The two sponges were separated, and only Spongosorites sp. was subjected to chemical analysis. The morphology of the sponge was described elsewhere [11]. A voucher specimen (registry No. Spo. 44) is deposited at the Natural History Museum, Hannam University, Korea. Evaluation of Cytotoxicity A panel of five human solid tumor cell lines (human lung cancer, human ovarian cancer, human skin cancer, human CNS cancer, and human colon cancer) was used to screen the cytotoxicity of the compounds based on an established protocol [11,12].
2,206.4
2007-06-01T00:00:00.000
[ "Chemistry", "Environmental Science" ]
In Silico and Ex Vivo Studies on the Spasmolytic Activities of Fenchone Using Isolated Guinea Pig Trachea Fenchone is a bicyclic monoterpene found in a variety of aromatic plants, including Foeniculum vulgare and Peumus boldus, and is used in the management of airways disorders. This study aimed to explore the bronchodilator effect of fenchone using guinea pig tracheal muscles as an ex vivo model and in silico studies. A concentration-mediated tracheal relaxant effect of fenchone was evaluated using isolated guinea pig trachea mounted in an organ bath provided with physiological conditions. Sustained contractions were achieved using low K+ (25 mM), high K+ (80 mM), and carbamylcholine (CCh; 1 µM), and fenchone inhibitory concentration–response curves (CRCs) were obtained against these contractions. Fenchone selectively inhibited, with higher potency, contractions evoked by low K+ compared to high K+, with resultant EC50 values of 0.62 mg/mL (0.58–0.72; n = 5) and 6.44 mg/mL (5.86–7.32; n = 5), respectively. Verapamil (VRP) inhibited both low and high K+ contractions at similar concentrations. Pre-incubation of the tracheal tissues with K+ channel blockers such as glibenclamide (Gb), 4-aminopyridine (4-AP), and tetraethylammonium (TEA) significantly shifted the inhibitory CRCs of fenchone to the right towards higher doses. Fenchone also inhibited CCh-mediated contractions at a potency comparable to its effect against high K+ [6.28 mg/mL (5.88–6.42, n = 4); CCh] and [6.44 mg/mL (5.86–7.32; n = 5); high K+]. A similar pattern was obtained with papaverine (PPV), a phosphodiesterase (PDE) and Ca2+ inhibitor, which inhibited both CCh and high K+ at similar concentrations [10.46 µM (9.82–11.22, n = 4); CCh] and [10.28 µM (9.18–11.36; n = 5); high K+]. However, verapamil, a standard Ca2+ channel blocker, showed selectively higher potency against high K+ compared to CCh-mediated contractions, with respective EC50 values of 0.84 mg/mL (0.82–0.96; n = 5) and 14.46 mg/mL (12.24–16.38, n = 4). The PDE-inhibitory action of fenchone was further confirmed when its pre-incubation at 3 and 5 mg/mL potentiated and shifted the isoprenaline inhibitory CRCs towards the left, similar to papaverine, whereas the Ca2+ inhibitory-like action was authenticated when fenchone-pretreated tracheal tissues showed a rightward shift of the Ca2+ CRCs with suppression of the maximum response, similar to verapamil, a standard Ca2+ channel blocker. Fenchone showed a spasmolytic effect in isolated trachea mediated predominantly by K+ channel activation followed by dual inhibition of PDE and Ca2+ channels. Further in silico molecular docking studies provided insight into the binding of fenchone with the Ca2+ channel (−5.3 kcal/mol) and the K+ channel (−5.7 kcal/mol), which also endorsed the idea of dual inhibition. Introduction Bronchodilators are used to treat respiratory disorders such as asthma and chronic obstructive pulmonary disease (COPD), both acutely and on a long-term basis [1,2]. Tissue Preparation The guinea pigs were killed by cervical dislocation followed by isolation of the tracheal tube. The tracheal tubes were immersed in Krebs solution, which was kept at 37 °C and gassed with carbogen (95% O2:5% CO2) [15]. After adherent fat and connective tissues were carefully removed, the trachea was cut into 2- to 3-mm-wide rings. To make a tracheal chain, all of the rings were sliced open opposite the trachealis muscle and sutured together [16].
The strips of trachea were then mounted in an organ bath filled with enough Krebs solution to immerse the tissue, and an optimal temperature (37 °C) was maintained by the attached thermocirculator with carbogen gas aeration. A constant tension (1 g) was applied to each tracheal strip throughout the experiment. After an equilibration period of at least 60 min, the preparations were tested repeatedly for contractile responses to carbamylcholine (CCh, 1 µM) using an isometric force transducer connected to an emkaBATH data acquisition system (France). Once the tonic contraction became stable, the EC50 values of the test material were obtained by constructing inhibitory concentration-response curves (CRCs) through the cumulative addition of the test substance to the organ bath, starting from a lower concentration of 0.01 mg/mL up to the maximum tested final bath concentration of 10 mg/mL. To assess the pharmacodynamics involved in the bronchial relaxant activity, contractions mediated by different spasmogens were used [17]. Determination of the Possible Mechanisms of Action The spasmolytic effects of the test samples were tested on low K+ (25 mM)- and high K+ (80 mM)-induced contractions, respectively, to elucidate the possible involvement of K+ channel opening and/or Ca2+ channel inhibitory-like mechanism(s) [18]. Fenchone was added in a cumulative manner after a sustained contraction in response to low and high K+ to produce concentration-dependent inhibitory responses. The relaxation of the tissue preparation was calculated as a percentage of the K+-mediated control contraction. To characterize the specific type of K+ channel activation involved in the bronchodilator effect, the bronchodilator effects of fenchone were reproduced in the absence and presence of different K+ channel antagonists: TEA (1 mM), a nonselective K+ channel blocker [19]; 4-aminopyridine (4-AP, 100 µM), a selective blocker of voltage-sensitive K+ channels [20]; and glibenclamide (Gb, 10 µM), a selective blocker of ATP-dependent K+ channels [21]. The selection of the concentrations of these antagonists was based on previously reported studies conducted in different types of isolated tissues [19][20][21]. The assessment of the test material's Ca2+ channel blockade is based on pilot studies conducted against high K+-mediated contractions [22]. K+ at concentrations above 30 mM produces excitatory responses in isolated smooth muscle tissues via the opening of voltage-driven Ca2+ channels, particularly L-type Ca2+ channels, thus facilitating the inward movement of calcium ions from the extracellular fluid. This eventually elevates the intracellular concentration of Ca2+, which finally produces strong contractions in the preparations [23,24]. Any agent that suppresses high K+-mediated excitation may be labelled a Ca2+ channel inhibitor [25]; K+ channel openers will selectively inhibit low K+-evoked spasms, whereas Ca2+ channel inhibitors show comparable potencies in relaxing low and high K+-evoked spasms [26]. Hence, these experiments differentiate K+ channel openers from Ca2+ channel blockers [27]. Once the contraction was sustained in the form of a straight line after the application of K+, the test compound and/or standard drug was added to the organ bath in a cumulative manner to finally obtain inhibitory CRCs [17].
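The interpretive rule just described (selective inhibition of low K+ suggests K+ channel opening; comparable potency against low and high K+ suggests Ca2+ channel blockade) can be expressed as a simple decision rule. The Python sketch below uses invented EC50 values and an assumed 3-fold selectivity threshold purely to illustrate the logic; neither is taken from the study.

```python
# Decision-rule sketch: classify a test substance from its EC50 against
# low-K+ (25 mM) vs high-K+ (80 mM) contractions. All values are illustrative.
def classify(ec50_low_k: float, ec50_high_k: float, fold: float = 3.0) -> str:
    """Label a compound as K+ channel opener-like if it is markedly more
    potent against low K+; otherwise Ca2+ channel blocker-like.
    The 'fold' selectivity cut-off is an assumption for illustration."""
    if ec50_high_k / ec50_low_k >= fold:
        return "K+ channel opener-like (selective vs low K+)"
    return "Ca2+ channel blocker-like (comparable potencies)"

print(classify(0.62, 6.44))   # fenchone-like profile -> opener-like
print(classify(0.80, 0.90))   # verapamil-like profile -> blocker-like
```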
The tracheal tissues were stabilized in standard Krebs solution, which was then replaced with a calcium-free Krebs solution containing a chelating agent, EDTA, for roughly half an hour, resulting in calcium-free tracheal segments. This Ca2+-free Krebs solution was then replaced with a K+-rich, Ca2+-free Krebs solution [28]. After an incubation period of around half an hour in the calcium-free, potassium-rich bathing solution, control calcium curves were obtained in a dose-related manner. When the control CRCs were attained, the segments were pre-incubated with increasing concentrations of the test samples for an hour, and the calcium curves were re-obtained to observe the CCB-like actions [22]. To elucidate additional mechanism(s), the inhibitory effect of the test material against CCh and high K+ was critically observed; the involvement of a PDE-inhibitory-like mechanism is expected if CCh and high K+ are inhibited at comparable potencies. The PDE-inhibitory-like mechanism was confirmed further by isoprenaline-mediated inhibition of CCh-induced contraction in the absence (control) and presence of the test material [29,30]. Molecular Docking Studies To obtain insight into possible molecular mechanisms for the spasmolytic activities of fenchone, molecular docking studies were carried out on various phosphodiesterase receptor proteins and calcium ion channels. The protein receptors considered for these docking studies had PDB IDs 3ITU, 4NPW, 5LAQ, 6JPA, 6EBM, and 7VNP. The studies were carried out on a Windows 10 platform using the AutoDock Vina program on the PyRx platform [31]. Discovery Studio 4, provided by BIOVIA solutions, was used for the visualization of docked poses of ligands and proteins [32]. The crystal structures of the proteins were downloaded from the protein data bank found on the RCSB website. The downloaded proteins were prepared and repaired for missing residues and charges using the Discovery Studio visualizer. The co-crystallized ligands were removed from their proteins and saved separately in PDB format, which was used for redocking into the active domains of their respective proteins to validate our docking methodology [33]. The structures of fenchone and papaverine were downloaded and then converted to PDB format with the help of Open Babel software. ADMET Studies ADMET studies (Absorption, Distribution, Metabolism, Excretion, and Toxicity) are the key characteristics to be considered while developing a novel molecule in the drug discovery cascade. ADMET properties of fenchone were predicted by pkCSM, which is a web-based program [34]. First of all, the SMILES for the fenchone molecule were taken from the PubChem database, and the same software was used to carry out complete ADMET profiling of fenchone. Statistical Analysis Results are presented as mean ± standard error of the mean (n = number of experiments) and median effective concentrations (EC50) with 95% confidence intervals (CI). The bronchodilator activities were evaluated using one-way ANOVA and Dunnett's test. Statistical significance is defined as p < 0.05. Non-linear regression was used to evaluate the concentration-response curves using standard statistical software (GraphPad, version 4, San Diego, CA, USA).
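For readers who want to reproduce the EC50 estimation step outside GraphPad, a minimal nonlinear-regression sketch in Python is shown below; the data points are invented placeholders, and scipy's curve_fit stands in for GraphPad's fitting routine.

```python
# Minimal sketch: fit a Hill-type inhibitory concentration-response curve
# and report EC50. Concentrations/responses below are invented examples.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, n):
    """Percent relaxation as a function of concentration (0-100%)."""
    return 100.0 * conc**n / (ec50**n + conc**n)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])      # mg/mL
relax = np.array([2.0, 8.0, 22.0, 45.0, 68.0, 88.0, 97.0])   # % of control

(ec50, n), _ = curve_fit(hill, conc, relax, p0=[0.5, 1.0])
print(f"EC50 = {ec50:.2f} mg/mL, Hill slope = {n:.2f}")
```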
Determination of the Possible Effect of Fenchone on the Activation of K+ Channel Subtypes After preliminary experiments showed higher potency against low K+, fenchone was tested for its spasmolytic effect in the presence of different K+ channel antagonists (Figure 2). In the presence of Gb (10 µM), the spasmolytic effect of fenchone in isolated trachea against low K+-mediated contractions was shifted towards higher concentrations, with an EC50 value of 6.24 mg/mL (5.98-6.86; n = 5) (Figure 2A). In parallel assays, 4-AP (1 mM) (Figure 2B) and TEA (10 mM) (Figure 2C) pre-incubated tracheal tissues also shifted the fenchone spasmolytic curves against low K+ towards higher concentrations, with obtained EC50 values of 5.78 mg/mL (5.44-5.92, n = 5) and 6.22 mg/mL (5.84-6.72, n = 5), respectively. Confirmation of PDE Inhibitory-like Spasmolytic Effects of Fenchone The spasmolytic CRCs of fenchone against CCh and high K+ at comparable EC50 values of [6.28 mg/mL (5.88-6.42, n = 4); CCh] and [6.44 mg/mL (5.86-7.32; n = 5); high K+] (Figure 3A) were found similar to the inhibitory effect of the standard drug, papaverine (Figure 3B), whereas verapamil showed selectively higher potency against high K+ compared to CCh-mediated contractions, with respective EC50 values of 0.84 mg/mL (0.82-0.96; n = 5) and 14.46 mg/mL (12.24-16.38, n = 4), as shown in Figure 3C. Confirmation of the PDE-inhibitory effect was observed when tracheal tissues pre-incubated with fenchone (3 and 5 mg/mL) potentiated and shifted isoprenaline-induced inhibitory curves to the left (Figure 4A), similar to papaverine (1 and 3 µM) (Figure 4B). In contrast, verapamil did not show any shift in the curves at both tested doses of 0.1 and 0.3 mg/mL (Figure 4C).
Confirmation of Ca2+ Channel Inhibitory-like Spasmolytic Effects of Fenchone In tracheal tissues, fenchone, the tested compound (Figure 5A), in a concentration-dependent manner (3 and 5 mg/mL), similar to verapamil (0.1 and 0.3 µM) (Figure 5B), shifted the Ca2+ CRCs to the right with suppression of the maximum response. Papaverine, a dual inhibitor of Ca2+ channels and PDE, also suppressed the maximum response of the Ca2+ CRCs in a concentration-dependent manner (3 and 10 µM), as shown in Figure 5C. Molecular Docking Analyses Further, to obtain insight into the molecular mechanism of fenchone and its spasmolytic effects, molecular docking studies were carried out. Various receptors of PDE and the calcium channel were docked with fenchone, the co-crystallized ligands, and the standard drugs. The proteins considered for the study were PDE2A (3ITU), PDE1B (4NPW), PDE4B (5LAQ), and a voltage-gated Ca2+ channel (6JPA). The protein coordinates (x,y,z), search space and grid box size are provided in Table S1. The docking results, expressed as binding affinities, are presented in Table 1. The docking methodology was validated by redocking the co-crystallized ligands; the RMSD values and superimposed images are shown in Table S2. Fenchone, being a very small molecule in comparison to PPV and VRP, showed a differing binding affinity from the standards. Fenchone presented binding affinities of −5.2, −5.1, −5.3, and −6.3 kcal/mol with the receptors 3ITU, 4NPW, 5LAQ, and 6JPA, respectively, while the standard drugs exhibited binding affinities of −8.3, −8.2, −8.4, and −7.8 kcal/mol with the same receptors, respectively. Various hydrogen bonds and van der Waals interactions were observed in the docked structures of FNC and PPV/VRP within the active domains of the receptors, as shown in Figure 6 and Figure S1. Further, to explore the voltage-activated potassium channels, the RCSB database was extensively studied, and two isoforms, 6EBM and 7VNP, were utilized.
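As an illustration of what such a Vina/PyRx run looks like in practice, the sketch below writes a Vina configuration and invokes the command-line program; the file names, box centre and box size are placeholders (for blind docking the box is typically enlarged to cover the whole channel), not the actual settings from Table S1.

```python
# Sketch: writing an AutoDock Vina config for a (near-)blind docking run and
# invoking the CLI. File names and box centre/size are illustrative placeholders.
import subprocess

config = """\
receptor = kv_channel_prepared.pdbqt
ligand = fenchone.pdbqt
center_x = 0.0
center_y = 0.0
center_z = 0.0
size_x = 60
size_y = 60
size_z = 60
exhaustiveness = 8
out = fenchone_docked.pdbqt
"""

with open("config.txt", "w") as f:
    f.write(config)

subprocess.run(["vina", "--config", "config.txt"], check=True)
```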
The blind docking results were interesting, as shown in Table 2. With the potassium channel Kv1.2-2.1, FNC and retigabine (RTG) showed binding affinities of −5.7 and −7.6 kcal/mol, respectively, while with KCNQ4 they showed −4.7 and −7.3 kcal/mol, respectively. ADMET Profiling For any molecule to be a successful drug, it must have an acceptable ADMET profile. Table 3 presents the physicochemical parameters of the fenchone molecule, and when fenchone was subjected to ADMET prediction by the pkCSM server, it yielded acceptable ADMET values, as shown in Table 4. The drug-likeness modules showed that fenchone has an acceptable ADMET profile to be considered a drug, and it passed the drug-likeness tests according to Pfizer, Veber, and Egan. The compound showed 98% intestinal absorption and 63% blood-brain barrier permeability, with no AMES toxicity or hepatotoxicity, presenting only skin sensitization. Discussion Fenchone is found in a variety of aromatic plants, including F. vulgare and P. boldus, and is used to treat respiratory problems [9]. We, therefore, attempted to explore the possible bronchodilator effect of fenchone with its detailed pharmacodynamics using guinea pig tracheal muscles as an ex vivo model [26]. Its binding to different molecular targets was confirmed by in silico docking studies. In previous research, we discovered that spasmolytic effects of natural compounds are usually mediated via K+ channel opening [35], PDE inhibition [36], and/or Ca2+ channel blockade [37]; hence, the tracheal relaxant effects of fenchone were tested against sustained contractions induced by low K+ (25 mM), high K+ (80 mM), and CCh (1 µM)-mediated spasms [38].
Interestingly, fenchone selectively inhibited low K+-mediated contractions at lower concentrations compared to its spasmolytic effect against high K+, thus showing a predominantly K+ channel opener-like effect followed by Ca2+ channel blockade [36]. On the other hand, verapamil, a standard Ca2+ antagonist [39], inhibited both low and high K+-mediated contractions at comparable potencies, as expected [40]. From a mechanistic standpoint, these trials effectively distinguish potassium channel openers from calcium channel blockers [27]. Based on the selectively high potency against low K+, fenchone was explored further to understand its action on the K+ channel subtype involved in its spasmolytic effect. The fenchone spasmolytic effect was repeated in tissues pretreated with different blockers, namely glibenclamide, an ATP-dependent K+ channel blocker [21]; 4-AP, a voltage-dependent K+ channel blocker [20]; and TEA, a nonselective K+ channel blocker [19]. All the tested K+ channel blockers shifted the inhibitory CRCs of fenchone towards higher concentrations, thus showing the involvement of ATP-dependent, voltage-dependent, and nonspecific K+ channel subtypes. Different types of K+ channels are abundant in the smooth muscle of the airways, modulating its physiological and pathophysiological states [41,42]. NO causes soluble guanylyl cyclase to be activated, resulting in the formation of cyclic guanosine monophosphate, which activates potassium channels via its cGMP-dependent protein kinase [43]. K+ channel activation causes cell membrane hyperpolarization, reduction of Ca2+ influx, and inhibition of cellular excitability, resulting in smooth muscle relaxation [44,45]. These findings support an earlier study that found that fenchone's antidiarrheal and antispasmodic effects are evoked by ATP-dependent K+ channels [11]. Fenchone inhibited CCh- and high K+-mediated spasms in tracheal chains at comparable concentrations, similar to papaverine, a dual inhibitor of PDE and Ca2+ channels [46]. The papaverine-like PDE-inhibitory effect of fenchone was confirmed when it deflected the inhibitory CRCs of isoproterenol constructed against CCh towards the left, thus showing potentiation. PDE inhibitors elevate the intracellular level of cyclic adenosine monophosphate (cAMP) by inhibiting PDE, which is relaxant in smooth muscles and stimulant in the heart [47]. The phosphodiesterases form a superfamily of enzymes classified into 11 families in mammals, known as PDE1 to PDE11 [48]. The PDE enzyme type 4 (PDE4) is considered more specifically involved in the smooth muscle of the airways; therefore, inhibition of PDE4 will increase cAMP levels in tracheal tissues, resulting in bronchodilation [49]. Therefore, we recommend further studies to precisely determine the inhibitory role of fenchone on PDE subtype 4. Verapamil, a standard Ca2+ channel blocker, showed significantly higher potency against high K+ compared to CCh and did not affect the isoprenaline inhibitory curves, as expected [50]. As fenchone showed complete inhibitory efficacy against high K+ at higher concentrations, further experiments were conducted to confirm its Ca2+ channel inhibitory activity. In a Ca2+-free medium, pretreatment of tracheal tissues with fenchone shifted the CaCl2 curves towards the right with suppression of the maximum response, similar to verapamil and papaverine, thus suggesting a Ca2+ channel blocking effect [51].
The main limitation seen with PDE inhibitors or anticholinergics in the cardiovascular system is their cardiac stimulatory effect if applied alone [52,53]. However, the additional mechanisms of K+ channel opening and/or Ca2+ channel inhibition will perhaps offset the cardiac stimulation, as both are cardio-suppressive [54]. As a result, the present study validates a novel combination of activities with synergistic and/or side-effect neutralizing potential [55]. Further docking studies of fenchone with the PDE receptors and calcium channel receptors suggested that fenchone binds at the same active binding sites as PPV and VRP, respectively. Fenchone exhibited PDE-inhibitory activity similar to PPV but showed a lower binding affinity; the small shape and size of the fenchone molecule can be regarded as the reason for this unique behavior. Fenchone and PPV showed binding with some common residues of the PDE receptors, such as Tyr655, His656, Ile826, and Phe823 in 3ITU; His223 and His267 in 4NPW; and His450, Asn455, and Asp564 in 5LAQ. However, fenchone also interacted with some other residues that were not bonded to PPV, and this might be the reason for the comparable activity of fenchone despite its lower in silico binding affinity. Likewise, fenchone exhibited a Ca2+ channel blocking effect comparable to VRP while binding to a separate set of residues in 6JPA. Further, the docking insights obtained from the voltage-activated potassium channels exhibited similar patterns, and the binding of fenchone was found to be in totally different pockets. In 6EBM, the standard drug bonded with the residues shown in Figure 7A,B. Fenchone showed profound potassium channel activation; it is evident that this is because of binding at a different active binding site. Furthermore, fenchone comprises 11 heavy atoms compared to 22 heavy atoms in retigabine, making the former less complex than the latter. Retigabine has HBD 3 and HBA 3, whereas fenchone has only HBD 0 and HBA 1, giving the former higher binding energy when engaging with protein binding pockets. This might also explain why fenchone, while being a tiny molecule, surprisingly displayed K+ channel activation capabilities in vitro. A compound must have an appropriate hydrophilic and lipophilic nature in order to be developed into a successful drug, and fenchone demonstrated adequate drug-likeness when calculated by the pkCSM software. Conclusions This study shows that fenchone possesses an antispasmodic effect in isolated guinea pig trachea mediated possibly by multiple pathways, predominantly by different types of K+ channel activation followed by the dual inhibition of PDE and Ca2+ channels; additional mechanism(s) cannot be ruled out. Further in silico molecular docking studies provided insight into the binding of fenchone with the Ca2+ channel (−5.3 kcal/mol) and the K+ channel (−5.7 kcal/mol), which also endorsed the idea of dual inhibition. This study may recommend further molecular assays to probe the precise pharmacodynamics involved and will therefore support the development of fenchone in the future for the treatment of hyperactive tracheal disorders.
Supplementary Materials: Figure S1: Panels A–D show the ligand–receptor interactions; panels A–B and C–D show the binding modes of fenchone (FNC) and papaverine (PPV) with 4NPW and 5LAQ, respectively. Dark green circles and lines indicate hydrogen bonds, while light green spheres indicate van der Waals interactions between the ligand and residues. Table S1: Vina grid search space and box sizes of the receptors; Table S2: Redocked structures overlaid on the co-crystallized ligands.
6,083
2022-02-01T00:00:00.000
[ "Biology", "Chemistry" ]
Map-invariant spectral analysis for the identification of DNA periodicities

Many signal-processing-based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral-analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are, however, obtained using one very specific symbolic-to-numerical map, namely the so-called Voss representation. An important research problem is therefore to quantify the sensitivity of these results to the choice of the symbolic-to-numerical map. In this article, a novel algebraic approach to the periodicity-detection problem is presented; it provides a natural framework for studying the role of the symbolic-to-numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, shows that the DNA spectrum is in fact invariant under all these mappings, and generates a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical map. Furthermore, the new algebraic framework decomposes the periodicity-detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternative fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity-detection scheme, regardless of the choice of the symbolic-to-numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at low computational cost.

Introduction
Many researchers have noted that the occurrence of repetitive structures in a DNA sequence is symptomatic of a biological phenomenon. Specific applications of this observation include the identification of diseases [1], DNA forensics [2], and the detection of pathogen exposure [3]. Some of these structures are simple repetitions of short DNA segments, such as exons [4], tandem repeats [5], dispersed repeats [6], and unstable triplet repeats in the noncoding regions [7], while others form more elaborate patterns, such as palindromes [8] and the period-3 component [9–13], a strong periodic characteristic found primarily in genes and pseudogenes [14]. Methods that detect these DNA periodicities are either probabilistic or deterministic. Most of the deterministic techniques rely on spectral analysis of the DNA sequence using the short-time discrete Fourier transform (ST-DFT) [15–17]. The main idea is as follows: given a DNA sequence of length N, numerical values are first assigned to every element in F = {A, C, G, T}, where these letters denote the four nucleotides in the DNA, namely the two purines, adenine (A) and guanine (G), and the two pyrimidines, thymine (T) and cytosine (C). A typical DNA double helix is shown in Figure 1. The symbolic-to-numerical map is clearly not unique; it typically has a biological interpretation and needs to preserve the specific structure of the DNA sequence under study. One such popular map is the Voss representation F → D = {0, 1}, in which four binary indicator sequences x_l(n), l ∈ F, are generated, with 1 indicating the presence of a nucleotide and 0 its absence [18]. An example of the mapping of a single DNA strand to x_l(n), ∀ l ∈ F, is shown in Figure 2.
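Since the Voss map is central to everything that follows, a minimal Python sketch may help; it is our illustration (the function name and toy sequence are invented), not code from the article.

```python
import numpy as np

def voss_map(seq):
    """Voss representation: one binary indicator sequence per nucleotide.

    A 1 marks the presence of that base at a position, a 0 its absence.
    """
    seq = seq.upper()
    return {b: np.array([1 if s == b else 0 for s in seq]) for b in "ACGT"}

x = voss_map("ATGGCATT")
# Exactly one indicator is 1 at every position, so the four sequences sum to 1:
assert (sum(x.values()) == 1).all()
```

The per-position sum-to-one redundancy checked in the last line is exploited repeatedly in the derivations below.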
Once the DNA symbolic sequence is mapped into its numerical version(s), a set of discrete-time sequences is generated; these constitute the numerical equivalent of the DNA sequence. These numerical sequences can then be processed using standard signal-processing techniques. In particular, the ST-DFT of each elementary sequence can be computed as

$$X_l(Rn,k)=\sum_{m=-\infty}^{\infty} x_l(m)\,h(Rn-m)\,e^{-j2\pi km/M},\qquad \forall\, l\in\mathcal{F},\tag{1}$$

where n is the window starting point, R is the amount of window shift, and h(m) = 1 for −M + 1 ≤ m ≤ 0 and zero otherwise. If R = 1, the window slides one nucleotide at a time, whereas if R = 3 the displacement of the window is on a 3-nucleotide basis. Note that the all-ones function h(m) does not affect the value of X_l(Rn, k); it serves, however, as a placeholder for other filters that can replace it, as will be shown in the following section. One popular application of the ST-DFT-based technique that has received considerable attention in the past is the identification of the period-3 component using the DNA spectrum, defined for R = 3 as

$$S(3n,k)\Big|_{k=M/3}=\sum_{l\in\mathcal{F}}\big|X_l(3n,k)\big|^{2}\Big|_{k=M/3}.\tag{2}$$

A number of researchers have advocated the use of the period-3 component to discriminate between coding and noncoding regions (see, for example, [11,13,16,19–23], to name a few), but the subject remains highly controversial, as the approach is successful for certain genes but does not work for others. To better comprehend the underlying reasons behind this disparity in performance, a new multirate DSP model that provides a full understanding of the inner workings of the DNA periodicity was first proposed in [24] and studied in detail in [25]. This model is shown in Figure 3 (the multirate DSP model for general R, with inputs x_A(n), x_C(n), x_G(n), x_T(n); the period-3 case is easily obtained by setting R = 3). The model provides closed-form expressions for the DNA spectrum that generalize and unify some of the already existing results in the literature. One of these expressions in particular clearly shows that the identification of the period-3 component in the DNA spectrum, a signal-processing problem, is equivalent to the detection of the nucleotide-distribution disparity in the codon structure of a DNA sequence, a genomic problem. The disparity in the nucleotide distribution within the codon structure of a DNA sequence is termed the codon bias. Using this model, the DNA spectrum is completely characterized by a set of digital sequences, termed the filtered polyphase sequences. By processing these sequences, signal-processing techniques can potentially have an impact on understanding and detecting biological structures of this nature. From a computational-cost perspective, the computation of the DNA spectrum using this model does not require any complex-valued operations [26]. This finding is rather surprising given the existence of complex multipliers in the proposed DSP model, as clearly illustrated in Figure 3. It is shown that the direct computation of the DNA spectrum using (2) requires essentially double the amount of arithmetic operations compared with the DSP-model approach. It is important, however, to keep in mind that the above conclusions and results were obtained using the Voss symbolic-to-numerical transformation.
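The direct period-3 pipeline of (1)–(2) can be prototyped in a few lines of NumPy. The sketch below assumes the rectangular window and the window-of-length-M-starting-at-Rn convention described above; M = 351 and the toy sequence are illustrative choices of ours, not the article's settings.

```python
import numpy as np

def dna_spectrum(seq, M=351, R=3):
    """Direct DNA spectrum (2) at k = M/R for the Voss-mapped sequence.

    Rectangular window of length M starting at Rn, shifted R nucleotides
    at a time; the squared ST-DFT magnitudes of the four indicator
    sequences are summed at the single frequency bin k = M/R.
    """
    seq = seq.upper()
    k = M // R
    ind = {b: np.array([c == b for c in seq], dtype=float) for b in "ACGT"}
    phasor = np.exp(-2j * np.pi * k * np.arange(M) / M)   # e^{-j 2 pi k m / M}
    starts = range(0, len(seq) - M + 1, R)
    return np.array([sum(abs(ind[b][n:n + M] @ phasor) ** 2 for b in "ACGT")
                     for n in starts])

S = dna_spectrum("ATG" * 200)   # toy, perfectly 3-periodic sequence
print(float(S.max()))           # pronounced period-3 energy at k = M/3
```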
A fundamental research issue is therefore to determine the sensitivity of the signal-processing-based method to the choice of the symbolic-to-numerical map. In particular, the core question here is: how dependent are the above results on the Voss representation? Are these results invariant with respect to the other popular maps in the literature? Can we derive necessary and/or sufficient conditions for the invariance of the DNA spectrum to the symbolic-to-numerical transformation? Is there a general mathematical framework that can help us generate new symbolic-to-numerical maps for which the DNA spectrum remains essentially the same? These are the types of questions we address, and answer, in this article. One approach was presented in [27], where a novel framework for analyzing the equivalence of the mappings used for the numerical representation of symbolic data, based on signal correlation, was presented, along with strong and weak equivalence properties. In [28], we attempted to answer the same question starting from the aforementioned DSP model for a limited set of mappings. Our main goal in this study is to de-embed the symbolic-to-numerical mapping process from the DNA-spectrum computation process; we answer a set of other relevant questions along the way. A key remark is in order at this point: while the DSP-model approach of Figure 3 has many advantages, it is not well suited to investigating the role of the symbolic-to-numerical map in the identification of DNA harmonics. It follows that a completely new paradigm for detecting DNA harmonics is required. The main contribution of this article is therefore the derivation of a novel matrix-based framework for the computation of the DNA spectrum that is extremely well fitted to the study of the symbolic-to-numerical transformation. Specifically, we first derive a new matrix-based expression of the DNA spectrum that: 1. comprises most of the existing mappings in the literature as special cases; 2. shows that the DNA spectrum is in fact invariant under all these mappings; 3. generates a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical mapping used to compute it. Furthermore, the new algebraic framework presented here decomposes the frequency-identification problem into several fundamental components that are totally independent of each other. It follows that sophisticated digital filters and/or alternative transformations to the DFT, such as the discrete cosine, sine, and Hartley transforms, can always be easily incorporated in the harmonics-detection scheme irrespective of the choice of the symbolic-to-numerical map. Finally, although the newly proposed framework is matrix based, we show that, similar to the DSP-model approach, the computation of the DNA spectrum using this new framework is very efficient. The article is organized as follows. In Section 2, we derive a new matrix-based framework to efficiently compute the ST-DFT-based spectrum. New expressions for the ST-DFT X_l(Rn, M/R) and its magnitude squared |X_l(Rn, M/R)|² are obtained and indicate that these quantities are completely parameterized by some predefined matrices. The numerical values of these matrices depend simply on our choice of filtering (e.g., rectangular window versus non-rectangular window versus general FIR filters) as well as our choice of data transform (e.g., the DFT versus the DCT versus the DST). Using these results, in Section 3, a new expression of the DNA power spectrum is derived that is also completely defined by these matrices.
The elegance of this matrix-based approach is that it allows the incorporation of general symbolic-to-numerical maps into the newly derived DNA-spectrum expression, provided these generic maps can be expressed as affine transformations of the Voss representation. This last assumption is motivated by the fact that all the popular maps available in the literature satisfy the affine condition. Furthermore, the maps are now completely characterized by the affine transformation (two matrices A and b) and can therefore be changed without affecting the remaining matrices in the DNA-spectrum expression. In conclusion, the newly derived DNA-spectrum expression is stated as a function of a number of matrices; each of these matrices captures an essential component of the process (filtering, data transform, symbolic-to-numerical map), and the elements of each matrix can be changed without affecting the other matrices. In Section 4, using the above results, we show that the Voss-based DNA spectrum is essentially invariant under some of the most popular maps in the literature. A necessary and sufficient condition for the invariance of the DNA spectrum under any map is also derived. In Section 5, we show how the special structure of the filtering matrix allows the efficient use of sophisticated digital filters to improve the detection performance of DNA harmonics through the computation of the DNA spectrum. We also show how to replace the DFT by other fast transforms such as the discrete cosine transform (DCT), the discrete sine transform (DST), and the discrete Hartley transform (DHT). Finally, some concluding remarks are given in Section 6. A list of the notation used in the article is summarized in Table 1:

Table 1 (notation):
- x_l(n): a discrete-time sequence of length N whose elements belong to the mapped field D, l = 1, ..., γ
- x_l(n) (boldface): the n-th window of length M, extracted from x_l(n), l = 1, ..., γ
- x̃_l(n): the interleaved version of x_l(n), with interleaving factor R, l = 1, ..., γ
- X_l(Rn, k): the ST-DFT of x_l(n), generated using a sliding window of length M and a window shift of length R
- h: an array of length M/R whose elements are all equal to one
- A, b: the affine transformation matrices, of sizes γ × 4 and γ × 1, respectively, that map the four V-based sequences into the γ D-based sequences

A new algebraic framework for computing the ST-DFT
Given a sequence x(n) of length N, the ST-DFT is typically implemented using a sliding-window approach, as shown in Figure 4. Windows of length M that overlap with a factor R are first generated to form x_r(n), r = 1, 2, ..., N_w, where N_w = (N − M + 1)/R is the number of resulting windows. Once we map the DNA sequence into an integer number γ of numeric sequences x_l(n), l = 1, ..., γ (F → D), the ST-DFTs X_l(n), l = 1, ..., γ, can be found, and their squared magnitudes are added to produce the DNA spectrum S(n), as summarized in Figure 5. It was shown in [26] that the ST-DFT of x(n) can be written as

$$X\!\left(Rn,\tfrac{M}{R}\right)=\sum_{r=0}^{R-1}X_r(n)\,e^{-j2\pi r/R},\tag{3}$$

where the quantities X_r(n), ∀ r ∈ {0, 1, ..., R − 1}, are the so-called filtered polyphase sequences, given by

$$X_r(n)=\sum_{m=0}^{M/R-1}h(m)\,x(n+mR+r).\tag{4}$$

In this section, we re-express these equations in matrix form for the ST-DFT of a discrete-time sequence, and subsequently its magnitude squared, and then use the new formulas to derive an expression for |X(Rn, M/R)|².
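A quick numerical check of the polyphase decomposition (3)–(4) may be useful here. The following sketch is our toy verification, under the all-one (rectangular) filter assumption, that the phasor-weighted sum of the polyphase sums equals the ST-DFT evaluated directly at k = M/R.

```python
import numpy as np

M, R = 12, 3
x = np.random.rand(M)                       # one numeric window

# Filtered polyphase sequences (4) with the all-one filter h:
Xr = np.array([x[r::R].sum() for r in range(R)])

# Eq. (3): phasor-weighted sum of the polyphase sequences ...
X_poly = Xr @ np.exp(-2j * np.pi * np.arange(R) / R)
# ... equals the ST-DFT computed directly at k = M/R:
X_direct = x @ np.exp(-2j * np.pi * (M // R) * np.arange(M) / M)
assert np.isclose(X_poly, X_direct)
```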
Throughout the article, vectors and matrices (arrays) are always expressed in bold letters. (Figure 4: splitting x(n) into N_w overlapping sections x_r(n) using a sliding-window approach.) The notation for the various matrix operations is given in Table 2.

Matrix formulation of the ST-DFT
Using the defined matrix notation, we can restate Equation (3) as

$$X\!\left(Rn,\tfrac{M}{R}\right)=\mathbf{C}^{T}\mathbf{X}(n).\tag{5}$$

The real-valued array X(n) = [X_0(n), X_1(n), ..., X_{R−1}(n)]^T is the vector whose elements are the R filtered polyphase components. Similarly, the complex-valued R-element array C = [1, e^{−j2π/R}, ..., e^{−j2π(R−1)/R}]^T is the vector whose elements are the R equispaced phasors located on the unit circle with 2π/R phase deviations, as shown in Figure 6 for R = 3 and R = 8. Note that Σ_{r=0}^{R−1} e^{−j2πr/R} = 0 for all R > 1, which implies that the sum of the elements of C is equal to 0. This is a key feature of the complex array C that will be used in later sections to simplify important expressions. On the other hand, we observe that (4) can be written in the matrix format

$$X_r(n)=\mathbf{h}^{T}\tilde{\mathbf{x}}_r(n),\qquad r=0,1,\ldots,R-1,\tag{9}$$

where h is an all-one vector of length M/R and x̃_r(n), of length M/R, is the r-th polyphase component of the window x(n) of length M. Using (9), the R filtered polyphase components X_r(n) can be arranged in array format, and it follows that

$$\mathbf{X}(n)=\left(\mathbf{I}_R\otimes\mathbf{h}^{T}\right)\tilde{\mathbf{x}}(n)=\mathbf{H}\,\tilde{\mathbf{x}}(n),\qquad \mathbf{H}\doteq\mathbf{I}_R\otimes\mathbf{h}^{T}.\tag{12}$$

The window x̃(n) of length M is a block-interleaved version of the sliding window x(n) of length M starting at index n. Generating x̃(n) can be accomplished by blocking the window x(n) into an array of R elements per row (hence M/R rows) and then reading the array out column by column. The ST-DFT X(Rn, M/R) can therefore be completely identified as a function of C, h, and x̃(n) as follows:

$$X\!\left(Rn,\tfrac{M}{R}\right)=\mathbf{C}^{T}\mathbf{H}\,\tilde{\mathbf{x}}(n).\tag{13}$$

The complex row vector C^T H is an array of R blocks, each of length M/R, representing M/R repetitions of each element of C. Similar to C, the sum of the elements of C^T H is equal to 0.

A matrix-based expression for the magnitude squared of the ST-DFT
Using (5), the magnitude squared of the ST-DFT can be expressed as

$$\left|X\!\left(Rn,\tfrac{M}{R}\right)\right|^{2}=\mathbf{X}(n)^{T}\mathbf{D}\,\mathbf{X}(n),\qquad \mathbf{D}\doteq\bar{\mathbf{C}}\,\mathbf{C}^{T}.\tag{14}$$

D is obviously a right-circulant (hence Toeplitz) matrix whose rows and columns are rotated versions of C. Obviously, the sum of the elements of any row or column of D is equal to 0. Substituting (12) in (14), or equivalently using (13), implies that the spectrum S(n) can be stated as

$$S(n)=\tilde{\mathbf{x}}(n)^{T}\mathbf{W}\,\tilde{\mathbf{x}}(n),\qquad \mathbf{W}\doteq\mathbf{H}^{H}\mathbf{D}\,\mathbf{H}.\tag{15}$$

Matrix W can be represented as the Kronecker product of D and an (M/R) × (M/R) all-one matrix. Note that any row or column of W is a rotated version of C^T H; therefore, the sum of the elements of any row or column of W is equal to 0.

The new DNA spectrum expression
A first step towards finding the DNA spectrum S(n) is the symbolic-to-numeric mapping F → D, as was shown in Figure 5. Once the symbolic DNA sequence is mapped into γ numeric sequence(s), the short-time discrete Fourier transform is applied to each of them, and the sum of the squared magnitudes of the ST-DFTs yields the DNA spectrum at the frequency point k = M/R:

$$S(Rn,k)\Big|_{k=M/R}=\sum_{l=1}^{\gamma}\big|X_l(Rn,k)\big|^{2}\Big|_{k=M/R}.$$

For simplicity, we denote S(Rn, k)|_{k=M/R} by S(n) in the following sections. Several mappings have been introduced in the literature, using both real and complex numerical values, with a typical number of sequences γ = 1 up to 4 to maintain reasonable computational complexity. In this section, we use the results of Section 2 to derive general expressions for the M/R ST-DFT and spectrum for any symbolic-to-numeric mapping.
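The matrix chain (13)–(15) can be verified numerically. The sketch below (ours, with arbitrary small M and R) builds C, H, D, and W, block-interleaves a random window as described above, and checks both the Kronecker structure of W and that the quadratic form reproduces the directly computed squared magnitude.

```python
import numpy as np

M, R = 12, 3
C = np.exp(-2j * np.pi * np.arange(R) / R)  # R equispaced phasors; sums to 0
h = np.ones(M // R)                         # all-one filter (rectangular window)
H = np.kron(np.eye(R), h)                   # H = I_R (x) h^T, size R x M
D = np.outer(C.conj(), C)                   # right-circulant; rows/cols sum to 0
W = H.conj().T @ D @ H                      # equals kron(D, all-one (M/R) x (M/R))
assert np.allclose(W, np.kron(D, np.ones((M // R, M // R))))

x = np.random.rand(M)
xt = x.reshape(-1, R).T.ravel()             # block-interleaved window x~(n)
S = (xt @ W @ xt).real                      # Eq. (15)
X = x @ np.exp(-2j * np.pi * (M // R) * np.arange(M) / M)
assert np.isclose(S, abs(X) ** 2)           # matches |X(Rn, M/R)|^2
```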
The Voss-based DNA spectrum
The simplest and most commonly used map of a DNA sequence is the Voss representation F → V: form γ = 4 binary indicator sequences x_A(n), x_C(n), x_G(n), and x_T(n), where a 1 indicates the presence of a base and a 0 its absence [18]. This approach has been used extensively in relevant genomic research. Note that the four sequences are not linearly independent, since for any index n the four sequences add up to one:

$$\sum_{l\in\mathcal{F}} x_l(n)=1,\qquad \forall\, n.$$

This redundancy plays an important role in the derivations of this section. Moreover, it follows that for any length-M window starting at n, the four mapped Voss windows add up to an all-one length-M sequence, and the same fact holds for the interleaved windows:

$$\sum_{l\in\mathcal{F}}\tilde{\mathbf{x}}_l(n)=[\,1\;1\;\cdots\;1\,]^{T}.\tag{17}$$

For illustration, Figure 7a shows a sample DNA window that is mapped into the corresponding numeric windows of Figure 7b,d,f,h. With an example interleaving factor R = 3, the interleaved windows x̃_l(n), ∀ l ∈ F, are shown in Figure 7c,e,g,i. (Figure 7: the interleaved windows are generated by rearranging the original windows in an R = 3 interleaved format; in this example, data points of x̃_l(n) at positions (1,2,3), (4,5,6), (7,8,9) are mapped from those of x_l(n) at positions (1,4,7), (2,5,8), (3,6,9).) Each of the four sequences is a discrete-time sequence that can be processed using the analysis of Section 2. Therefore, the ST-DFT of each sequence can be found using (13) to be

$$X_l\!\left(Rn,\tfrac{M}{R}\right)=\mathbf{C}^{T}\mathbf{H}\,\tilde{\mathbf{x}}_l(n),\qquad \forall\, l\in\mathcal{F},\tag{18}$$

and the power spectrum of each sequence can hence be derived, as in (15), to be

$$S_l(n)=\tilde{\mathbf{x}}_l(n)^{T}\mathbf{W}\,\tilde{\mathbf{x}}_l(n),\qquad \forall\, l\in\mathcal{F}.\tag{19}$$

An obvious step at this point is to simplify (19) to avoid the summation over the different bases. To do this, we use Equation (18) to arrange the ST-DFTs of x_l(n), ∀ l ∈ F, in a single array: we define ϒ_v(n), the array of the four Voss-based ST-DFTs, which can be written as

$$\boldsymbol{\Upsilon}_v(n)=\left(\mathbf{I}_4\otimes\mathbf{C}^{T}\mathbf{H}\right)\tilde{\mathbf{x}}_v(n),$$

where I_4 is the 4 × 4 identity matrix and the vector x̃_v(n) of length 4M is an array of the four Voss interleaved windows starting at index n: x̃_l(n), ∀ l ∈ F. Using the Kronecker-product identity

$$(\mathbf{A}_1\otimes\mathbf{B}_1)(\mathbf{A}_2\otimes\mathbf{B}_2)=(\mathbf{A}_1\mathbf{A}_2)\otimes(\mathbf{B}_1\mathbf{B}_2),\tag{22}$$

the Voss-based DNA power spectrum can be manipulated into

$$S_v(n)=\tilde{\mathbf{x}}_v(n)^{T}\left(\mathbf{I}_4\otimes\mathbf{W}\right)\tilde{\mathbf{x}}_v(n).\tag{23}$$

In (23), I_4 and W are constant matrices ∀ n. Hence, the computation of the spectrum S_v(n) for different windows of a DNA sequence requires only the evaluation of the Voss interleaved array x̃_v(n).

Computing the DNA spectrum under general symbolic-to-numerical maps
Similar to the Voss-representation case, any map F → D of γ sequences can be processed using the analysis of Section 2. It directly follows that the array of the γ D-mapped ST-DFTs is

$$\boldsymbol{\Upsilon}_d(n)=\left(\mathbf{I}_\gamma\otimes\mathbf{C}^{T}\mathbf{H}\right)\tilde{\mathbf{x}}_d(n),\tag{24}$$

and the D-based DNA spectrum can easily be shown to be

$$S_d(n)=\tilde{\mathbf{x}}_d(n)^{T}\left(\mathbf{I}_\gamma\otimes\mathbf{W}\right)\tilde{\mathbf{x}}_d(n),\tag{25}$$

where the vector x̃_d(n) of length γM is an array of the γ D-mapped and interleaved windows starting at index n: x̃_l(n), ∀ l = 1, 2, ..., γ. It is clear that for every different map F → D, a new interleaved-windows array x̃_d(n) has to be evaluated in order to compute a spectrum point S_d(n).
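As a sanity check of (23) before the general-map derivation, the following sketch (our illustration; the helper names and toy sequence are invented) stacks the four interleaved Voss windows and confirms that the block form I_4 ⊗ W reproduces the sum of the four per-base spectra of (19).

```python
import numpy as np

def W_matrix(M, R):
    c = np.exp(-2j * np.pi * np.arange(R) / R) @ np.kron(np.eye(R), np.ones(M // R))
    return np.outer(c.conj(), c)            # W = H^H D H (rectangular window)

def voss_tilde(seq, M, R, n=0):
    """Stacked, R-interleaved Voss windows x~_v(n), total length 4M."""
    win = seq[n:n + M]
    return np.concatenate([np.array([s == b for s in win], dtype=float)
                           .reshape(-1, R).T.ravel() for b in "ACGT"])

M, R = 12, 3
W = W_matrix(M, R)
xv = voss_tilde("ATGGCATTAGCC", M, R)

S_v = (xv @ np.kron(np.eye(4), W) @ xv).real            # Eq. (23)
S_sum = sum((xv[l*M:(l+1)*M] @ W @ xv[l*M:(l+1)*M]).real for l in range(4))
assert np.isclose(S_v, S_sum)                           # sum of per-base spectra (19)
```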
In the following, we introduce a new approach that recomputes (25) without updating x̃_d(n) for every map. Basically, we derive a new expression for S_d(n) in terms of x̃_v(n) and a new constant matrix, so that the map dependence is incorporated in the matrix part rather than in the interleaved-array part. In other words, since the map F → V is already well defined, we use the map V → D to complete the chain F → V → D and hence find the spectrum S_d(n). Consider the following affine transformation from the Voss sequences to a general array of D-mapped sequences:

$$\mathbf{x}_d(n)=\mathbf{A}\,\mathbf{x}_v(n)+\mathbf{b},$$

where A, of size γ × 4, and b, of size γ × 1, are possibly complex-valued arrays. It follows that the array of the D-mapped interleaved windows x̃_d(n) can be written in terms of the array of the Voss-mapped interleaved windows x̃_v(n) in the form

$$\tilde{\mathbf{x}}_d(n)=\left(\mathbf{A}\otimes\mathbf{I}_M\right)\tilde{\mathbf{x}}_v(n)+\tilde{\mathbf{b}},\tag{27}$$

where b̃ is an array of γ M-element blocks, each block being M repetitions of one element of b. Substituting for x̃_d(n) in (24) results in a new formula for the array of D-mapped ST-DFTs:

$$\boldsymbol{\Upsilon}_d(n)=\left(\mathbf{I}_\gamma\otimes\mathbf{C}^{T}\mathbf{H}\right)\left[\left(\mathbf{A}\otimes\mathbf{I}_M\right)\tilde{\mathbf{x}}_v(n)+\tilde{\mathbf{b}}\right].\tag{28}$$

An important result at this point is that the second term in ϒ_d(n) is actually equal to 0. This can be verified by reducing it to the form

$$\left(\mathbf{I}_\gamma\otimes\mathbf{C}^{T}\mathbf{H}\right)\tilde{\mathbf{b}}=\left[\,\mathbf{C}^{T}\mathbf{H}\,\tilde{\mathbf{b}}_1,\;\ldots,\;\mathbf{C}^{T}\mathbf{H}\,\tilde{\mathbf{b}}_\gamma\,\right]^{T}.$$

Recall that the sum of the elements of C^T H is equal to 0. Therefore, since each b̃_l is a constant vector, the product C^T H b̃_l is equal to 0, ∀ l = 1, 2, ..., γ, and hence the second term vanishes. The ST-DFT array ϒ_d(n) can therefore be simplified, using the Kronecker-product identity (22), into

$$\boldsymbol{\Upsilon}_d(n)=\left(\mathbf{A}\otimes\mathbf{C}^{T}\mathbf{H}\right)\tilde{\mathbf{x}}_v(n).\tag{29}$$

It follows that the D-based DNA spectrum S_d(n) is

$$S_d(n)=\tilde{\mathbf{x}}_v(n)^{T}\left(\mathbf{B}\otimes\mathbf{W}\right)\tilde{\mathbf{x}}_v(n),\qquad \mathbf{B}\doteq\mathbf{A}^{H}\mathbf{A}.\tag{30}$$

Equation (30) indicates that when a certain symbolic-to-numeric mapping F → D is used, the DNA power spectrum S_d(n) is completely defined in terms of the Voss-based interleaved array x̃_v(n) together with the constant matrices W and B, the latter being a function of the transformation matrix A (V → D). Note that if A = I_4 then B = I_4, at which point (30) reduces to (23), the Voss-based spectrum case.

Invariance of the DNA spectrum under popular mappings
The results found in Section 3 can be applied to mappings that are widely used in the literature. Specifically, by defining the corresponding transformation matrices A and B (V → D), closed-form expressions for S_d(n) are obtained. Furthermore, for a number of mappings, we show that the D-mapped spectrum S_d(n) is in fact a scaled version of the Voss-based spectrum S_v(n).

Four-to-four (γ = 4) representations
In this scheme, each Voss sequence is scaled by a possibly complex coefficient, according to the transformation matrices A = diag(a, c, g, t) and b = 0, where a, c, g, and t are real or complex coefficients used to scale x_A(n), x_C(n), x_G(n), and x_T(n), respectively. The corresponding array of ST-DFTs ϒ_d(n) follows directly from (29). We now extend this result to certain transformations in which the numeric values of the scale factors a, c, g, and t are specified. § Tetrahedral mapping. The so-called tetrahedral representation has been proposed in [13,29]. In this mapping scheme, the four nucleotides are represented by four equal-length vectors oriented towards the corners of a tetrahedron; projecting the basic tetrahedron onto a plane reduces the dimensionality of the representation to two. This mapping is defined by a corresponding diagonal mapping matrix, and it can easily be seen that in this case |a| = |c| = |g| = |t| = √2, which implies that B = 2I_4. The corresponding DNA spectrum is S_d(n) = 2S_v(n). Since B = αI_4 (with α = 2), the tetrahedral-based DNA spectrum is a scaled version of the Voss-based spectrum. § Quaternion mapping. A more involved step is to replace the complex-number set of the tetrahedral mapping with its algebraic generalization, the set of quaternions. Quaternions have been used to map DNA sequences F → H [30] and are simply defined as hypercomplex numbers p ∈ H = {a + bi + cj + dk | a, b, c, d ∈ R}, where i, j, k are imaginary units such that i² = j² = k² = ijk = −1. For the quaternion mapping coefficients, |a| = |c| = |g| = |t| = √3, so B = 3I_4, and the corresponding DNA spectrum is S_d(n) = 3S_v(n). § Higher-order mappings. An alternative quaternion transformation is given by A = diag(1+i+j+k, 1+i−j−k, 1−i−j+k, 1−i+j−k), which results in B = 4I_4 and consequently S_d(n) = 4S_v(n).
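Equation (30) and the γ = 4 scaling results can be exercised together. In the sketch below, A is a diagonal map whose four coefficients all have magnitude √2; the phases are placeholders of our choosing (the tetrahedral coefficients are one such choice), so B = AᴴA = 2I_4 and the D-mapped spectrum comes out exactly twice the Voss spectrum.

```python
import numpy as np

def W_matrix(M, R):
    c = np.exp(-2j * np.pi * np.arange(R) / R) @ np.kron(np.eye(R), np.ones(M // R))
    return np.outer(c.conj(), c)

def voss_tilde(seq, M, R):
    return np.concatenate([np.array([s == b for s in seq[:M]], dtype=float)
                           .reshape(-1, R).T.ravel() for b in "ACGT"])

M, R = 12, 3
W, xv = W_matrix(M, R), voss_tilde("ATGGCATTAGCC", M, R)
S_v = (xv @ np.kron(np.eye(4), W) @ xv).real            # Voss spectrum (23)

# gamma = 4 scaling map: four coefficients of magnitude sqrt(2),
# placeholder phases (the tetrahedral coefficients are one such choice)
A = np.diag(np.sqrt(2) * np.exp(1j * np.array([0.3, 1.1, 2.0, 2.9])))
B = A.conj().T @ A
assert np.allclose(B, 2 * np.eye(4))                    # B = 2 I_4

S_d = (xv @ np.kron(B, W) @ xv).real                    # Eq. (30)
assert np.isclose(S_d, 2 * S_v)                         # tetrahedral-type scaling
```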
In general, for a complex representation system with η dimensions and equal-amplitude coefficients, B = ηI_4 and hence S_d(n) = ηS_v(n).

Four-to-three (γ = 3) mappings
In order to reduce the computational cost of the DNA spectrum, several mappings with smaller numbers of sequences have been proposed. One such important symbolic-to-numeric map is the Z-curve mapping [24], a unique 3-dimensional curve representation whose sequences take the values 1 and −1. One advantage of the Z-curve mapping is that each of its three sequences has a biological interpretation. The scheme maps the four indicator sequences into three ±1-valued sequences, and the corresponding transformation matrices A (of size 3 × 4) and b follow directly. The resulting matrix B decomposes into a scaled identity plus terms B_1 and B_2 whose rows and/or columns are constant. The term involving B_1 in S_d(n) can be manipulated using the redundancy of the Voss windows: recall from (17) that Σ_{l∈F} x̃_l(n) = [1 1 ··· 1]^T, and take into consideration that the sum of the elements of any row or column of W is equal to 0. This implies that W Σ_{l∈F} x̃_l(n) = 0, from which it is easy to see that S_d(n)|_{B_1} = 0. Similarly, S_d(n)|_{B_2} = 0. Therefore, only the first term of B contributes to S_d(n), and the Z-curve-mapped DNA spectrum is a scaled version of the Voss-based DNA spectrum. This ratio is consistent with the result we first derived in [24] for R = 3, but is now shown to be general for any value of R. We are now ready to state an important result.

Theorem (necessary and sufficient condition for the invariance of the DNA spectrum). Consider the affine transformation x_d(n) = A x_v(n) + b from the Voss sequences to a general array of D-mapped sequences. The D-based spectrum S_d(n) is a scaled version of the Voss-based spectrum, S_d(n) = αS_v(n), if and only if B = A^H A can be written as B = αI_4 + Σ_i B_i, where every B_i has constant rows and/or constant columns.

The proof follows by simply observing that if B_i has constant rows and/or constant columns, then S_d(n)|_{B_i} = 0. We remind the reader at this point that the vector b (of size γ × 1) has no bearing on the invariance of the DNA spectrum. § Simplex mapping. The simplex mapping is essentially another tetrahedron-structured mapping that aims to eliminate computational redundancy. Its matrix B can be written as a scaled identity plus a term B_1 with constant entries. Similar to the Z-curve case, S_d(n)|_{B_1} = 0. It follows that the simplex-based DNA spectrum is also a scaled version of the Voss-based spectrum. This ratio is consistent with the result in [31], which was limited to the direct DFT, and is now shown to extend to the M/R ST-DFT for any value of R.

Four-to-two (γ = 2) mappings
Pairing couples of nucleotides together was proposed in the literature in order to exploit certain biological features in addition to complexity reduction. For example, it was suggested that exons are rich in the nucleotides C and G, while introns have more A and T [29]. This claim inspired a transformation that pairs the C and G sequences and the A and T sequences. The DNA spectrum in this case simplifies accordingly, but it is obviously not a scaled version of S_v(n), since B cannot be written as αI_4 + Σ_i B_i with every B_i holding constant rows and/or constant columns.

Four-to-one (γ = 1) mappings
Single-sequence representations can be generated by assigning each nucleotide a certain coefficient [4,13] in order to keep the single-sequence structure, using a 1 × 4 transformation array A. Note that the coefficients chosen for the tetrahedral, quaternion, and paired-couples mappings can be reused within the single-sequence formulation. For example, the paired-couples case can be reformulated as a single sequence of 1's and −1's using A = [−1 1 1 −1], at which point the DNA spectrum follows from (30). Similar to the previous case, S_d(n) is not a scaled version of S_v(n).
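The theorem is easy to exercise numerically. The sketch below uses one conventional Z-curve row assignment (our choice; the article's exact matrix is not reproduced, but any ±1-valued variant of this form behaves the same way): B = AᵀA = 4I_4 − J, with J the all-one matrix, the constant-row term J contributes nothing, and the Z-curve spectrum comes out as an exact scaled version of the Voss spectrum.

```python
import numpy as np

def W_matrix(M, R):
    c = np.exp(-2j * np.pi * np.arange(R) / R) @ np.kron(np.eye(R), np.ones(M // R))
    return np.outer(c.conj(), c)

def voss_tilde(seq, M, R):
    return np.concatenate([np.array([s == b for s in seq[:M]], dtype=float)
                           .reshape(-1, R).T.ravel() for b in "ACGT"])

# One conventional Z-curve row assignment (columns ordered A, C, G, T):
# purine/pyrimidine, amino/keto, and weak/strong components.
A = np.array([[1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)
B = A.T @ A
assert np.allclose(B, 4 * np.eye(4) - np.ones((4, 4)))  # B = 4 I_4 - J

# J has constant rows, so (by the theorem) its contribution vanishes:
M, R = 12, 3
W, xv = W_matrix(M, R), voss_tilde("ATGGCATTAGCC", M, R)
S_v = (xv @ np.kron(np.eye(4), W) @ xv).real
S_z = (xv @ np.kron(B, W) @ xv).real
assert np.isclose(S_z, 4 * S_v)                         # exact scaled version
```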
Experimental verification. To briefly verify the results of this section experimentally, we apply Equation (30) to real DNA sequences, using the Voss, tetrahedral, quaternion, Z-curve, and simplex maps. For comparison with previous work, we first consider the DNA sequence F56F11.4 in the C. elegans chromosome III. This sequence is 8060 nucleotides long and has been used as a benchmark by many researchers [13] to extract the periodicity component at R = 3. The DNA spectra at R = 3 are shown in Figure 8 for the five aforementioned mappings; they are clearly related by the constant scale factors derived earlier in this section, which verifies our results. Although we lack the space for more general simulations, it is important to state that all the spectral relations are maintained experimentally at other values of R associated with higher-order periodicities. For generality, we test two more sequences extracted from the well-known Burset-Guigo database [32]. Specifically, the DNA spectra at R = 3 of the zeta-globin gene (ECZGL2), of length 1563, and of the Alouatta seniculus epsilon-globin gene (ALOEGLOBIM), of length 1691, are shown in Figures 9 and 10, respectively, for the five mappings. It can be seen that the relations are still preserved.

Alternative measures of DNA periodicities
Alternative DNA-periodicity measures using fast data transforms [33–35], wavelets, and finite-impulse-response (FIR) digital filters [25,36] were recently proposed to improve the detection performance of these periodicities. However, each method was obtained separately from the others, using a seemingly different approach. In this section, we show that our proposed framework can systematically generate all these results by simply changing a number of matrices; it therefore provides a generic, unified framework for generating alternative measures of DNA periodicities. For example, we can re-express the matrices D and W in terms of general digital filters and use these filters to modify (30) in order to generate new spectrum formulas. Furthermore, using symmetry-based decompositions of D and W, we simplify (30) into a formula with low computational complexity.

Modified periodicity measures
Recall from Section 2 that matrix W is given by W = H^H D H. Obviously, W is completely defined by the real array h and the generally complex array C. Note that h and C can be viewed as the impulse responses of two FIR filters defined by the z-transforms H(z) and C(z).

Updating the real filter h
The FIR filter H(z) is the standard rectangular-window filter and has a low-pass frequency response with a −13 dB sidelobe attenuation. To improve its filtering performance, we can use a more general FIR filter, denoted H̃(z), which is the z-transform of a general window array h̃ of length M/R. From a signal-processing perspective, better performance can be obtained by replacing the rectangular window with another one, H̃(z), that has a slightly wider main lobe but much more attenuated side lobes, as shown in Table 3. The impulse responses of such windows are depicted in Figure 11a; unlike the rectangular case, unequal weighting is given to the nucleotides. It turns out that the Blackman window has the best main-to-first-sidelobe attenuation behavior, as shown in Figure 11b, compared with the rectangular-window case, and it therefore provides the best smoothing of the DNA spectrum. The complex row vector C^T H̃, with H̃ = I_R ⊗ h̃^T, is now the block-repeated version of C weighted by h̃, and it can easily be seen that the sum of the elements of C^T H̃ is still equal to zero, as was the case for C^T H.
Consequently, it follows that the sum of the elements of any row or column of W̃ = H̃^H D H̃ is still equal to zero. This is a fundamental result which, in turn, implies that all the derivations of Section 3 remain unchanged when h̃ replaces h. In particular, the V-based DNA spectrum S̃_v(n) and the D-based one S̃_d(n) can be stated as

$$\tilde{S}_v(n)=\tilde{\mathbf{x}}_v(n)^{T}\big(\mathbf{I}_4\otimes\tilde{\mathbf{W}}\big)\tilde{\mathbf{x}}_v(n),\qquad \tilde{S}_d(n)=\tilde{\mathbf{x}}_v(n)^{T}\big(\mathbf{B}\otimes\tilde{\mathbf{W}}\big)\tilde{\mathbf{x}}_v(n).\tag{36}$$

Moreover, all the mathematical relations derived in Section 3 between the D-based spectra and the Voss-based one remain valid when h is replaced by h̃. Experimental verification. To verify this result experimentally, we compute the DNA spectrum S̃_d(n) of the three DNA sequences used in the previous section when h̃ is set to a Blackman window. The relations between the spectra obtained with the Voss, tetrahedral, quaternion, Z-curve, and simplex mappings are still the same, as shown in Figures 12, 13, and 14.

Updating the complex filter C
Similar to H(z), the FIR filter C(z) can be replaced by a more sophisticated filter C̃(z), the z-transform of a general complex array C̃ of length R. Note that, in this case, the elements of the array C̃ do not necessarily add to zero anymore. Consequently, the sum of the elements in any row or column of D̃ = conj(C̃) C̃^T, or of W̃ = H̃^H D̃ H̃, is not necessarily zero. We also note that, unlike the case of h̃, using C̃ instead of C keeps the spectrum formulas in (36) correct but does not preserve the mathematical relations between the different D-mapped spectra and the Voss-based spectrum.

Joint optimization of h̃ and C̃
It should be clear at this point that better DNA-harmonics detection performance can potentially be achieved through a joint "optimization" of h̃ and C̃. For example, a learning paradigm with a least-mean-square (LMS) criterion could be used to find the optimal set h̃ and C̃. Alternatively, a biologically induced criterion could yield a substantial boost in performance, but it is not clear which criterion to use. This interesting but challenging research topic is, however, outside the scope of this article and will not be pursued further here.
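The property that keeps these relations intact under a smoother h̃ (and that fails for a general C̃) is the zero sum of C^T H̃. A small check, using NumPy's Blackman window as the example h̃ (illustrative sizes; ours, not the authors' code):

```python
import numpy as np

M, R = 24, 3
C = np.exp(-2j * np.pi * np.arange(R) / R)
h_t = np.blackman(M // R)                   # h~: Blackman window replaces all-ones
H_t = np.kron(np.eye(R), h_t)               # H~ = I_R (x) h~^T
c = C @ H_t

assert abs(c.sum()) < 1e-12                 # sum of C^T H~ is still zero ...
W_t = np.outer(c.conj(), c)                 # ... so every row/column of W~ sums to 0
assert np.allclose(W_t.sum(axis=0), 0) and np.allclose(W_t.sum(axis=1), 0)
```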
Example: alternative fast transforms
Standard discrete-time transforms have been proposed to replace the ST-DFT in the periodicity-detection problem. In particular, the short-time discrete cosine transform (ST-DCT), sine transform (ST-DST), and Hartley transform (ST-DHT) were introduced and analyzed for this purpose [33]. In this example, we show that these three transforms fit naturally within our proposed analysis when the two arrays h̃ and C̃ are adjusted correctly for each case. Although these standard transforms are not optimized for particular data sets, they can serve as preliminary tests for better periodicity detection. In [33], the short-time DFT, DCT, DST, and DHT at k = M/R were shown to be expressible in the common form of a coefficient-weighted combination of the filtered polyphase sequences,

$$X^{(t)}\!\left(Rn,\tfrac{M}{R}\right)=\alpha\sum_{r=0}^{R-1}C_r^{(t)}\,X_r(n),\tag{37}$$

where t ∈ {f, c, s, h} indicates the Fourier, cosine, sine, and Hartley transforms, respectively, and C_r^{(t)} are possibly complex coefficients determined by the parameters a, b, and θ_r. The values of the parameters α, a, b, and θ_r for each transform are adjusted according to Table 4. (Table 4: parameter settings in Figure 15 to compute the short-time Fourier, cosine, sine, and Hartley transforms.) For illustration, setting α = 1, a = 1, b = 0, and θ_r = −2πr/R in (37) results in the ST-DFT case. An efficient implementation for calculating Equation (37) is shown in Figure 15, which generalizes Figure 3. This model provides a general framework that encapsulates the computation of the short-time Fourier, cosine, sine, and Hartley transforms at the frequency point k = M/R. Therefore, the same matrix-based analysis of Sections 2 and 3 can be used. Matrix W is updated into W̃ = H̃^H D̃ H̃ = (I_R ⊗ h̃^T)^H conj(C̃) C̃^T (I_R ⊗ h̃^T), and therefore the D-based DNA spectrum S̃_d(n), when any of the ST-DFT, DCT, DST, or DHT is employed, can be stated as

$$\tilde{S}_d(n)=\tilde{\mathbf{x}}_v(n)^{T}\big(\mathbf{B}\otimes\tilde{\mathbf{W}}\big)\tilde{\mathbf{x}}_v(n),\tag{38}$$

where the values of h̃ and C̃ are adjusted according to Table 5. Note that, similar to the Fourier case, the sum of the elements of C̃ for the cosine and Hartley transform cases is equal to zero; therefore, under these two transforms, the relations between the different D-based DNA spectra and the V-based DNA spectrum are still the same. The modified DNA spectrum is thus completely characterized by the V → D mapping matrix A, the real array h̃, and the generally complex array C̃. This conclusion is summarized in Figure 16.

A real approach for the spectrum computation
A real, computationally efficient alternative for the evaluation of S_d(n) can be found by observing the special properties of the circulant/Toeplitz matrix D, or equivalently of the block matrix W. We use the fact that, for a generally complex matrix Q, y^T Q y = 0 for all real y if Q is antisymmetric. We start by splitting D into its symmetric and antisymmetric parts, D = D_s + D_as, where D_s is a real circulant (and Toeplitz) R × R matrix given by

$$[\mathbf{D}_s]_{r,s}=\cos\!\left(\frac{2\pi(r-s)}{R}\right).$$

Substituting for D in (15), we get a simple real form of the spectrum

$$S(n)=\mathbf{y}^{T}\mathbf{D}_s\,\mathbf{y},\qquad \mathbf{y}=\mathbf{H}\tilde{\mathbf{x}}(n),\tag{39}$$

since y^T D_as y = 0 for the real array y = H x̃(n). The block matrix W_s ≐ H^H D_s H is likewise entirely real. Following the same analysis as in Section 3, (39) can easily be manipulated into a more elegant, completely real form for the Voss case,

$$S_v(n)=\sum_{l\in\mathcal{F}}\mathbf{y}_l^{T}\mathbf{D}_s\,\mathbf{y}_l,\qquad \mathbf{y}_l=\mathbf{H}\tilde{\mathbf{x}}_l(n),\tag{40}$$

or, more generally, (30) can be updated into

$$S_d(n)=\tilde{\mathbf{x}}_v(n)^{T}\left(\mathbf{B}\otimes\mathbf{W}_s\right)\tilde{\mathbf{x}}_v(n),\tag{41}$$

which provides a completely real approach for the computation of the D-mapped spectrum S_d(n). Note that all the results and the different spectra relations of Section 3 still hold when W_s replaces W, as in (41).

Computational complexity comparison. To quantify the computational benefit of this real approach, we compare the complexity of (39) with that of (15) for a single discrete-time sequence. Since x̃(n) can itself be complex, depending on the mapping used, we count the real multiplications and additions needed to evaluate (39) when each of x̃(n) and W is either real or complex, as given in Table 6. Recall that the multiplication of two complex numbers x = a + jb and y = c + jd requires the computation of ac − bd and ad + bc, i.e., four real multiplications and two real additions.

Example. For illustration, we evaluate the spectrum S_v(n) using W_s for R = 3 and compare the result with [37]. Specifically, we use (40) to find the spectrum S(n); expanding and completing the square, it follows that

$$S_v(n)=\sum_{l\in\mathcal{F}}\Big[X_{l0}^2(n)+X_{l1}(n)\big(X_{l1}(n)-X_{l0}(n)\big)+X_{l2}(n)\big(X_{l2}(n)-X_{l0}(n)-X_{l1}(n)\big)\Big],\tag{42}$$

where X_{lr}(n), r = 0, 1, 2, are the filtered polyphase sequences of base l. The matrix-based DNA-spectrum formula in (42) is consistent with the result derived using a different approach in [37].
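The real-arithmetic shortcut can be verified directly: replacing D by its real symmetric part D_s, and hence W by W_s, leaves the quadratic form unchanged for real interleaved windows. A toy check (ours) under the rectangular-window assumption:

```python
import numpy as np

M, R = 12, 3
C = np.exp(-2j * np.pi * np.arange(R) / R)
D = np.outer(C.conj(), C)                   # Hermitian: Re(D) symmetric, Im(D) antisymmetric
Ds = D.real                                 # Ds[r, s] = cos(2*pi*(r - s)/R), real circulant
H = np.kron(np.eye(R), np.ones(M // R))
Ws = H.T @ Ds @ H                           # entirely real substitute for W

x = np.random.rand(M)
xt = x.reshape(-1, R).T.ravel()             # block-interleaved window
S_complex = (xt @ (H.conj().T @ (D @ H)) @ xt).real
S_real = xt @ Ws @ xt                       # Eq. (39)/(41): no complex arithmetic
assert np.isclose(S_real, S_complex)
```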
Concluding remarks
In this article, we have introduced a matrix-based framework for locating hidden DNA periodicities using spectral-analysis techniques that is invariant to the choice of the symbolic-to-numerical map. The primary advantage of the presented approach over some of the previous work is the decomposition of the spectrum expression into key matrices whose values can be set independently of one another: the symbolic-to-numerical map, the data transform, and the filtering scheme. The above framework is derived under the assumption that the symbolic-to-numerical map can be obtained from the Voss representation using an affine transformation. This assumption is, however, quite loose, given that most (if not all) of the maps proposed in the literature satisfy this requirement. Using the new framework, we have then shown that the DNA-spectrum expression is invariant under these maps. We have also derived a necessary and sufficient condition for the invariance of the DNA spectrum in terms of the affine transformation matrix A (the b vector in the affine transformation does not affect the DNA spectrum). This condition can serve as the basis for generating novel symbolic-to-numerical maps that preserve the DNA-spectrum expression. Finally, in the latter sections of the article, we have shown the potential of using different filtering schemes (e.g., windows other than the rectangular one) as well as alternative fast data transforms (e.g., the DCT, the DST, and the Hartley transform). A number of simulation results that verify the findings of this article, together with a brief quantitative analysis of the computational complexity of the new approach, were given in the same sections. Future research would consider the optimization of the different building blocks, namely the symbolic-to-numerical map, the data transform, and the filtering scheme. This, in turn, requires a deep understanding of the biological significance of the different DNA periodicities in order to set up a meaningful objective function and appropriate constraints. Ultimately, the framework proposed here can be incorporated into a more sophisticated system to study the complex structure of genomic sequences and understand the functionality of their various components. Finally, this efficient framework can be extended to the analysis of other types of symbolic sequences over various limited alphabets, whether biological (such as protein sequences) or non-biological.
9,784.6
2012-10-15T00:00:00.000
[ "Mathematics" ]
Effect of the modified silica on the conductivity and sensory properties of polyaniline nanocomposites

The introduction of nanosized fillers into composites with conductive polymers makes it possible to control the physical and chemical characteristics of these polymers. Silica nanoparticles have attracted much attention from researchers owing to their remarkable properties, which include a large surface-area-to-volume ratio, excellent chemical stability, low synthesis cost, low toxicity, and, especially, convenient surface modification. Such materials may serve as excellent platforms for the development of smart sensing systems for numerous applications in analytical chemistry and bioanalysis, in medical diagnostics and therapy, in environmental and food analysis, and in security. It is known that the presence of nanosized silica in the structure of hybrid polymeric composites can not only radically change the structure but also lead to improved mechanical characteristics and sorption capacity and to an increase or decrease in specific conductivity. In this work, the method of "in situ" polymerization filling was used to prepare hybrid composites of polyaniline with silica nanoparticles modified by titanium (TAE-7) and phosphorus (P-2.1) compounds, and their morphology, electrical properties, and moisture absorption were studied. The influence of the inorganic-component content on the specific conductivity of the composites, the activation parameters of conduction, and their changes under the action of moisture was investigated. It is shown that a filler content of 1-4% increases the electrical conductivity of the composites and that the incorporation of the modified nanoparticles P-2.1 helps stabilize the resistivity of the nanocomposites at high humidity. A resistivity change of less than 2% was observed throughout the whole range of possible humidity; the obtained modified material can therefore be recommended for use in resistive sensors operating under conditions of high humidity. Moreover, P-2.1 enhances the sensitivity of the polymer matrix to hydrogen chloride vapors. Thus, the possibility of using chemically deposited thin films of the polyaniline/modified-silica nanocomposite for the production of optical gas sensors for various purposes, including monitoring the state of the environment under real atmospheric conditions, is demonstrated.

Introduction
Semiconductor polymers with a system of conjugated electronic bonds are promising materials for electronic technology, since they exhibit interesting optical properties and the ability to convert light energy and are sensitive to chemical and physical influences (Naveen et al., 2017; Pavase et al., 2018; Cichosz et al., 2018; Liu et al., 2019). One of the most important properties of these materials is their ability to change their resistivity over a wide range of values, from insulators to metals. Such changes can be controlled using methods of chemical or electrochemical doping (Awuzie, 2017; Lu et al., 2018). Polyaniline (PAn) is known as one of the most technologically important conductive polymers, owing to its high electrical conductivity, ease of preparation, atmospheric stability, and relatively low cost (Ćirić-Marjanović, 2013). This polymer is of great interest because of the possibility of its application in various high-tech fields, for example, in electrochemical displays, sensors, catalysts, capacitors, and electromagnetic screens, as well as in storage batteries (Fratoddi et al., 2015; Wang et al., 2016; Eftekhari et al., 2017; Tanguy et al., 2018; Liao et al., 2019).
The introduction of nanosized fillers into composites with conductive polymers makes it possible to control the electrical properties of these polymers, their sensitivity, and other physical and chemical characteristics (Seo et al., 2017; Konopelnyk et al., 2017; Chethan et al., 2019). With the rapid development of nanotechnology, silica nanoparticles have attracted much attention from researchers as excellent platforms for the development of smart sensing systems for numerous applications in analytical chemistry and bioanalysis, in medical diagnostics and therapy, in environmental and food analysis, and in security (Wang et al., 2008; Ma et al., 2015; Bapat et al., 2016). Silica nanoparticles possess a unique set of remarkable properties, which include a large surface-area-to-volume ratio, excellent chemical stability, low synthesis cost, low toxicity, and, especially, convenient surface modification (Filonenko et al., 2010; Liberman et al., 2014; Murugadoss et al., 2017). Most of the known works in the field of polymer/silica composites are devoted to the use of pyrogenic aerosil, i.e., silicon(IV) oxide nanoparticles with an unmodified surface, as the filler (Li et al., 2005; Liu, 2008). At the same time, it has been shown that the use of SiO2 nanoparticles modified with metal oxides in hybrid polymer composites can not only change their structure but also improve their mechanical properties (Starokadomskyi et al., 2011). On the other hand, SiO2 and its modifications absorb water very actively because of hydrophilic surface functional groups such as Si-OH (Li et al., 2005; Filonenko et al., 2010). The presence of such fillers in the composite structure can probably stabilize the electrical conductivity by binding excess moisture, which prevents water molecules from participating in the protonation of emeraldine chloride and, accordingly, from changing the electrical conductivity. The choice of gases for studying the sensory properties of the hybrid structures is motivated by the importance of monitoring them in the atmosphere of residential, office, and industrial premises and in food-quality control (Li et al., 2005; Filonenko et al., 2010; Li et al., 2017). It is known that HCl is the most commonly used acid in the chemical industry (Misra et al., 2004; Wang et al., 2018), and in technological processes there remains a probability of the release of hydrogen chloride, a dangerous air pollutant. Because of the toxic properties of HCl in both gaseous and aqueous forms, there is a great need to detect it and determine its concentration. However, currently available HCl detection methods are time-consuming, complex, and expensive, and standard commercially available sensors operate at high temperatures (Aksimentyeva et al., 2015). Polymer semiconductor electronic sensors offer an attractive alternative owing to their potentially low cost, simple packaging, versatility, and compatibility with flexible substrates. In this work, therefore, the influence of the inorganic-component content on the specific conductivity of the composites, the activation parameters of conduction, and their changes under the action of moisture was studied. Silica nanoparticles were tested as a possible modifier of polymer matrices used in resistive and optical gas sensors under real operating conditions, that is, in an atmosphere with natural humidity.

Materials and research methods
Hybrid composites of PAn modified with silica nanoparticles were obtained by the method of polymerization filling (Aksimentyeva et al., 2015).
SiO2 nanoparticles (AE, specific surface area 256 m²/g) and their modifications with titanium(IV) oxide (TAE-7, specific surface area 90 m²/g) and phosphorus(III) chloride (grade P-2.1, specific surface area 124 m²/g), developed at the Chuiko Institute of Surface Chemistry, National Academy of Sciences of Ukraine, were used as fillers for the composites. The technology of their production and their properties are described in (Zarko et al., 1983; Bogatyrev & Chuiko, 1984). Before the synthesis of the composites, 1.4 g of SiO2 powder was added to 60 mL of 1 M HCl solution and sonicated for 8 h. The resulting colloidal dispersion of silica was added in different proportions to the reaction solution containing aniline (Sigma Aldrich). An equimolar amount of ammonium persulfate was then added dropwise to the resulting mixture over 4 h with continuous stirring. During the synthesis, the mixture was cooled to T = −5 °C in a liquid-nitrogen cryostat. PAn without fillers was obtained by oxidative polymerization of aniline in the presence of ammonium persulfate in an aqueous solution of hydrochloric acid, according to the method described in (Aksimentyeva et al., 2015). The obtained precipitate was filtered, washed repeatedly with distilled water to completely remove residual electrolyte, and dried in dynamic vacuum at T = 353 K for 8 h. Optical microscopy of the samples was performed on a "Micromed" microscope; a "Nicon-2500" digital camera was used to obtain the images. The molecular structure of the composites was confirmed by FTIR spectroscopy on an "Avatar" spectrometer, using samples pressed into KBr pellets. Cathodoluminescence (CL) was excited by a pulsed electron flux with energies up to E_p = 9 keV, a pulse frequency f = 50 Hz, and a pulse duration τ_imp = 3 µs; the current density in the electron beam reached j = 1500 A/m². CL measurements were performed at a temperature of 78 K. The electrical conductivity of the polymer composites was studied on pressed samples by the standard two-contact method at T = 293 K; the temperature dependence of the resistance was measured according to (Aksimentyeva et al., 2002) in the range T = 293-373 K. To study the moisture-absorption behavior of the samples, they were kept in a sealed chamber with controlled humidity, set by the vapor pressure of sulfuric acid solutions of different concentrations (Lurie, 1971). The amount of absorbed moisture was determined gravimetrically: the samples were weighed before placement in the humid chamber and then at regular intervals during exposure. The relative moisture absorption was determined as the ratio of the mass difference to the initial weight of the sample. In order to investigate the sensory properties, thin-film elements were obtained by chemical deposition of the sensitive layer of PAn or of the PAn/P-2.1 composite onto an optically transparent carrier during the polymerization process, according to the method described above. A cleaned and degreased glass plate covered with a semiconducting SnO2 layer, of size 10 × 30 × 0.5 mm, was kept in the reaction mixture for 10 minutes. The resulting polymer films were then washed with distilled water and dried in air at room temperature. The obtained samples were kept for a fixed time (0.5-3 min) in a hermetic glass chamber with HCl vapor.
Results and Discussion
Hybrid composites of polyaniline with silica were formed using one of the nano-chemical approaches, namely "in situ" polymerization filling, which involves the formation of the composite by oxidative polymerization in the presence of the nanosized filler in the reaction mixture. Ultrasonic processing of the silica yielded a fine dispersion of nanoparticles which, when deposited on a solid surface, forms a uniform, dense coating (Fig. 1a, c).

Fig. 1. Micrographs of the colloidal dispersion surface of silica TAE-7 (a) and P-2.1 (c) and of the surface of PAn/TAE-7 (b) and PAn/P-2.1 (d). The silica content is 4 wt.%; magnification ×600.

The obtained micrographs show that the morphology of the TAE-7 surface has a distinct texture (Fig. 1a), while the P-2.1 particles preferably have a spherical shape (Fig. 1c). As a result of polymerization filling, composites with a mainly globular structure are formed, in which the silica particles are surrounded by a PAn polymer shell (Fig. 1b, d). The bright green color of the obtained composites indicates that the polymer is in the acid-doped form of PAn, the emeraldine salt of hydrochloric acid. According to FTIR spectroscopy, absorption bands typical of the emeraldine salt were established at 3400, 3030, 1600, 1480, 1250, 824, and 744 cm⁻¹ (Fig. 2). The characteristic peaks at 1460 and 1600 cm⁻¹ can be attributed to stretching vibrations of the C=C bonds of the benzenoid and quinoid rings, respectively. The peak at 1250 cm⁻¹ corresponds to a stretching vibration of the C-N bonds of secondary aromatic amines. The bands at 3425 and 3230 cm⁻¹ correspond to N-H and O-H vibrations. The peak at 1100 cm⁻¹ is typical of Si-O vibrations in silica nanoparticles, in good agreement with literature data (Li et al., 2005). The incorporation of the silica nanoparticles into the PAn matrix, as well as the structural differences between TAE-7 and P-2.1, is also confirmed by cathodoluminescence (Fig. 3).

Fig. 3. Cathodoluminescence spectra of the modified silica TAE-7 (a) and P-2.1 (b) at room temperature.

It is known that the presence of nanosized silica in the structure of hybrid polymeric composites can not only radically change the structure but also lead to improved mechanical characteristics and sorption capacity and to an increase or decrease in specific conductivity (Li et al., 2005; Pacher et al., 2010). To study the effect of the filler on the electrical properties of the composites, samples with different contents of colloidal silica were synthesized. Measurements of the resistance of the obtained composites at room temperature showed that an AE content within 0.8-2.4 wt.% decreases the resistance of the composite compared with the polymer without filler, whereas for an AE content within 3.2-4 wt.% a slight growth of the resistance was observed. At filler concentrations above 4%, a sharp increase in resistance took place (Table 1). A similar concentration dependence is observed when the modified silicas are used as the nanosized fillers. However, the reduction of the resistivity of the composites is more pronounced: at a filler content of 4%, the resistance falls almost 3.5-fold for the composite with TAE-7 and 10-fold for the composite with P-2.1.
(Table 1 excerpt: resistivity values of 1.5×10⁶, 9.8×10⁵, and 1.1×10⁵.)

The increase in the conductivity of HCl-doped PAn in the presence of silica may be caused by the formation of a nanostructured conducting network in the conjugated polymer, owing to the structuring of the silica colloidal dispersion into a three-dimensional spatial grid (Goncharuk et al., 2010). The resulting hybrid composites behave like typical semiconductors, namely, with increasing temperature their resistance decreases (Fig. 4). Presenting these data in the coordinates of the activation equation ρ = ρ₀ exp(E_a/2kT), i.e., as the dependence of the logarithm of the resistivity on the inverse temperature 1/T, allows the activation energy of charge transfer E_a to be calculated. According to these calculations, the value of E_a in the composites varies only slightly compared with PAn: E_a is 0.127 ± 0.005 eV for the PAn sample, 0.124 ± 0.005 eV for PAn/AE, 0.132 ± 0.005 eV for PAn/TAE-7, and 0.107 ± 0.003 eV for PAn/P-2.1 at 2.4% loading.
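For readers wishing to reproduce the activation-energy estimate, ln ρ plotted against 1/T is a straight line of slope E_a/2k under the activation equation above. The sketch below fits synthetic data generated with the reported PAn value E_a ≈ 0.127 eV (the prefactor is arbitrary and ours); it illustrates only the fitting procedure, not the authors' measurements.

```python
import numpy as np

k_B = 8.617e-5                              # Boltzmann constant in eV/K

# Synthetic resistivity data following rho = rho_0 * exp(E_a / (2 k T));
# E_a = 0.127 eV is the reported PAn value, rho_0 is arbitrary.
T = np.linspace(293, 373, 9)
rho = 1.0e3 * np.exp(0.127 / (2 * k_B * T))

# ln(rho) is linear in 1/T with slope E_a / (2 k):
slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
E_a = 2 * k_B * slope
print(f"recovered E_a = {E_a:.3f} eV")      # ~0.127 eV
```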
This spectrum is typical of PAn containing a large number of reduced amino-quinoid fragments, and the band itself is a superposition of the optical absorption of PAn at varying degrees of oxidation-reduction. It has been established that the influence of HCl vapours on the optical spectrum of unmodified PAn is insignificant (Fig. 7a), since the polymer is in the form of emeraldinium chloride with an equilibrium degree of doping, and all amino groups available for interaction with hydrogen chloride are occupied. A completely different behavior is observed for PAn/P-2.1 films (Fig. 7b). As the exposure time of the sample in the atmosphere of gaseous HCl increases, the absorption maximum shifts toward longer wavelengths, Δλ = 120 nm, and its intensity increases by ΔD = 0.081 (11.7%). Moreover, significant changes can already be traced after 30 s of exposure to HCl vapor. The high sensitivity of PAn/P-2.1 may be due to several factors. The first is an increase in the contact area of the sensing medium with the analyte, and hence in its adsorption capacity, owing to the inclusion of nanodispersed particles in the polymer matrix. The second is the surface properties of the modifier P-2.1 itself: centers with basic properties, such as hydroxyl and amino groups as well as phosphine residues, are expected to exist on the silica surface. The ability of the surface functional groups to form chemical bonds with HCl molecules, in particular salt forms, causes the change in the optical absorption of the PAn/P-2.1 samples.

Conclusions

It is shown that a silica content of 1-4% increases the conductivity of the composite; moreover, the introduction of silica contributes to the stabilization of the polyaniline resistance at high humidity (ψ > 70%). The presence of silica particles leads to a significant increase in the moisture stability of the samples, most pronounced when silica of brand P-2.1 is used. A resistivity change of less than 2% was observed throughout the whole range of possible humidity; the obtained modified material can therefore be recommended for use in resistive sensors operating under conditions of high humidity. Thus, in this work the influence of the content of the inorganic component in the composites on their specific conductivity, the activation parameters of conductivity, and their changes under the action of moisture were studied. It has been established that the introduction of modified P-2.1 silica nanoparticles into the polyaniline matrix enhances the sensitivity of the composites to HCl vapors. This effect can be used for the development of gas sensors.

Prospects for further research

Based on the obtained data, the possibility of using chemically deposited thin films of the polyaniline/modified-silica nanocomposite for the production of gas sensors for various purposes, including controlling food freshness and monitoring the state of the environment under real atmospheric conditions, is shown. The next stage of the research will be to improve the sensory properties by optimizing the synthesis conditions of the hybrid composites, as well as to study their sensitivity to other gases (hydrogen sulfide, nitrogen oxides, etc.). This work was supported by the project of the Ministry of Education and Science of Ukraine "Development of organo-inorganic thin film reversible structures for multifunctional gas sensors" (state registration number 0118u003496).
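As a side note to the activation analysis above, extracting E_a from the measured temperature dependence amounts to a straight-line fit of ln ρ against 1/T. The following C sketch performs that least-squares fit under the relation ρ = ρ₀ exp(E_a/2kT); the temperature and resistivity values are hypothetical placeholders, not data from this work.

```c
#include <math.h>
#include <stdio.h>

#define KB 8.617e-5   /* Boltzmann constant, eV/K */

/* Fit ln(rho) = ln(rho0) + (Ea / 2k) * (1/T) by least squares and
   return Ea in eV; n is the number of (T, rho) points. */
double activation_energy(const double *T, const double *rho, int n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        double x = 1.0 / T[i];       /* inverse temperature */
        double y = log(rho[i]);      /* log resistivity */
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return 2.0 * KB * slope;         /* Ea = 2k * slope */
}

int main(void) {
    /* Hypothetical (T, rho) pairs over the 293-373 K range used above. */
    double T[]   = {293, 313, 333, 353, 373};
    double rho[] = {9.8e5, 6.1e5, 4.0e5, 2.8e5, 2.0e5};
    printf("Ea = %.3f eV\n", activation_energy(T, rho, 5));
    return 0;
}
```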
4,454.8
2019-04-23T00:00:00.000
[ "Materials Science" ]
Chlamydia trachomatis and human herpesvirus 6 infections in ovarian cancer—Casual or causal? Ovarian cancer is one of the most lethal gynecological malignancies in the world. In the United States, more than 20,000 cases of ovarian cancer are diagnosed on average every year, causing more than 14,000 deaths per year (www.cancer.org). This high mortality arises predominantly from the silent nature of the disease. Ovarian cancer is diagnosed mostly in the late stages, thus earning the disease its name, "silent killer." It is therefore of utmost importance to identify any markers that will allow early detection of ovarian cancer. The National Cancer Institute associates 15%-20% of all cancers with infectious agents. Studies in the past have shown the presence of several viral and bacterial markers in ovarian cancer samples [1, 2]. Understanding the molecular mechanisms of pathogenesis of these oncogenic pathogens may therefore enable early intervention in the treatment and care of ovarian cancer patients. In this brief review, we endeavor to highlight the role that coinfection with human herpesvirus 6 (HHV-6) and Chlamydia trachomatis may play in the initiation and progression of ovarian cancer, and propose a theory that may justify their presence in ovarian cancer tissues, thus enabling a directed therapeutic approach. C. trachomatis is an obligate intracellular, gram-negative bacterium that is transmitted sexually. More than 2.8 million cases are registered in the US alone [3]. However, the actual number is believed to be much higher, owing to the asymptomatic nature of most C. trachomatis infections. C. trachomatis has a 48-72 hour life cycle in which it infects the cell, replicates, and exits by host cell lysis. During its developmental cycle, C. trachomatis alternates between two forms: infectious elementary bodies (EB) and replicative reticulate bodies (RB). Its presence in the cell is confined to a vacuole termed the inclusion. One characteristic of C. trachomatis infection is its ability to persist in an individual for months to years. It modulates host-cell signaling pathways, interacts with various organelles, and evades apoptosis to enable the completion of its developmental cycle [4]. In its pursuit of survival, however, C. trachomatis infection induces reactive oxygen species (ROS) production via the NADPH and NOD-like receptor family member X1 (NLRX1) pathways [5]. ROS further lead to oxidative damage of DNA, which is then repaired by the base excision repair (BER) and nucleotide excision repair (NER) pathways. Recent studies have shown that C. trachomatis impairs BER of damaged DNA by down-regulating polymerase beta [6]. Deficiency in the BER pathway enables cells to acquire tumorigenic properties [7]. Inefficient BER leads to accumulation of single-strand breaks, which eventually lead to double-strand breaks in the DNA [8]. Telomeres, the protective molecular caps on chromosomes, are damaged through induced telomere shortening during C. trachomatis infection [9]. C. trachomatis also affects the DNA damage response and the associated signaling of DNA double-strand break and telomere repair [10-12]. During C. trachomatis infection, the host cell thus encounters DNA damage and suffers impaired repair, giving rise to the underlying foundation of a prominent cancer hallmark: genomic instability. HHV-6 is a betaherpesvirus with a double-stranded DNA genome. It infects nearly every individual by the age of 2 years.
Its unique ability to integrate into host telomeres enables it to maintain lifelong latency in the infected individual. It mediates this integration through homologous recombination between its direct repeat (DR) sequences and host telomeric sequences. This integrated state is termed chromosomally integrated HHV-6 (ciHHV-6) [13]. The integrated virus can be transmitted vertically in a Mendelian fashion and is then termed inherited chromosomally integrated HHV-6 (iciHHV-6). iciHHV-6 occurs in 1% of the general population, in whom at least one copy of the virus is present in every nucleated cell of the body [14]. This integrated virus may reactivate later in the lifetime of an individual through a telomere-circle formation mechanism, which causes the excision of the virus and its replication and/or transcription [9]. HHV-6 reactivation can occur for many reasons, predominantly stress and immunosuppression. Reactivation of HHV-6 is associated with a wide range of disorders [15-17]. Interestingly, DR sequences are able to integrate into the host genome even in the absence of the viral genome. Both in vivo and in vitro studies have shown that viral DRs are capable of integrating into telomeric, as well as nontelomeric, regions of host chromosomes [18]. Here, the viral DRs were shown to integrate into the intronic regions of the genes encoding the angiogenesis factor AGGF1 and the G alpha interacting protein GAIP [18]. Integration of viral elements into intronic regions may lead to enhanced gene expression [19]. This transposon-like feature of the HHV-6 DR bears the potential of disrupting the regulation of important genes of the human genome. The randomness of DR integration makes it an even more lethal cause of genomic instability. Recently, early reactivation or transactivation of HHV-6 has been highlighted by the identification of small noncoding viral RNAs (sncRNAs) and their effect on the host transcriptome [20]. The viral DR-encoded DR7 protein is known to bind the tumor suppressor p53 and inhibit its nuclear translocation by sequestering it in the cytoplasm. This strategy of HHV-6 to evade apoptosis may suffice as an initial trigger toward tumorigenesis [21]. C. trachomatis and HHV-6 share an interesting dynamic of coinfection. Coinfection of a C. trachomatis-infected cell with HHV-6 induces C. trachomatis persistence in vitro [22], whereas C. trachomatis infection of a latent HHV-6 cell line induces reactivation of the virus [9]. Both scenarios are detrimental to the genome stability of the host cell. Persistence of C. trachomatis would mean DNA damage over an extended period of time, whereas reactivation of the virus may induce the production of viral sncRNAs, and random DR integration may severely hamper genome stability (Fig 1). Although C. trachomatis has been associated with ovarian cancer for nearly a decade now, it is mostly studied in its active infectious state; the persistence model of C. trachomatis is seldom focused upon by researchers. Time and again, epidemiological studies employing extensive controls have linked past C. trachomatis infections to ovarian cancer [23]. A recent study employed a PathoChip array to identify various pathogenic signatures in ovarian cancer samples; the hybridization signal to pathogen genomic material was compared with both matched and unmatched control samples. Astonishingly, high HHV-6 signals were detected in ovarian cancer but not in either of the control samples. Chlamydia was present with a low prevalence in the same study [1].
Could these pathogens act synergistically and bring about transformation in ovarian cells? Several studies have reported that pathogens do co-occur and coinfect, and such coinfections are implicated in different types of cancer. C. trachomatis has been known to be an important factor in determining the course of human papillomavirus (HPV) infection, and C. trachomatis/HPV coinfection may cause cervical cancer [24-26]. Plasmodium falciparum and Epstein-Barr virus (EBV) coinfection is implicated in Burkitt lymphoma in children in equatorial Africa [27]. Helicobacter pylori and hepatitis C virus (HCV) are often implicated as coinfecting pathogens in a range of abnormalities, including liver cirrhosis, non-Hodgkin's lymphoma, and gastric adenocarcinoma [28-30]. However, there is currently no study focusing on HHV-6 and C. trachomatis coinfection in cancer samples. It is probably time to strip HHV-6 of its "benign" label and consider its coinfection with C. trachomatis and/or other pathogens for further in-depth studies. Identification of the prevalence rates of coinfection in ovarian cancer samples may enable researchers to step up in vitro studies and move toward more robust models for studying the molecular pathogenesis of coinfection. C. trachomatis down-regulates p53 by various mechanisms to evade apoptosis [31,32]. Hence, therapies directed toward stabilizing p53 during infection could be further explored to reduce C. trachomatis-induced onset of ovarian cancer. C. trachomatis also changes the miRNA profile of the host cell, for example by up-regulating miR-30c or miR-499a, which target DRP-1 and polymerase beta, respectively [6,33]. Both miRNAs also target p53. Therefore, research on miRNA inhibitors as a preventive measure during infection could be considered as another approach. The strong correlation of past C. trachomatis infection with ovarian cancer, combined with the near absence or low prevalence of the pathogen in the cancer tissue, suggests the ability of this pathogen to alter cells in a way that escalates toward transformation even after the pathogen is cleared. Down-regulation of p53 and induction of DNA damage are characteristics of C. trachomatis infection that fit almost perfectly with this hypothesis. However, a preexisting genomic malady such as iciHHV-6 could further enhance the magnitude of C. trachomatis-induced genomic instability and mediated oncogenesis. C. trachomatis causes global heterochromatin formation in the host genome [10]. Therefore, when most of the genome is inaccessible, HHV-6 reactivation during C. trachomatis infection may lead to DR integration at chromosomal regions that are "active" or accessible. Genes that are up-regulated during C. trachomatis infection therefore form tangible targets for DR integration. Genetic counseling for iciHHV-6 status, owing to the hazardous nature of DR integration, should therefore be considered for predisposed individuals. One additional marker enabling early detection of ovarian cancer would go a long way in reducing the burden of the disease and allowing a directed therapeutic approach. Decades have passed since the Hippocratic dyad explained that health is achieved by man-environment harmony, and that dyad has since been upgraded to a triad to include the etiological agent. Although many infectious agents causing cancer, such as HPV, EBV, or Helicobacter pylori, have been well studied in terms of the molecular mechanisms by which they cause cancer, others, like C. trachomatis and HHV-6, are merely, albeit strongly, associated with cancer.
It is perhaps time to design more comprehensive studies and harness "omics" approaches to understand the possibility of these coinfections in ovarian cancer and subsequently identify the molecular mechanisms.
2,227.4
2019-11-01T00:00:00.000
[ "Medicine", "Biology" ]
Quadratic Relations of the Deformed W-Algebra for the Twisted Affine Lie Algebra of Type $A^{(2)}_{2N}$. We revisit the free field construction of the deformed W-algebra by Frenkel and Reshetikhin [Comm. Math. Phys. 197 (1998), 1–32], where the basic W-current has been identified. Herein, we establish a free field construction of higher W-currents of the deformed W-algebra associated with the twisted affine Lie algebra $A^{(2)}_{2N}$. We obtain a closed set of quadratic relations and a duality, which allows us to define the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ using generators and relations.

Introduction

The deformed W-algebra $\mathcal{W}_{x,r}(\mathfrak{g})$ is a two-parameter deformation of the classical W-algebra $\mathcal{W}(\mathfrak{g})$. The deformation theory of the W-algebra has been studied in the papers [2,3,4,5,6,8,10,12,13,14,16,17]. For instance, free field constructions of the basic W-current $T_1(z)$ of $\mathcal{W}_{x,r}(\mathfrak{g})$ were suggested in the case when the underlying Lie algebra is of classical type. However, in comparison with the conformal case, the deformation theory of W-algebras is still not fully developed and understood. Moreover, finding quadratic relations of the deformed W-algebra $\mathcal{W}_{x,r}(\mathfrak{g})$ is still an unresolved problem. In this paper, we generalize the study of $\mathcal{W}_{x,r}(A^{(2)}_2)$ by Brazhnikov and Lukyanov [3]. They obtained a quadratic relation for the W-current $T_1(z)$ of the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_2)$ with an appropriate constant $c$ and a function $f(z)$. This study aims to generalize that result from $A^{(2)}_2$ to $A^{(2)}_{2N}$. We introduce higher W-currents $T_i(z)$, $1 \le i \le 2N$, by fusion of the free field construction of the basic W-current $T_1(z)$ of $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ [8] (see formula (3.2)). We obtain a closed set of quadratic relations for the W-currents $T_i(z)$, which is completely different from those in the case of deformed W-algebras associated with affine Lie algebras of types $A^{(1)}_N$ and $A(M,N)^{(1)}$ (see formula (3.4)). (We use two types of symbols, $\mathcal{W}_{x,r}(\mathfrak{g})$ and $\mathcal{W}_{x,r}(X^{(r)}_n)$, for the deformed W-algebra associated with the affine Lie algebra $\mathfrak{g}$ of type $X^{(r)}_n$.) We refer the reader to references [18,19] for the affine Lie superalgebra notation. We obtain the duality $T_{2N+1-i}(z) = c_i T_i(z)$ with $1 \le i \le N$, which is a new phenomenon that does not occur in the case of deformed W-algebras associated with affine Lie algebras of types $A^{(2)}_2$, $A^{(1)}_N$, and $A(M,N)^{(1)}$ (see formula (3.3)). This allows us to define $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ using generators and relations. We believe that this paper presents a key step toward extending our construction to general affine Lie algebras $\mathfrak{g}$, because the structures of the free field construction of the basic W-current $T_1(z)$ for affine algebras other than type $A^{(1)}_N$ are quite similar to those of type $A^{(2)}_{2N}$, not $A^{(1)}_N$. We have checked that quadratic relations similar to those for type $A^{(2)}_{2N}$ hold in the case of type $B^{(1)}_N$ with small rank $N$. The remainder of this paper is organized as follows. In Section 2, we review the free field construction of the basic W-current $T_1(z)$ of the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ [8]. In Section 3, we introduce the higher W-currents $T_i(z)$ and present a closed set of quadratic relations and the duality. We also obtain the q-Poisson algebra in the classical limit. In Section 4, we establish proofs of Proposition 3.1 and Theorem 3.2. Section 5 is devoted to discussion. In Appendices A and B, we summarize normal ordering rules.
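For orientation, the kind of quadratic relation meant here can be illustrated by the well-known defining relation of the deformed Virasoro algebra (the $A^{(1)}_1$ case of Shiraishi, Kubo, Awata, and Odake), quoted below up to convention-dependent signs. It is shown only as an illustration of the general shape "f T T minus f T T equals delta-function terms", not as the $A^{(2)}_2$ relation of [3].

```latex
% Deformed Virasoro algebra (type A^{(1)}_1) quadratic relation, quoted up to
% convention-dependent signs as an illustration of the general shape of such
% relations; here p = q t^{-1} and \delta(z) = \sum_{n \in \mathbb{Z}} z^n.
\begin{aligned}
 f\!\Bigl(\frac{w}{z}\Bigr) T(z)\,T(w) - f\!\Bigl(\frac{z}{w}\Bigr) T(w)\,T(z)
 &= \frac{(1-q)\bigl(1-t^{-1}\bigr)}{1-p}
    \Bigl[\delta\!\Bigl(\frac{p\,w}{z}\Bigr) - \delta\!\Bigl(\frac{w}{p\,z}\Bigr)\Bigr],\\
 f(z) &= \exp\!\Bigl(\sum_{n \ge 1} \frac{(1-q^{n})(1-t^{-n})}{1+p^{n}}\,\frac{z^{n}}{n}\Bigr).
\end{aligned}
```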
Free field construction

In this section, we fix notation and review the free field construction of the basic W-current $T_1(z)$ of $\mathcal{W}_{x,r}(A^{(2)}_{2N})$. Throughout this paper, we fix a natural number $N = 1, 2, 3, \ldots$, a real number $r > 1$, and a complex number $x$ with $0 < |x| < 1$.

Notation

In this section, we use complex numbers $a$, $w$, $q$, and $p$ with $w \neq 0$, $q \neq 0, \pm 1$, and $|p| < 1$. For any integer $n$, we define the q-integers
$$[n]_q = \frac{q^n - q^{-n}}{q - q^{-1}}.$$
We use the standard symbol for infinite products, $(a; p)_\infty = \prod_{k \ge 0} (1 - a p^k)$, and the elliptic theta function $\Theta_p(w)$ with the compact notation $\Theta_p(w_1, w_2, \ldots, w_N)$:
$$\Theta_p(w) = (p,\, w,\, p w^{-1};\, p)_\infty, \qquad \Theta_p(w_1, \ldots, w_N) = \Theta_p(w_1) \cdots \Theta_p(w_N).$$

Next we recall the definition of the twisted affine Lie algebra of type $A^{(2)}_{2N}$, $N = 1, 2, 3, \ldots$, following [11]. The Dynkin diagram of type $A^{(2)}_{2N}$ and the corresponding Cartan matrix $A$ are as given there. We set the labels $a_i = 2$, $0 \le i \le N - 1$, $a_N = 1$, and the corresponding co-labels. We obtain $A = DB$, where $B$ is a symmetric matrix; thus the Cartan matrix $A$ is symmetrizable. Let $\mathfrak{h}$ be an $(N+2)$-dimensional vector space over $\mathbb{C}$. Let $\{h_0, h_1, \ldots, h_N, d\}$ be a basis of $\mathfrak{h}$, and $\{\alpha_0, \alpha_1, \ldots, \alpha_N, \Lambda_0\}$ a basis of $\mathfrak{h}^* = \mathrm{Hom}_{\mathbb{C}}(\mathfrak{h}, \mathbb{C})$, related through the pairing $\langle \cdot, \cdot \rangle: \mathfrak{h}^* \times \mathfrak{h} \to \mathbb{C}$. Let $\mathfrak{g}(A)$ be the affine Lie algebra associated with the Cartan matrix $A$. Since $A$ is symmetrizable, it is defined as the Lie algebra generated by $e_i$, $f_i$, $0 \le i \le N$, and $\mathfrak{h}$ with the Chevalley-Serre relations; here we use the adjoint action $(\mathrm{ad}\, x)y = [x, y]$.

Free field construction

In this subsection, we recall the free field construction of the basic W-current $T_1(z)$ and of the screening operators $S_i$ of the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ introduced by Frenkel and Reshetikhin [8]. First, we define the $N \times N$ symmetric matrix $B(m) = (B_{i,j}(m))_{i,j=1}^{N}$, $m \in \mathbb{Z}$, associated with $A^{(2)}_{2N}$, $N = 1, 2, 3, \ldots$. We introduce the Heisenberg algebra $\mathcal{H}_{x,r}$ with generators $a_i(m)$ and $Q_i$, whose nonvanishing commutation relations are expressed through the matrix $B(m)$; the remaining commutators vanish. The generators $a_i(m)$, $Q_i$ are "root"-type generators of $\mathcal{H}_{x,r}$. There is a unique set of "fundamental weight"-type generators $y_i(m)$, $Q^y_i$, $m \in \mathbb{Z}$, $1 \le i \le N$, dual to them; the explicit formulas for $y_i(m)$ and $Q^y_j$ are given in (A.7). We use the normal ordering $:\ :$ on $\mathcal{H}_{x,r}$. Let $|0\rangle \neq 0$ be the Fock vacuum of the Fock space of $\mathcal{H}_{x,r}$ such that $a_i(m)|0\rangle = 0$, $m \ge 0$, $1 \le i \le N$. Let $\pi_\lambda$ be the Fock space of $\mathcal{H}_{x,r}$ generated by $|\lambda\rangle = e^\lambda |0\rangle$, $\lambda = \sum_{j=1}^{N} \lambda_j Q^y_j$. We work in the Fock space $\pi_\lambda$ of the Heisenberg algebra $\mathcal{H}_{x,r}$.
Let the vertex operators $A_i(z)$, $Y_i(z)$, and $S_i(z)$, $1 \le i \le N$, be given by (2.2), (2.3), and (2.4). The main parts of (2.2), (2.3), and (2.4) are the same as those of [8]; we corrected misprints in the formulas for $A_i(z)$, $Y_i(z)$, and $S_i(z)$ of [8]. Let $\bar{\bar{k}} = k$, $k = 1, 2, \ldots, N$, and $\bar{0} = 0$. The indices $i, j \in J_N$ satisfy $i \prec j$ if and only if $\bar{j} \prec \bar{i}$. We define $\bar{I} = \{\bar{i}_k, \ldots, \bar{i}_2, \bar{i}_1\}$ for a subset $I = \{i_1, i_2, \ldots, i_k\} \subset J_N$. Let $T_1(z)$, defined by (2.5), be the generating series with operator-valued coefficients acting on the Fock space $\pi_\lambda$; we call $T_1(z)$ the basic W-current of the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$. Let $\pi_\mu$ be the Fock space of $\mathcal{H}_{x,r}$ generated by $|\mu\rangle = e^\mu |0\rangle$, chosen so that the relevant zero-mode operator takes integer values on $\pi_\mu$; hence $S_i$ is well defined on $\pi_\mu$. We define the screening operators $S_i$, $1 \le i \le N$, acting on the Fock space $\pi_\mu$, by (2.6); the integral in formula (2.6) means the residue at zero.

Quadratic relations

In this section, we introduce the higher W-currents $T_i(z)$ and present a set of quadratic relations between the $T_i(z)$ for the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$.

Quadratic relations

We define the formal series $\Delta(z) \in \mathbb{C}[[z]]$ and the constant $c(x, r)$, together with the relations that $\Delta(z)$ satisfies. We define the structure functions $f_{i,j}(z)$, $i, j = 0, 1, 2, \ldots$, and note the ratio of the structure functions $f_{1,1}(z)$. We introduce the higher W-currents $T_i(z)$ by (3.2): for a subset $\Omega_i = \{s_1, s_2, \ldots, s_i\} \subset J_N$ with $s_1 \prec s_2 \prec \cdots \prec s_i$, we set the corresponding fused product.

Proposition 3.1. The W-currents $T_i(z)$ satisfy the duality (3.3).

Theorem 3.2. The W-currents $T_i(z)$ satisfy the closed set of quadratic relations (3.4).

In view of Proposition 3.1 and Theorem 3.2, we obtain the following definition.

Definition 3.3. Let $\mathcal{W}$ be the free complex associative algebra generated by the coefficients of the generating series $T_i(z)$. The algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ is the quotient of $\mathcal{W}$ by the two-sided ideal generated by the coefficients of the generating series which are the differences of the right-hand sides and the left-hand sides of the relations (3.3) and (3.4).

The justification of this definition is presented later. We compare this definition of the deformed W-algebra with other definitions in Section 5. We present the proofs of Proposition 3.1, Theorem 3.2, and Lemma 3.4 in Section 4.

Classical limit

The deformed W-algebra $\mathcal{W}_{x,r}(\mathfrak{g})$ yields a q-Poisson W-algebra [7,8,9,15] in the classical limit. As an application of the quadratic relations (3.4), we obtain a q-Poisson W-algebra of type $A^{(2)}_{2N}$. We set the parameters $q = x^{2r}$ and $\beta = (r-1)/r$, and define the q-Poisson bracket $\{\cdot,\cdot\}$ by taking the classical limit $\beta \to 0$ with $q$ fixed. The $\beta$-expansions of the structure functions then yield the Poisson structure functions $C_{i,j}(z)$. As corollaries of Proposition 3.1 and Theorem 3.2 we obtain the following.

Corollary 3.5. In the classical limit of $\mathcal{W}_{x,r}(A^{(2)}_{2N})$, the currents $T^{\mathrm{PB}}_i(z)$ satisfy Poisson-bracket relations with the structure functions $C_{i,j}(z)$.

Corollary 3.6. The currents $T^{\mathrm{PB}}_i(z)$ satisfy the duality relations.

4 Proof of Theorem 3.2

In this section, we prove Proposition 3.1, Theorem 3.2, and Lemma 3.4.

Proof of Proposition 3.1

Proof. Using (A.2) and (A.8), we obtain the normal ordering rules (4.1). ■

Proof. We show (4.6) here. From the definitions and the relation between the structure functions, we obtain $f_{1,i}(z)$ on the right-hand side of the previous formula. We obtain (4.5), (4.7), (4.8), and (4.9) by straightforward calculation from the definitions. Using (4.5) and (4.6), we obtain the relations (4.10), (4.11), and (4.12). ■

Lemma 4.4. The following relation holds for $A \subset J_N$: (4.13)

Proof.
First, we consider the case $A = \emptyset$ and $J_N \setminus A = J_N$. In this case, (4.13) can be rewritten as (4.14). Using (2.2), (2.3), and (2.5), the left side of (4.14) can be written out explicitly; the generators $y_1(m)$ in (A.7) are expressed through the $a_j(m)$. Hence, we obtain (4.14). Next, we show (4.13) for $A \subset J_N$. Case (i), $0 \in A$, and case (ii), $0 \notin A$, are proved separately. First, we study case (i), $0 \in A$. Multiplying (4.14) by $\overrightarrow{\Lambda}_A x^{L-K+1} z$ on the left, and using (4.1) and (4.7), then (4.2), (4.3), and (4.4), yields five relations; combining them gives (4.15), from which we obtain (4.13) for $0 \in A$. Next, we study case (ii), $0 \notin A$. The proof for this case is similar to that of case (i). Multiplying (4.14) by $\overrightarrow{\Lambda}_A x^{L-K} z$ on the left, and using (4.1) and (4.7), yields the analogous five relations; combining them gives (4.16), from which we obtain (4.13) for $0 \notin A$. ■

Lemma 4.5. The following relation holds for $A \subset J_N$ with $|A| \le N$: (4.17)

Proof. We define the map $\sigma$, whence the corresponding relation holds. We prove (4.17) by induction on $N$. First, we establish the base case $N = 1$ by case-by-case verification. This implies that (4.17) holds for $N = 1$.

Conclusion and discussion

In this paper, we obtained the free field construction of the higher W-currents $T_i(z)$, $i \ge 2$, of the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$. We obtained a closed set of quadratic relations for the W-currents $T_i(z)$, which are completely different from those in types $A^{(1)}_N$ and $A(M,N)^{(1)}$. The quadratic relations of $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ do not preserve "parity", though those of $\mathcal{W}_{x,r}(A^{(1)}_N)$ and $\mathcal{W}_{x,r}(A(M,N)^{(1)})$ do; here we define the "parity" of $T_i(z)T_j(w)$ as $i + j$. We obtained the duality, a new structure that does not occur in types $A^{(1)}_N$ and $A(M,N)^{(1)}$. This allowed us to define the deformed W-algebra $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ using generators and relations, similarly to the definition of the twisted affine Lie algebra of type $A^{(2)}_{2N}$ given in Section 2. We also justified our definition of the deformed W-algebra of type $A^{(2)}_{2N}$. We compare Definition 3.3 with other definitions. In [8], the deformed W-algebras of types $A^{(1)}_N$ and $A^{(2)}_{2N}$ were proposed as the intersection of the kernels of the screening operators. We recall the definition based on the screening operators for $A^{(2)}_{2N}$: let $\mathcal{H}_{x,r}$ be the vector space spanned by the formal power series currents of the appropriate form. We also propose another definition of the deformed W-algebra based on (3.5). In this study, our definitions of $\mathcal{W}_{x,r}(A^{(2)}_{2N})$ were based on generators and relations. We have thus introduced three definitions of the deformed W-algebra for the twisted algebra of type $A^{(2)}_{2N}$, and we conjecture that they are isomorphic as associative algebras (5.1). The author believes that this conjecture can be extended to arbitrary affine Lie algebras. Some necessary conditions for the isomorphism (5.1) in Conjecture 5.1 can be indicated immediately. From (3.5), we obtain an inclusion, and we establish a homomorphism of associative algebras $\varphi$. If we assume that $\varphi$ is injective, the isomorphism on the left side of (5.1) is obtained; in other words, no independent relations other than (3.3) and (3.4) exist in $\mathcal{W}_{x,r}(A^{(2)}_{2N})$. We propose two results to support this claim. In the classical limit, the second Hamiltonian structure $\{\cdot,\cdot\}$ of the q-Poisson algebra [7,8,9,15] was obtained from the quadratic relations (see (3.6) and (3.7)).
In the conformal limit, all defining relations of the W-algebra $\mathcal{W}_\beta(A^{(1)}_N)$, $N = 1, 2$, are obtained from the quadratic relations of $\mathcal{W}_{x,r}(A^{(1)}_N)$ under the assumption that the currents $T_i(z)$ have the form of an expansion in a small parameter $\hbar$ (see [1, Appendix]). The definition of the deformed W-algebra $\mathcal{W}_{x,r}(\mathfrak{g})$ for a non-twisted affine Lie algebra $\mathfrak{g}$ was formulated in terms of the quantum Drinfeld-Sokolov reduction in [16]. Formulating the definition of the deformed W-algebras $\mathcal{W}_{x,r}(\mathfrak{g})$ in terms of the quantum Drinfeld-Sokolov reduction for twisted affine Lie algebras or affine Lie superalgebras [4,6,10,12,13] is still a problem that needs to be solved. It remains an open challenge to identify quadratic relations of the deformed W-algebras $\mathcal{W}_{x,r}(\mathfrak{g})$ for affine Lie algebras $\mathfrak{g}$ other than types $A^{(1)}_N$ and $A^{(2)}_{2N}$. We believe that this paper presents a key step toward extending our construction to general affine Lie algebras $\mathfrak{g}$. In [8] and [6], the free field construction of the basic W-current $T_1(z)$ of $\mathcal{W}_{x,r}(\mathfrak{g})$ was suggested in the case when the underlying simple finite-dimensional Lie algebra is of classical type: a sum of the form $\Lambda_1(z) + \cdots + \Lambda_N(z) + \Lambda_0(z) + \bar{\Lambda}_N(z) + \cdots + \bar{\Lambda}_1(z)$ for one family of types including $A^{(2)}_{2N}$, and $\Lambda_1(z) + \cdots + \Lambda_N(z) + \bar{\Lambda}_N(z) + \cdots + \bar{\Lambda}_1(z)$ for another family including the types $C^{(1)}_N$, $A^{(2)}_{2N-1}$, and $D^{(2)}_{N+1}$; here we omit the details of the free field constructions of the $\Lambda_i(z)$. The free field construction of $T_1(z)$ thus has a similar form for these types. We would like to draw attention to the following analogy. Let $\mathfrak{g}$ be an affine Lie algebra of one of the types above, let $V_{\Lambda_1}$ be the integrable highest weight representation of $U_q(\overset{\circ}{\mathfrak{g}})$ with highest weight $\Lambda_1$, and let $V$ be the evaluation representation corresponding to $V_{\Lambda_1}$ of the quantum affine algebra $U_q(\mathfrak{g})$ with a spectral parameter $z \in \mathbb{C}^\times$. Let $n$ be the dimension of $V_{\Lambda_1}$. The evaluation representation $V$ of $U_q(\mathfrak{g})$ is self-dual except for $\mathfrak{g}$ of type $A^{(1)}_N$. Hence, we obtain a duality of the representations of $U_q(\mathfrak{g})$, similar to that in (3.3). By analogy, we expect a corresponding duality of the W-currents for the deformed W-algebras $\mathcal{W}_{x,r}(\mathfrak{g})$; here $c_i$, $0 \le i \le n$, are constants.
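For reference, the two structural results established above can be collected in display form. The right-hand side of the quadratic relation below is schematic only (the explicit constant $c(x,r)$ and delta-function terms are those given in the paper), while the duality line reproduces (3.3).

```latex
% Duality (3.3) and the schematic shape of the quadratic relations (3.4)
% of W_{x,r}(A^{(2)}_{2N}); the right-hand side of the second line is
% indicative only -- the explicit terms are those given in the paper.
\begin{aligned}
 &T_{2N+1-i}(z) = c_i\,T_i(z), \qquad 1 \le i \le N,\\
 &f_{i,j}\!\Bigl(\frac{w}{z}\Bigr)\,T_i(z)\,T_j(w)
   \;-\; f_{j,i}\!\Bigl(\frac{z}{w}\Bigr)\,T_j(w)\,T_i(z)
   \;=\; \bigl(\text{delta-function terms in } w/z\bigr).
\end{aligned}
```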
4,778.2
2021-08-31T00:00:00.000
[ "Mathematics", "Physics" ]
The Kirkwood Superposition Approximation, Revisited and Reexamined. The Kirkwood superposition approximation (KSA) was originally suggested to obtain a closure to an integral equation for the pair correlation function. It states that the potential of mean force of, say, three particles may be approximated by the sum of the potentials of mean force of pairs of particles. Nowadays, this approximation is widely used, explicitly or implicitly, in many fields unrelated to the problem for which it was suggested. It is argued that the KSA is neither a good approximation nor a bad approximation; it is simply not an approximation at all.

Introduction

In 1935, Kirkwood proposed the so-called superposition approximation. 1 It states that the potential of mean force (PMF) for three (or more) particles may be approximated by the sum of pair-wise PMFs. The PMF is defined as the change in the Helmholtz energy (in the T, V, N ensemble), or the Gibbs energy (in the T, P, N ensemble), associated with the process of bringing the three particles from infinite separation to the final configuration. [1-4] Thus, in the T, P, N ensemble

W(1, 2, 3) ≈ W(1, 2) + W(1, 3) + W(2, 3).   (1.2)

This is known as the Kirkwood superposition approximation (KSA). It was originally introduced by Kirkwood in the theory of liquids to achieve a closure to an integral equation for the pair correlation function. This approximation is intuitively very appealing. The reason is that such a pair-wise additivity assumption is a good approximation for the potential energy function (Figure 1), i.e.

U(1, 2, 3) ≈ U(1, 2) + U(1, 3) + U(2, 3).   (1.3)

In fact, the pair-wise additivity of the potential function is exact for some systems, e.g., hard spheres, point charges, point dipoles, etc. It is also a good approximation for the total interaction energy among three (or more) simple, non-polar molecules. Perhaps Kirkwood was inspired by the additivity of the potential energy (1.3) to suggest the additivity of the PMF (1.2). Over the years, many authors attempted to improve upon this approximation. 3,5-8 Most of these attempts were aimed at improving the closure relation for the integral equation for the pair correlation function. Recently, an interesting new approach was suggested by Singer, 9 who showed that the KSA may be obtained by applying the principle of maximum entropy. 10,11 The question still remains: to what extent is the KSA a good approximation? In other words, is the KSA a good approximation in itself, without any reference to a closure for an integral equation? In the original article, 1 Kirkwood used a probabilistic argument to justify the superposition approximation, namely that the probability of observing, say, three particles at a configuration R1, R2, R3 is the product of the three pair-wise probabilities. Equivalently, the superposition approximation may be formulated in terms of triplet and pair correlation functions. 1 Nowadays, it is not uncommon to encounter, especially in the biochemical literature, references to the PMF as a potential energy function. As a "potential energy," one is inclined to apply the additivity assumption (1.3) without examining its validity. This nomenclature, though common, is unfortunate. The PMF differs from a potential energy in some fundamental properties. One is that the PMF is temperature dependent, whereas the potential energy is approximately independent of temperature. The second is the non-additivity of the PMF, even when there is an exact additivity of the potential energy.
In the rest of this article, we shall present arguments showing the source of the non-additivity of the PMF. We shall show that the superposition approximation cannot be theoretically justified even when the potential energies of interaction are strictly additive. Therefore, we conclude that the KSA is not an approximation at all.

2. The source of the non-additivity of the PMF

The starting point of our argument is the following. For a classical system we can write the triplet PMF as

W(1, 2, 3) = U(1, 2, 3) + δG(1, 2, 3).   (2.1)

Equation (2.1) states that the work (here at constant T, P, N) of bringing three particles 1, 2, 3 from infinite separation to the final configuration can be written as two terms: a direct interaction energy U(1, 2, 3) and a solvent-induced part δG(1, 2, 3). The latter can be rewritten as the difference between the solvation Gibbs energy of the triplet of particles at the final configuration (Figure 2) and three times the solvation Gibbs energy of one particle in the same solvent. 4,12 Here, by "solvent" we mean all other particles in the system excluding the three particles 1, 2, and 3. To highlight the source of the non-additivity of the PMF, we shall assume that the potential energy in (2.1) is pair-wise additive (approximately or exactly). We shall show below that the non-additivity of the PMF originates from the solvation Gibbs energy. Note that when the solvent density tends to zero, all the solvation Gibbs energies on the right-hand side of (2.1) tend to zero; in this limit the PMF becomes identical with the potential energy. It is perhaps this limiting example that inspired Kirkwood, as well as many others, to adopt the superposition approximation for the PMF. Unfortunately, whenever a solvent is present, even a solvent consisting of a single molecule, the additivity assumption for the PMF becomes invalid. We first show the source of the non-additivity for the simplest solvent: a single water molecule. 13 We also use the T, V, N ensemble, simply for convenience; the conclusions are valid also for the T, P, N ensemble. The solvation Helmholtz energy ΔA* of M particles at any specific configuration is defined as 4,12

ΔA* = −k_B T ln ⟨exp(−β B_M)⟩,   (2.2)

where k_B is the Boltzmann constant, T the absolute temperature, β = (k_B T)^(−1), and B_M is the binding energy of the M particles to the solvent, defined by (2.3). The average in eq. (2.2) is over all configurations of the water molecule in the T, V, N ensemble with the probability distribution (2.4). Assuming that the total interaction energy in the system is pair-wise additive, we can write the Helmholtz energy function as in (2.5), where the integration is over all possible configurations of the water molecule. We can now see the main difference between the potential energy function and the Helmholtz energy function. The assumption of pair-wise additivity of the potential function (whether exact or approximate) is indicated in Figure 3a as bold double arrows connecting all the M particles. On the other hand, the second term on the rhs of (2.5) contains only "lines of interaction" between the M particles and the water molecule; these are shown as dashed double arrows in Figure 3a. Note that the second term on the rhs of (2.5) has no component which is pair-wise additive in the sense that the potential energy has. Therefore, one cannot assume that the Helmholtz energy function is pair-wise additive, unless the second term on the rhs of (2.5) is negligible, which means that the Helmholtz energy function reduces to the potential energy function. The conclusion reached above is valid for any solvent, not necessarily the simplest "solvent" discussed above.
In the more general case, the Helmholtz energy function is written as in (2.6), where the weight is the configurational distribution of the solvent molecules. 4,12 Again, we see that the potential function has "lines of interaction" between all pairs of particles, whereas the solvation Gibbs energy has only "lines of interaction" connecting the solute particles to water molecules (Figure 3b). The averaging over all the configurations of the solvent molecules has no effect on this conclusion. 12 Thus the general argument is that although the process of bringing the three particles together in vacuum and in the liquid is the same process (Figure 2), the works associated with these two processes are very different. The very rewriting of the PMF in the form (2.1), or the more general form (2.6), reveals the inadequacy of the Kirkwood superposition approximation. In the limit of low solvent density, ΔA* may be shown to be factorizable into a product of solvation Gibbs energies of the M single particles. In this limit, the last two terms on the rhs of eq. (2.6) cancel out, and we are left with the potential energy, which to a good approximation may be assumed to be pair-wise additive. In the next section, we shall present a few exact examples demonstrating the non-additivity of the potential of mean force. We also present experimental data showing that the extent of non-additivity of the solvation Gibbs energy is of the same order of magnitude as the solvation Gibbs energy itself. 12,14

3. Some specific examples demonstrating the non-additivity of the Helmholtz (or Gibbs) energy of solvation

We have seen that the culprit for the non-additivity of the PMF is the solvation Gibbs energy of the cluster of M particles. In this section, we present a few examples for which the PMF may be calculated exactly for both triplets and pairs of particles.

(i) Three hard spheres (HS) and a solvent consisting of a single hard sphere

This is the simplest case of solvation of a triplet of HS particles in a one-hard-sphere "solvent", denoted w. The solvation Helmholtz energy in this case is

ΔA*(1, 2, 3) = −k_B T ln ⟨exp(−β B)⟩ = −k_B T ln[(V − V_EX(1, 2, 3))/V].   (3.1)

In the second step on the right-hand side of (3.1) we wrote the average quantity explicitly. Here, the probability density of the solvent molecule is simply 1/V. Since there is only one solvent molecule, denoted w, the binding energy reduces to the solute-solvent interaction energy. We next use the property of the HS interaction potential, which makes the integrand zero whenever the solvent molecule penetrates into the excluded volume V_EX of the triplet of particles, and unity otherwise; the resulting expression is the right-hand side of (3.1). We assume that the configuration is an equilateral triangle. Since V is much larger than V_EX, we can expand ΔA* to first order in V_EX/V, and similarly for a pair of particles. The assumption of pair-wise additivity is then equivalent to the equality (3.5) between the triplet excluded-volume correction and the sum of the pair corrections. Clearly, an equality of this kind does not exist, as can be seen from Figure 4 (a numerical sketch of this comparison is given at the end of this article). Note that when the particles are sufficiently far apart, such that the excluded volume around the triplet is the sum of the excluded volumes of the individual particles, the equality (3.5) does hold; however, the solvent-induced contribution as well as the interaction energy are then zero, and this case is of no interest.

(ii) "Solvation" on an adsorbent molecule with conformational changes

This example is instructive because the "solvation" of the ligand on the adsorbent molecule can be solved exactly. We describe here the model and the results.
The details of the calculations are quite lengthy and are available elsewhere. 15 The solvent in our case is a system of adsorbent molecules. Each adsorbent molecule can be in one of two conformational states and has three adsorption sites (Figure 5). The system is simple enough that one can write down the partition function of the system and all the relevant thermodynamic properties. Specifically, we shall be interested in the analog of the PMF, or equivalently the Helmholtz energy change, for the two processes (3.8) and (3.9). 15 In process (3.8), we start with two singly occupied adsorbent molecules and transfer one ligand from one adsorbent molecule to the other to form a doubly occupied adsorbent molecule. In process (3.9), we start with three singly occupied adsorbent molecules and transfer two ligands to form a triply occupied adsorbent molecule. The Helmholtz energy change for process (3.8) is given in (3.10). In (3.10), we wrote the PMF as a sum of two terms: the direct interaction energy between the two ligands occupying two sites on the adsorbent molecule, and the indirect, or "solvent-induced," part of the Helmholtz energy change. The latter is expressed in terms of two molecular quantities (3.11): the energy levels corresponding to the two conformations H and L, respectively, and the binding energies to the sites of H and L, respectively. Similarly, the Helmholtz energy change for process (3.9) takes an analogous form. Note that in the limiting cases of these parameters, the quantities on the two sides of (3.14) become equal to unity, and hence the corresponding solvent-induced parts are zero; this is the case when there exists no indirect part of the PMF, and it is therefore of no interest to us. For any other values of the parameters, there exists no condition under which the equality sign applies in (3.14). The reader is referred to reference 15 for more details.

(iii) Ising model in a one-dimensional system

We discuss here the simplest 1-D Ising model. Particles occupying lattice points in a 1-D system can be in one of two states, say "up" or "down", or A and B. It is well known that the triplet correlation function in this system has the following property: for any consecutive triplet of particles i, j, and k, the triplet correlation function may be written as 15

g(i, j, k) = g(i, j) g(j, k),   (3.15)

where the state of each site is "up" or "down" (A or B). This property follows from the Markovian character of the 1-D Ising model. 15 Equation (3.15) is equivalent to the following equality of the analog of the PMF:

W(i, j, k) = W(i, j) + W(j, k).   (3.16)

This is sometimes referred to as the "Kirkwood superposition approximation." 15,16 However, the additivity expressed in (3.16) should be clearly distinguished from the Kirkwood superposition approximation, which for the Ising model should be written as

W(i, j, k) ≈ W(i, j) + W(j, k) + W(i, k).   (3.17)

One can show that the third term on the right-hand side of (3.17), W(i, k), is not zero. 15 It follows that the Kirkwood superposition approximation does not hold for the 1-D Ising model. Thus, although an additivity of the form (3.16) exists, it is different from the Kirkwood superposition approximation.

(iv) Experimental evidence for the non-additivity of the PMF
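Returning to example (i), the excluded-volume comparison behind (3.5) can be checked numerically. The following C sketch is a Monte Carlo estimate under assumed values of the solute-solvent contact distance sigma and the triangle side d (both hypothetical, not from this article): it computes the triplet excluded volume and the pairwise sum; by inclusion-exclusion, their difference equals the triple-overlap volume, which is exactly the term the superposition approximation discards.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dist2(vec3 a, vec3 b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

static double urand(void) { return (double)rand() / RAND_MAX; }

/* Monte Carlo estimate of the volume excluded to the center of the single
   hard-sphere "solvent" by spheres of contact radius sigma centered at the
   m given solute positions. */
static double excluded_volume(const vec3 *c, int m, double sigma,
                              double box, long trials) {
    long hits = 0;
    for (long t = 0; t < trials; t++) {
        vec3 p = { box * (urand() - 0.5), box * (urand() - 0.5),
                   box * (urand() - 0.5) };
        for (int i = 0; i < m; i++)
            if (dist2(p, c[i]) < sigma * sigma) { hits++; break; }
    }
    return (double)hits / trials * box * box * box;
}

int main(void) {
    double sigma = 1.0;   /* solute-solvent contact distance (assumed) */
    double d     = 1.5;   /* side of the equilateral triangle (assumed) */
    double box   = 8.0;   /* sampling box enclosing the excluded region */
    vec3 tri[3] = { {0, 0, 0}, {d, 0, 0}, {d / 2, d * sqrt(3.0) / 2, 0} };

    double v3 = excluded_volume(tri, 3, sigma, box, 4000000L);
    double v2 = excluded_volume(tri, 2, sigma, box, 4000000L); /* any pair */
    double v1 = excluded_volume(tri, 1, sigma, box, 4000000L);

    /* First-order solvent-induced parts, in units of kT/V, cf. (3.5):
       triplet correction vs. the sum over the three identical pairs. */
    double triplet  = v3 - 3.0 * v1;
    double pairwise = 3.0 * (v2 - 2.0 * v1);
    printf("triplet correction: %.4f   pairwise sum: %.4f\n", triplet, pairwise);
    printf("difference (triple-overlap volume): %.4f\n", triplet - pairwise);
    return 0;
}
```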
3,162.6
2013-06-15T00:00:00.000
[ "Physics" ]
Program Acceleration in a Heterogeneous Computing Environment Using OpenCL, FPGA, and CPU Reaching the so-called "performance wall" in 2004 inspired innovative approaches to performance improvement. Parallel programming, distributed computing, and System on a Chip (SOC) design drove change. Hardware acceleration in mainstream computing systems brought significant improvement in the performance of applications targeted directly to a specific hardware platform. Targeting a single hardware platform, however, typically requires learning vendor- and hardware-specific languages that can be very complex. Additionally, Heterogeneous Computing Environments (HCE) consist of multiple SOC hardware platforms, so why not use them all instead of just one? How do we communicate with all platforms while maximizing performance, decreasing memory latency, and conserving power? Enter the Open Computing Language (OpenCL), which has been developed to harness the power and performance of multiple SOC devices in an HCE. OpenCL offers an alternative to learning vendor- and hardware-specific languages while still being able to harness the power of each device. Thus far, OpenCL programming has been directed mostly at CPU and GPU hardware devices. The genesis of this thesis is to examine the connections between parallel computing in an HCE using OpenCL with CPU and FPGA hardware devices. Underscoring industry trends favoring FPGAs in both computationally intensive and embedded systems, this research will also highlight the FPGA, specifically demonstrating performance ratings comparable to CPU and GPU at a fraction of the power consumption. OpenCL benchmark suites run on an FPGA will show the importance of performance per watt and how it can be measured. Running traditional parallel programs will demonstrate the power and portability of the OpenCL language and how it can maximize the performance of FPGA, CPU, and GPU. Results will show that OpenCL is a solid approach to exploiting all the computing power of an HCE and that performance per watt matters in mainstream computing systems, making a strong case for further research into using OpenCL with FPGAs in an HCE.

Program acceleration is the idea of achieving the best program performance by utilizing all tools available, both hardware and software. Programmers create code that is clean, efficient, and built to run as parallel as possible in any computing environment. This idea is the motivation behind the Heterogeneous Computing Environment (HCE) concept and the creation of a language to maximize its capabilities, OpenCL. This thesis will briefly discuss the connection between parallel computing and heterogeneity in mainstream computing as a precursor to understanding the need for OpenCL and why it is somewhat revolutionary. This brief background in parallel programming and heterogeneity will be followed by program and data analysis using OpenCL in an HCE consisting of a CPU, GPU, and FPGA.

Motivation for Research

GPUs have cornered the lion's share of developmental research due to their ability to process large amounts of data as well as their physical presence in nearly every mainstream computing system. Up until the release of OpenCL, developers shied away from FPGAs due to the complexity of programming needed just to perform simple functions. Within the past few years, some research has been conducted using OpenCL and FPGA acceleration, such as information filtering, 1 fractional video compression, 2 and finite impulse response filters. 3
More research is warranted, however, due to the increased utilization of FPGAs, since they are the best choice for executing highly parallel programs while consuming the least amount of power possible. Measuring program execution speed is one measure of performance, but the amount of wattage consumed to achieve that speed is also critical and is referred to as performance per watt. This research will demonstrate how it has never been easier to program hardware thanks to OpenCL, and will demonstrate the performance enhancement and power savings of FPGA over CPU and GPU in an OpenCL-controlled environment. OpenCL in a heterogeneous computing environment enables computer programmers and engineers alike to maximize acceleration and performance across all hardware platforms. This research will substantiate why OpenCL should be used for program and hardware acceleration in an HCE.

Chapter Overview

Chapter 2 provides the background information and research as to the significance of OpenCL as a computing language. It reviews the main literature used in this thesis and research, reviews parallel programming and how it ties into OpenCL, analyzes speed and performance gains as they apply to hardware, and highlights the FPGA as the de facto standard in power-consumption savings in an HCE. In Chapter 3, a detailed explanation of the methodology is brought forth. The OpenCL architecture is introduced in detail, along with how it binds together the components of the HCE. As supporting evidence for this research, existing OpenCL programs will be altered to achieve maximum performance results and to demonstrate the portability of OpenCL programs and kernels. To illustrate the simplicity of hardware programming using OpenCL, some existing OpenCL programs will be compiled and run on FPGA. A feature program will demonstrate the speed and performance gains of using FPGA as a hardware platform. The Software Development Kit (SDK) for Altera will also be introduced, since understanding it is crucial to visualizing how the hardware and software communicate in the OpenCL-controlled HCE. Chapter 4 will validate the benefits of using OpenCL and FPGA in an HCE. First, comparisons will be made between the results of running a traditional C program and its OpenCL equivalents.

Parallel Programming

The need to focus more on software versus hardware to accelerate program execution and efficiency came to fruition after processor manufacturers reached the "performance wall" around 2004. Figure 2.1 shows how the performance growth of single processors and memory configurations flatlined. Sequential programming techniques such as the "divide and conquer" method, which divided a single program into smaller subsets or groups of code executed separately, were precursors to the many classes of parallelism and parallel architectures that exist today. Object-oriented computing languages such as Java and C++ were also designed initially to speed up sequential program execution, but suffered from too many data dependencies between function calls to exploit the essence of true parallelism. Instruction-level parallelism was used on superscalar and out-of-order execution uniprocessors, but it only increased the execution speed of traditional sequential programs. 4

Figure 2.1: Processor performance growth.

Around 2006, chip developers began placing multiple processors or cores onto one chip. Though the idea of having multiple computers work together on one program was not a new concept, the era of multiple cores had begun.
This opened the door for parallel processing programs (running one job on multiple processors) and job-level parallelism (running multiple jobs on multiple processors). 5 Within the realm of the personal computer, however, these multi-core machines are typically confined to sharing the same memory or physical address space. Clusters and grid computing were also introduced and are commonplace among online databases. There are physical limitations to how many cores can be placed on a board, and even though cores designed today have their own data memory, they still compete with other cores for the same physical address space, which is controlled by the CPU. Additionally, Single Program Multiple Data (SPMD) models work well for running tasks in parallel, but matters become much more complicated when data needs to be shared and synchronized between the tasks. With the release of OpenCL 2.0, shared virtual memory is used between program and device, which also supports access to data across tasks being executed on separate devices.

Speed and Performance

Before we move on to discuss the significance of running OpenCL in an HCE, we need to review some basic concepts revolving around speed and performance. In the past decade, considerable strides have been made in increasing the overall speed and performance of programs executed on hardware accelerators such as GPU and FPGA boards. Where development based upon Amdahl's law of speedup in latency of task execution on a fixed workload stops, Gustafson's law continues and proves to be a more realistic formula for calculating parallel performance, especially as it relates to the basis of this research. Gustafson's law is formulated as:

S_latency(s) = 1 − p + s·p

where:
• S_latency is the theoretical speedup in latency of the execution of the whole task
• s is the speedup in latency of the execution of the part of the task that benefits from the improvement of the resources of the system
• p is the percentage of the execution workload of the whole task that benefits from the improvement of the resources of the system, before the improvement

Gustafson's law shows that even within a constant time interval, the complexity of a program can increase as long as the quantity or quality of the resources used to compute it increases, such as the number or performance capability of processors. (A minimal C sketch evaluating this formula appears after the following section.) Power consumption must also be kept to a minimum as the tech industry designs more and more applications of embedded computers that run off battery power.

Heterogeneous Computing Environment

Now that we have reached the undeniable conclusion that more processing power is better, let's discuss how that processing power can or should be arrayed. Heterogeneous computing is the concept of using multiple "devices", each with its own processor and memory capability, to execute programs, tasks, or functions independently or concurrently with other devices. These devices include multi-core CPUs, GPUs, FPGAs, and digital signal processors. 9 As an example, GPUs have been used for decades as independent processing units to enhance computer-generated graphics for gaming. Graphics generation and rendering require complicated floating point calculations that continually place a high demand on computing resources. Since GPUs are equipped with their own on-chip memory and processor, they opened the door for speed-ups in graphics processing during hardcore gaming, especially 3D rendering.
Modern GPUs have anywhere from 32 to 64 individual cores placed directly on the chip. NVIDIA Corporation developed its own API, called CUDA, to bind high-level computing languages such as C, C++, and FORTRAN with the massively parallel computing power of its GPUs. The need for heterogeneous computing has increased exponentially over the past decade. Instead of relying on one processor to perform at the highest possible speed, multiple processors are utilized to achieve faster program execution. Parallelism in programming requires additional processors to gain a true advantage over the traditional sequential programming that has dominated software development for decades. As shown in Figure 2.3, these additional processors can be found within the multiple cores of all mainstream computing systems. Besides multiple cores, hardware components of computers today typically have their own memory and processor(s) located directly on the circuit board. SoC design takes advantage of spatial and temporal locality to reduce memory latency and achieve maximal speed-up. As mentioned earlier, manufacturers such as NVIDIA offer GPU cards for laptops and desktop computers that have their own CPU and memory on the same die, which greatly reduces memory latency, data dependencies, and data conflicts. The DE1-SoC board used for this research has a dual-core ARM Cortex-A9 MPCore processor coupled with 1 GB of DDR3 SDRAM as part of the Hard Processor System (HPS). All of these processors or cores located in the various hardware components, along with any external boards that can be connected via PCIe or USB-UART connections, provide additional computational power that can be utilized to support data and thread parallelism. These "self-contained" constructs of processor and memory constitute the "devices" of a heterogeneous computing environment. Each device in a heterogeneous environment handles the processing of complex applications differently. Complex applications can be broadly categorized based upon the workload that they place on the device being used: control-intensive applications include searching, parsing, and sorting operations; data-intensive applications focus more on image processing, simulation and modeling, and data mining; and compute-intensive applications involve iterative methods, numerical methods, and financial modeling. 10 The complexity of the task at hand will determine which device is best suited for executing it, as seen in Figure 2.4, which illustrates how a computer may handle data-parallel versus serial and task-parallel workloads. CPUs are typically best suited for control-intensive applications, while GPUs excel at processing imagery due to their massively parallel design for handling large amounts of data. FPGAs are also inherently parallel and handle complex parallelism with lower power consumption than both CPU and GPU platforms; an example of this will be examined exclusively in Chapter 4. Workload diversification has opened the door for improving performance and lowering overall power consumption for applications such as HD video conferencing and real-time language translation. Farming out the parallel processing chores to the GPU while keeping the operating system requirements and serial tasks with the CPU enables maximization of both devices, which is the essence of heterogeneous computing. Now that the layout of an HCE is clear, let's look at the acceleration of the hardware devices.
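Returning to the Speed and Performance discussion, the following minimal C sketch evaluates Gustafson's scaled speedup for a hypothetical workload; the parallel fraction p = 0.90 and the processor counts are illustrative values, not measurements from this research.

```c
#include <stdio.h>

/* Gustafson's law: S_latency = (1 - p) + s * p, where p is the fraction of
   the workload that benefits from the added resources and s is the speedup
   of that part (e.g., the processor count for a perfectly parallel part). */
double gustafson(double s, double p) {
    return (1.0 - p) + s * p;
}

int main(void) {
    double p = 0.90;                 /* hypothetical parallel fraction */
    int procs[] = {16, 64, 256};     /* hypothetical processor counts  */
    for (int i = 0; i < 3; i++)
        printf("s = %3d -> scaled speedup = %.1f\n",
               procs[i], gustafson(procs[i], p));
    return 0;
}
```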
In order to tap into the acceleration power of hardware such as multi-core CPUs, GPUs, or FPGAs, one must acquire an in-depth knowledge of high-level (C, C++, etc.), hardware description (VHDL for FPGA), or even vendor specific (CUDA) languages, which can be very complex and are only applicable to the one specific piece of hardware. If only there were a way to accelerate any hardware device by utilizing one programming language. This is precisely what OpenCL offers, which will be illustrated in Chapter 3. The use of a heterogeneous computing environment, coupled with the OpenCL language and FPGA, offers an alternative framework for maximizing performance, decreasing power consumption, and working around the aforementioned challenges.

FPGA Programming and Design

FPGA design has evolved significantly over the past few decades. In the past, using an FPGA in your development environment required extensive programming just to get your FPGA to perform some simple functions. As a result, FPGAs have been by and large avoided by program developers. FPGAs were the forerunners of the gaming industry, back when games such as Space Invaders and Pac-Man ruled the arcade. Designed as simple two-dimensional arrays of logic gates connected in parallel that are field programmable, they can perform complex computations at a fraction of the performance cost of their hardware brethren. The large number of logic elements typically found on an FPGA provides the means for multi-threaded parallelism. A modern FPGA design such as the DE1-SoC board from Altera used in this research contains tens of thousands of logic elements and thousands of memory blocks in a parallel design that enables multiple OpenCL workgroups to be processed concurrently. Prior to OpenCL, programming an FPGA required learning complex hardware description languages such as Verilog or VHDL, along with using Electronic Design Automation (EDA) tools, in order to properly convert your design idea into a complex logic circuit that programmed the FPGA board. With the advent of the OpenCL language, which we will discuss in more detail in Chapter 4, FPGAs can now be programmed without having to learn Verilog or VHDL. The SDK supplied by the vendor of each OpenCL compatible FPGA board handles the chore of creating the complex logic circuit needed to program the FPGA board. The SDK utilizes a software program in the background, such as Quartus II, in order to create the hardware configuration file. As mentioned earlier, the FPGA is becoming the hardware processor of choice in embedded applications where power is at a premium. The FPGA has a fine-grain parallelism architecture, and by using OpenCL you can generate only the logic you need, consuming as little as one fifth of the power of the hardware alternatives. It is clear that there is a solid connection between OpenCL and FPGA that may see a sharp increase in FPGA application in environments beyond the embedded market.

METHODOLOGY

Program acceleration can be achieved in many ways. No one way will work all of the time. Sometimes data level parallelism is more critical than job level parallelism, or image processing enhancement is more crucial than the speed at which linear algebra problems are calculated. But it is possible that there are alternate ways to accelerate hardware in the interest of achieving maximum results with minimal power consumption. The goal here is to shed light on a new paradigm that attempts to tackle complex programs in the most advantageous way possible instead of being constrained to one device.
This research is rooted in five key reasons to substantiate the argument for using OpenCL to program hardware, especially FPGA, in an HCE:
1. Heterogeneity in program design provides access to more processing power
2. OpenCL makes programming hardware easier
3. OpenCL provides program portability along with speed and performance advantages
4. FPGAs can perform just as fast as GPUs and CPUs at a fraction of the power consumption
5. Parallel programming can be exploited using OpenCL

Reason one was covered with a basic explanation of a heterogeneous computing environment and how it correlates to OpenCL. A "Hello World" example will be used not only to template the basic functions of the OpenCL API, but also to demonstrate just how simple it is to program hardware such as the FPGA. Portability, speed, performance, and power savings will be demonstrated using a vector addition program run in the HCE. Wrapping up the supporting arguments will be a Black-Scholes financial options pricing program converted from C to OpenCL to illustrate how parallelization of existing software and hardware can be exploited.

Research Setup

The components of my heterogeneous computing environment and the interconnectivity of the hardware and software components are diagramed in Figure 3.1. The hardware and software used consisted of the following:

Hardware: Altera SDK and DE1-SoC Development Board

In order to build and run programs within an OpenCL environment, you need to install an SDK that is vendor specific to the type of hardware in your HCE. Since the DE1-SoC board from Altera is used in this research, it was necessary to install the Altera SDK for OpenCL to build and run kernels on that board.

OpenCL Architecture

Open Computing Language (OpenCL) is not only designed to communicate with all devices in a heterogeneous computing environment, but also to direct the execution of programs by assigning programming tasks to the best device available. It bridges the gap between the computing power of multiple devices, serving as one computing language that software programmers can learn fairly quickly and use to interact directly with hardware. Originally created by Apple, it is an open standard that was first released in 2008 by the Khronos Group, which still manages the libraries and releases. Khronos developed the OpenCL standard so that an application can offload parallel computation to accelerators in a common way, regardless of their underlying architecture or programming model. Not only is it designed to support the heterogeneous computing environment, it also affords cross-platform portability. A program written in OpenCL can be run on virtually any machine that has an OpenCL SDK and the applicable libraries installed. The OpenCL C language is a restricted version of the C99 language, but it also has wrappings that support C++, Java, Python, and .NET. The kernel programming model builds kernels using constructs such as work-items and work-groups to obtain maximum parallelization based upon the targeted device. The host is responsible for compilation of the main source code using a standard C compiler and for loading the host binaries into memory. The main source code contains the OpenCL commands and function calls, contains any constant data, and controls the execution of a kernel on a device. Runtime compilation prepares the kernel to be run on the best device based on computational workload, which maximizes both concurrency and parallelism.
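To illustrate the kernel programming model in the smallest possible terms, the sketch below is a hypothetical kernel (not one of the thesis programs) in which each work-item uses its global ID to process exactly one array element; work-group information is available through analogous built-in calls.

// Each work-item scales one element; get_global_id(0) returns this
// work-item's unique index within the NDRange.
__kernel void scale(__global float *data, const float factor) {
    int gid = get_global_id(0);
    data[gid] = data[gid] * factor;
}

Launching this kernel over an NDRange of N work-items processes all N elements in parallel, with the runtime free to split the range into work-groups sized for the target device.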
After the kernels are created, they are compiled by an offline compiler to create the hardware configuration files (.aoco and .aocx). The .aocx file is used to program the FPGA, enabling it to run the kernel. It takes the place of the complicated VHDL programs of the past that were needed to program an FPGA. The memory model consists of buffers, images, and pipes that handle memory allocation, temporary storage of data, and prioritization of how data items are stored. OpenCL defines memory regions as either host or device specific. Global memory (DDR and QDR) is visible to all work-items executing a kernel, constant memory stores data that remains constant, local memory (on-chip memory) shares data between work-items, and private memory (on-chip registers) is unique to an individual work-item. Figure 3.5 shows the layout of host and device memory regions within OpenCL.

Figure 3.5: OpenCL memory model.

OpenCL Runtime and FPGA Programming

When learning a new computer language, most authors and instructors start off with a "Hello World" version of the code that shows a few basic commands, function calls, include files, and the like. OpenCL is no exception. In this case, the hello world program is used to demonstrate how the OpenCL architecture interacts with the associated hardware in your HCE. A full listing of the "Hello World" code is provided in Appendix A. We will examine this program by breaking it down into smaller sections of OpenCL code that align with the application steps that all OpenCL programs follow. The final step will be to convert it to run on the DE1-SoC FPGA, demonstrating just how easy it is to program hardware with OpenCL.

Since OpenCL is essentially a derivative of the C language, it follows the same principles of utilizing source and header files. In the source file, there is one main function that controls the order of program execution along with the program-specific function definitions and declarations. It also provides space to declare host-side memory functions and operations, variables, and constants. The bulk of the standard OpenCL function definitions and declarations are located in a cluster of header files that are continually updated as newer versions of OpenCL are released. All OpenCL header files are available for download from the Khronos Group website as well as popular developer websites such as GitHub. As stated earlier, all OpenCL programs follow ten main steps that set up the OpenCL environment and run kernels on existing devices. Below is a detailed breakdown of each step along with the associated code.

1. Discovering the platform and devices. In order for a kernel to run on a device, the host must first determine if an OpenCL platform is present and how many devices are associated with it. Figure 3.6 shows the standard function calls for finding the OpenCL platform and devices.

2. Creating a context. After the platform and devices have been discovered, the host program creates a context that includes all devices. Figure 3.7 shows the standard function call for context creation along with error checking.

// Create the context.
context = clCreateContext(NULL, 1, &device, NULL, NULL, &status);
checkError(status, "Failed to create context");

3. Creating a command queue. Work is submitted to a device through a command queue created for that device within the context; kernel launches and memory transfers are enqueued on it (a condensed sketch of these calls appears below).

4. Creating memory objects (buffers) to hold data. Memory objects enable the transfer of data between the host and the device. The buffer is associated with the context on the host side, making it accessible to all devices in that context.
Flags can also be used here to specify if data is read-only, write-only, or read-write. The hello world program has no need to hold data in memory, so Figure 3.9 is a code snippet from a vector addition program to show the creation of buffer objects.

5. Copying the input data onto the device. Data is copied from a host pointer to a buffer, which is ultimately transferred to the device when needed. Figure 3.10 shows the standard function call from the vector addition program.

6. Creating and compiling a program from OpenCL C source code. The hello world kernel is stored in a character array named "helloWorld", which is used to create a program object that is compiled. During compilation, the information for each targeted device is provided as needed.

7. Extracting the kernel from the program. Since the hello world kernel is embedded in main.cpp as the character array "helloWorld", it must be extracted in order to be executed independently on a device. Figure 3.12 shows the standard function call for kernel extraction.

// Create the hello world kernel.
kernel = clCreateKernel(program, "helloWorld", &status);
checkError(status, "Failed to create kernel");

8. Executing the kernel. The kernel arguments are set and the kernel is enqueued on the command queue for execution on the device (see the sketch below).

9. Copying output data back to the host. This step reads data back to a pointer on the host. Since the hello world program does not use buffer objects, Figure 3.14 is a code snippet from the vector addition program that copies output data back to the host.

Figure 3.14: Copy data back to host.

10. Releasing the OpenCL resources. The OpenCL resources that were allocated for kernel execution are released. This is similar to C or C++ programs where memory allocations (for example) are freed at the end of program execution. Figure 3.15 shows the standard function calls to release resources.

Converting the hello world program (or any program for that matter) to run on the DE1-SoC FPGA is as easy as creating a hardware configuration file (.aocx) and an executable file for the Linux-based environment used on the DE1-SoC board. As shown in Figure 3.1, the SoC EDS cross-compiler is used to "make" a Linux executable file that runs the host code on the FPGA using the onboard processor. This executable file is built using the main.cpp file and includes linkages to any libraries and additional source code as needed. The hardware configuration file is built from the kernel.cl file using the Altera Offline Compiler (AOC), which communicates with the Quartus II software. This creates a VHDL-like version of the kernel that can be run directly on the FPGA. It is only necessary to have Quartus II installed for the AOC to use; there is no need to understand how to use this IDE or even to understand VHDL programming. The SDK does all the heavy lifting of the hardware configuration for you.

Maximizing Speed, Performance, and Portability

The advantages of using OpenCL can best be highlighted with example code that is straightforward and easy to manipulate in order to achieve easily recognizable results. It should be obvious to all software developers and hardware users that there is no one-stop-shop solution to solving computational problems. Some programs are designed to maximize the advantages offered by a specific piece of hardware, and vice versa. But OpenCL differs in that the program is created to be used across multiple hardware configurations. This opens up the possibility of giving the OpenCL runtime the flexibility to choose the hardware device that is best suited for the task at hand.
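Returning to the application steps: the command-queue creation (step 3) and kernel launch (step 8) can be sketched in the same style as the earlier snippets. The buffer argument, the element count N, and the work-group size of 64 are illustrative assumptions, not values from the appendix listings.

// Step 3: create a command queue for the chosen device.
queue = clCreateCommandQueue(context, device, 0, &status);
checkError(status, "Failed to create command queue");

// Step 8: set the kernel arguments, then enqueue the kernel.
status = clSetKernelArg(kernel, 0, sizeof(cl_mem), &buffer);
checkError(status, "Failed to set kernel argument");

size_t global_size = N;    // total number of work-items
size_t local_size  = 64;   // work-group size, tunable per device
status = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                &global_size, &local_size, 0, NULL, NULL);
checkError(status, "Failed to enqueue kernel");
clFinish(queue);           // block until the device has finished

With the queue in place and the kernel enqueued, everything else is data movement, which is exactly what the example programs below exercise.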
To emphasize the favorable attributes of OpenCL, I used a vector addition program that can be quickly altered to illustrate different results. The complete code listing for vector addition is provided in Appendix B.

Converting from C to OpenCL

To best demonstrate the effects of converting an existing program into OpenCL, I selected an existing code example that could be optimized to reap the benefits of running in an HCE using OpenCL. The financial market uses what is referred to as vanilla option pricing to determine call and put options for assets. As shown in Figure 3.17, the randomization of x and y within the do…while loop, which in itself only runs a few times to arrive at the final value for x, was consuming the bulk of the effort from the CPU. The challenge becomes how to convert this C-based program into OpenCL while optimizing program execution. The OpenCL approach is to look for functions or blocks of code, as in Figure 3.17, that can be converted to a kernel for faster execution on a specific device or on multiple devices. This is akin to data and thread-level parallelization, but with the caveat of having more flexibility to tweak the number of work-items in a work-group and achieve even greater effects on the target hardware. The results and analysis gathered from this program conversion are discussed under the Black-Scholes sub-heading in Chapter 4.

FINDINGS

In order to quantify the results and analysis of using OpenCL in an HCE on multiple devices, this research focuses on three overarching use cases. The first use case focuses on a commonly used vector addition algorithm to compare and contrast results across different hardware devices. The second use case involves converting an existing program from a traditional high-level language such as C to OpenCL to demonstrate the benefits of such a conversion. The third use case focuses on running benchmarks that currently exist for GPU platforms on the FPGA. The third use case also involves converting code, but demonstrates the portability of the OpenCL language. Common to all cases is the demonstration that programming hardware such as the FPGA has never been easier.

Vector Addition Program

The vector addition program takes two vectors and adds them together, creating a third resultant vector. This particular OpenCL rendition of the program is a good example of how pipeline parallelism can be exploited on an FPGA to achieve results that are similar to a GPU. Figure 4.1 depicts a representation of the load and store operations that occur within each logic element of the FPGA. The DE1-SoC board has 84,000 logic elements that can be utilized simultaneously by segregating the input vectors into work groups. The FPGA handles each work group as if it were a thread during parallel execution. When the loads of thread ID 0, for example, are passed to the ALU for addition, the next two thread IDs are fetched from the host side memory buffer.

Figure 4.1: Vector addition pipeline.

The program is set up to run on all devices that are available, so it is an excellent example of why multiple devices within an HCE are important and how OpenCL can best leverage those devices. It can be adjusted to run the entire kernel on one device only, to mirror the vector addition on multiple devices simultaneously, or to divide the vector workload across the devices. Buffer objects are created for each device to facilitate the transfer of data between the host and device memory.
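The kernel at the heart of such a program is short. A representative version is shown below; it is the standard formulation of vector addition in OpenCL C and not necessarily identical to the Appendix B listing.

// Representative vector addition kernel: each work-item adds one
// pair of elements, indexed by its global ID.
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *result) {
    int gid = get_global_id(0);
    result[gid] = a[gid] + b[gid];
}

Because each work-item touches only its own index, the same kernel vectorizes on a CPU, spreads across GPU cores, and pipelines through FPGA logic elements without modification, which is what makes it a useful portability benchmark.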
For my setup, the DE1-SoC has its own memory and processor, so running the main program and kernel directly on the board eliminates the memory latency that would be present when running the source program from the host computer, due to spatial locality. Here is the method used to conduct a comparative analysis of the vector addition kernel run on 3 different devices:

Devices:
• CPU (4x compute units* at 1.7 GHz) (Intel Core: 8 SP GFLOPS/cycle)

Program alteration: altered the data bandwidth by changing the total elements to be processed:
• N = 500,000
• N = 1,000,000
• N = 10,000,000

Captured the system clock time, averaged over 5 runs for each value of N.

Results:
• Results in Figure 4.2: straight system clock time for kernel execution
• Results 4-6: adjusted times to account for core advantage (FPGA baseline with 1 CU)
• Results 7-9: adjusted times for clock rate advantage (CPU baseline at 1.7 GHz)

With a level playing field for each processor (excluding any data conflicts caused by the shared memory between the CPU and GPU), the adjustments listed in Table 4.1 yield the results in Figure 4.3. This is a prime example of how the advantage of an FPGA versus a CPU or GPU can be overlooked. Given the same clock rate and equal program workload distribution, the FPGA outperforms the CPU, is very close to the GPU, and consumes far less power than both.

Black-Scholes Program

The Black-Scholes program served as an entry-level program that offered all the challenges of converting a standard C-based program into OpenCL, along with all the benefits of doing so. Achieving program optimization without just creating redundant code is the goal of any developer. The best method is to work from the inside out, looking for functions and blocks of code that could benefit from parallelization. If the number of simulations for the Black-Scholes program is set at 100,000, for example, the function is called 100,000 times by both the put and call functions respectively. Effective resource and memory management enables the best balance between CPU and FPGA for overall program execution.

Rodinia Benchmarks

The Rodinia Benchmark Suite, version 3.1, offers a wide range of benchmarks that are targeted at CUDA, OMP, and OCL applications on GPU hardware. Since no OpenCL benchmarks exist for FPGAs, the Rodinia benchmarks were chosen because they contain OpenCL code and serve as an opportunity to maximize the benefits of running OpenCL in an HCE. Since the benchmarks were designed for optimal performance on a GPU, not all of them are good candidates for conversion and execution on an FPGA. The benchmarks chosen for this research were algorithms that could benefit from vectorization, were heavy in data transfer between local, global, and host memory, and utilized multiple kernels.

The first benchmark chosen was the K-Means data-mining algorithm, which is heavy in data parallelization. The algorithm is designed to take an initial set of input data and sort it into clusters based upon the data's unique features. Within each of these clusters, one item is determined to be the centroid for that particular cluster. The program uses the Euclidean distance metric to calculate distances between data elements. The Rodinia version of the K-Means program maintains all the data points, features, cluster centers, and data/cluster integrity stored in separate arrays. Pointers are used to reference all of these arrays while performing the I/O, clustering, centroid computation, and cluster reassigning based upon the proximity of the data elements to the centroids.
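The core clustering step maps naturally onto OpenCL. The sketch below is a simplified, hypothetical cluster-assignment kernel in which each work-item classifies one data point by squared Euclidean distance; the Rodinia implementation differs in its details and array layout.

// Simplified cluster assignment: each work-item finds the nearest
// centroid for one data point and records its cluster membership.
__kernel void assign_clusters(__global const float *points,    // npoints x nfeatures
                              __global const float *centroids, // nclusters x nfeatures
                              __global int *membership,
                              const int nfeatures,
                              const int nclusters) {
    int gid = get_global_id(0);        // index of this work-item's point
    float best_dist = FLT_MAX;
    int best = 0;
    for (int c = 0; c < nclusters; c++) {
        float d = 0.0f;
        for (int f = 0; f < nfeatures; f++) {
            float diff = points[gid * nfeatures + f]
                       - centroids[c * nfeatures + f];
            d += diff * diff;          // squared Euclidean distance
        }
        if (d < best_dist) { best_dist = d; best = c; }
    }
    membership[gid] = best;
}

Because every point is classified independently, the work-group size can be tuned freely to the device, which is exactly the knob exploited in the experiments described next.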
The program is divided into separate .c files, with the main execution residing within the kmeans.cpp file. As mentioned earlier, the Rodinia benchmarks are optimized to run on a GPU. The K-Means algorithm provides multiple points of manipulation to achieve different results, which benefits experimentation with both software and hardware. Running the K-Means benchmark as downloaded showed results that favored the GPU over the FPGA. The OpenCL kernels within this benchmark are quite simple and don't provide an opportunity for FPGA optimization. After a thorough examination of the code, I decided to manipulate the constant number of clusters while adjusting the work-group size to best match the FPGA's capabilities. The combination of cluster count and work-group size that was best for the FPGA produced the worst performance for the GPU. Figure 4.8 shows the optimal results obtained when working on a 100,000-element data set with 10 clusters and a work-group size of 1024.

The next Rodinia benchmark selected for analysis was the hybrid sort program, which is a combination of two popular sorting algorithms, bucket sort and merge sort. The program is designed to take a list of floats in random order and run a bucket sort algorithm to separate the input data into groups, or buckets. This step of the sorting process can be configured to run across all devices in the HCE, tracking the CPU execution time of each device. Next, the buckets are stored in a vector that is fed into the merge sort portion of the program. The merge sort can also be configured to run on all devices, and the CPU execution time is tracked. As is a common theme with OpenCL optimization, examining the code execution and determining the proper work-group size to match the targeted hardware device can often be all that is required to achieve the best results. Figure 4.9 shows the result of the merge sort and how the DE1-SoC board achieved a slight advantage over the GPU when the work-group size was set at 1024. Also, the hierarchy of program execution between the NDRange, work-groups, and work-items underlines one of the major benefits of OpenCL in its portability and scalability.

CONCLUSION

This research was dedicated to determining if program acceleration with OpenCL in a Heterogeneous Computing Environment consisting of FPGA, CPU, and GPU is a credible approach to increasing speed and performance and, ultimately, to minimizing power consumption. Today's computing industry has a higher demand than ever before for maximal computational performance with minimal power consumption. We have moved well beyond the era of processor overclocking and moved on to multi-pronged approaches to enhancing CPU performance such as multi-cores, distributed computing, and now OpenCL. Being able to maximize the performance of all hardware platforms simultaneously, independently, or sequentially is what OpenCL is all about.

Learning a new computer language can always be a challenge. Time and repetition are paramount, as is tapping into the best resources available. Compared to other high-level languages such as Java, OpenCL may be somewhat intimidating for developers without C or C++ experience. I found the basics of the language straightforward to learn, and there are multiple online training sessions offered at no cost from Intel which will get even the novice programmer up and running in relatively short order. The more detailed and in-depth programming lessons will cost money, however.
Since the Khronos Group is the manager of the official OpenCL API, their website is naturally an excellent repository of source code and libraries. GitHub can also be used, but be cautious not to mix up header files for different versions of OpenCL. The software development kit used was provided by Altera Corporation, which required me to obtain a student license. Altera has since been acquired by Intel, so all software downloads can be obtained through their website as well. Intel will direct you to use Microsoft Visual Studio 2010 for compiling your host code for use with their SDK. Learning the API of the SDK also takes some time, but the instructions are very well written and come with simple code examples to help understand the interoperability between all devices in your environment.

The results gathered from the vector addition and Black-Scholes programs definitely underline how quickly one can begin to code in OpenCL and obtain immediate performance improvements that are manageable and scalable. Experimenting with work-item quantities and work-group sizes gives you the power to parallelize your applications even further than normal thread and data level parallelization techniques. Utilizing the multiple NDRange dimensions (3 in total) offered by the OpenCL platform gives you additional scalability options; these increase the need for synchronization between work-items, work-groups, multiple kernels (if used), and host- and device-side memory, all of which is controllable by either the host or the device. Programming hardware has never been easier. OpenCL eliminates the need to learn complex hardware programming languages such as Verilog or VHDL to program an FPGA. Kernels can be built to run on multiple devices or be tailored to maximize the efficiency of a specific hardware platform. Having device-side and host-side memory models that are scalable gives additional programming flexibility for larger and more data-intensive applications.
Short-term treatment with Uncaria tomentosa aggravates the injury phenotype in mdx mice

Introduction: Uncaria tomentosa (Willd. ex Roem. & Schult.) DC. (Rubiaceae), or UT, is a medicinal plant with antiviral, antimutagenic, anti-inflammatory and antioxidant properties. Duchenne muscular dystrophy (DMD) is a severe muscle wasting disease caused by mutations in the dystrophin gene; this deficiency leads to sarcolemma instability, inflammation, muscle degeneration and fibrosis. Objective: Considering the importance of inflammation to dystrophy progression and the anti-inflammatory activity of UT, in the present study we evaluated whether oral administration of UT extract would ameliorate dystrophy in mdx mice, a DMD model. Methods: Eight-week-old male mdx mice received daily oral administration of UT at 200 mg/kg body weight for 6 weeks. General histopathology was analysed, and muscle tumor necrosis factor α, transforming growth factor-β, myostatin and osteopontin transcript levels were assessed. The ability of mice to sustain limb tension to oppose their gravitational force was measured. Data were analysed with the unpaired Student's t-test. Results: Morphologically, both untreated and UT-treated animals exhibited internalised nuclei, increased endomysial connective tissue and variations in muscle fibre diameters. Body weight and muscle strength were significantly reduced in the UT-treated animals. Blood creatine kinase was higher in UT-treated compared to untreated animals. In the tibialis anterior muscle, myostatin transcripts were more highly expressed in the UT-treated animals, while in the diaphragm muscle, transforming growth factor-β transcripts were less expressed in the UT-treated animals. Conclusion: While previous studies identified anti-inflammatory, antiproliferative and anticarcinogenic UT effects, the extract worsened the dystrophic muscle phenotype after short-term treatment in mdx mice.
INTRODUCTION

Duchenne muscular dystrophy (DMD) is a severe and degenerative muscle wasting disease caused by frame-shift mutations, mainly deletions, in the dystrophin gene (located on Xp21); it occurs in 1 in 3,600-6,000 newborn boys 1 . Dystrophin, the product of the dystrophin gene, is a 427 kDa subsarcolemmal protein that assembles with some transmembrane and cytosolic proteins to form the dystrophin-associated glycoprotein complex (DAGC). DAGC mediates interactions among the cytoskeleton, membrane and extracellular matrix and promotes mechanical stability during muscle contraction. Additionally, dystrophin acts as a scaffold in different signalling pathways. Therefore, the absence of dystrophin disrupts DAGC, causes sarcolemma instability and elevates calcium influx, inflammation and muscle degeneration 2 . DMD symptom onset usually occurs between 2 and 5 years of age. Initially, proximal muscle weakness is commonly observed, and during the toddler years, patients usually present delays in motor milestones, toe-walking, difficulty in rising from the floor (Gowers maneuver) and frequent falls. Loss of ambulation, scoliosis and cardiac and respiratory complications usually occur in the second decade of life, and as the disease progresses, patients usually die due to cardiac and/or respiratory failure 3 . Muscle dysfunction and a progressive decrease in force generation are the main consequences of the classical hallmarks of muscular dystrophy, inflammation and fibrosis 4 . However, the chronicity of the inflammation is potentially responsible for enhancing muscle damage and reducing regenerated muscle fibres 5 .

Currently, dystrophic animal models serve an important role in DMD preclinical applications. The mdx mouse (X-chromosome-linked muscular dystrophy) 6 is the most widely used animal model for investigating DMD. It exhibits a single point mutation in exon 23 of the dystrophin gene; this change prevents full-length dystrophin expression. Although this genetic defect resembles human DMD, mdx mice present phases where muscle degeneration is replaced by regeneration. Due to this feature, their dystrophic phenotype is considered to be milder compared to humans, and researchers must pay close attention when considering the age of the mdx mice used for studies. At 2 weeks old, mdx skeletal muscles are no different from those of normal mice. Alterations usually appear at 3 to 6 weeks of age, with inflammation, necrosis and fibrosis. After this age, most skeletal muscles become more stable due to significant muscle fibre regeneration; however, the diaphragm exhibits progressive degeneration 7 .

TNF-α is an up-regulated pro-inflammatory cytokine in DMD and acts as a chemotactic factor attracting inflammatory cells to the injured area 8 . Transforming growth factor-β (TGF-β1) and osteopontin are also increased in dystrophic skeletal muscle and participate in extensive disorganization and structural remodeling of the extracellular matrix as part of the fibrotic process 9 . Therefore, these are important biomarkers to be evaluated in studies that propose new therapeutic approaches for DMD. Myostatin, a muscle growth inhibitory protein, is also touted as an important biomarker for neuromuscular diseases 10 .

Uncaria tomentosa (Willd. ex Roem. & Schult.) DC. (Rubiaceae), usually known as "Cat's claw" or "uña de gato" and heretofore abbreviated as UT, is a vine located in the Peruvian Amazon and other South and Central American tropical areas. UT aqueous extract and decoctions are traditionally used by Ashaninka Indians to treat cancer, heart disease and inflammatory diseases. UT has pertinent antiviral, antimutagenic and antioxidant properties.
Additionally, UT can regulate the pro-inflammatory cytokine tumor necrosis factor alpha (TNF-α), namely by inhibiting its secretion by lipopolysaccharide (LPS)-activated THP-1 monocytic cells and also by preventing nuclear factor-kappa B (NF-κB) activation 11,12 . UT extract is composed of a mixture of quinovic acid glycosides and pentacyclic or tetracyclic oxindole alkaloids, including pteropodine, speciophylline, uncarine F, mitraphylline, isopteropodine and isomitraphylline 13 .

Considering the potential benefit of UT in inflammatory processes and the challenge of controlling the dystrophic hallmarks in DMD, the purpose of this study was to evaluate the potential benefits of UT extract on muscle force and histopathology of the mdx mouse. The Kondziela test, histological analysis and quantitative real-time polymerase chain reaction (qRT-PCR) for TNF-α, TGF-β, osteopontin and myostatin of the tibialis anterior (TA) and diaphragm (DIA) muscles were performed to examine the effect of UT extract.

UT extract

We used UT aqueous extract at a dose of 200 mg/kg body weight 14 , which had standardized total alkaloid contents corresponding to 5.0%-0.5% of mitraphylline, as measured by chromatography, and was produced by the Herbarium Laboratory (São Paulo, Brazil). The extract was derived from U. tomentosa root bark. UT powder was dissolved in 0.9% saline and stored in a refrigerator throughout the study.

Experimental mice

All protocols were approved by the Animal Care and Use Committee of the Centro Universitário FMABC, protocol 13/2014, and were performed in accordance with National Institutes of Health guidelines. Male 8-week-old mdx mice were utilised (untreated n=6; UT-treated n=10). The untreated animals received 0.2 mL saline and the treated animals received 0.2 mL aqueous UT extract (200 mg/kg body weight), administered daily via gavage for 6 weeks. All animals were weighed weekly on a digital scale, and the weight was recorded. All quantifications were made blinded.

Muscle strength assay

The muscle strength assay was performed with the four limb hanging test to measure the efficacy of the UT treatment (untreated n=6; UT-treated n=10). The test was performed according to the TREAT-NMD protocol (DMD_M.2.1.005), "The use of four limb hanging tests to monitor muscle strength and condition over time" 15 (George Carlson, last reviewed 29 June 2016). Briefly, the animals were placed individually in the centre of a wire grid. This grid was inverted and raised 30 cm above a box full of wood chips, and the length of time that the animal was able to stay on the grid without falling was recorded.

Holding impulse (gm·sec) = body weight (grams, gm) × the time the animal remained suspended (seconds, sec).

The holding impulse is used as an attempt to correct for the negative effect of body mass on the hang time, since the animal must oppose its own gravitational force. For example, a hypothetical 30 g mouse that remained suspended for 60 seconds would score 30 × 60 = 1,800 gm·sec, the same score as a 36 g mouse that remained suspended for 50 seconds.

Analysis of creatine kinase

After the treatments, the animals (untreated n=6; UT-treated n=10) were anesthetized with an intraperitoneal injection of ketamine (50 mg/kg body weight) and xylazine (10 mg/kg body weight). Blood samples were collected by caudal vena cava puncture and used to determine CK activity. The samples were centrifuged (Sigma® 3-18K refrigerated centrifuge) at 3000 rpm for 10 minutes at 4 °C. The serum obtained was used to determine the amount of CK using the Roche/Hitachi cobas c 701/702 modules of the cobas® 8000 analyzer system.
Histological analysis

After the 6-week treatment, the animals were anesthetised as described in item 2.4 (untreated n=6; UT-treated n=10). A general morphological analysis was performed on the frozen tibialis anterior (TA) and diaphragm (DIA) muscles from one side, chosen randomly. These muscles were chosen because they are differentially affected in mdx mice: the DIA muscle is severely affected, while the TA muscle displays considerable regeneration 7 . The samples were stained with hematoxylin and eosin (H&E), and the total numbers of fibres with a central nucleus (indicative of regenerated muscle fibres) and fibres with a peripheral nucleus (characteristic of normal fibres) were counted. Two sections of each muscle were analyzed by light microscopy (Nikon Eclipse E200) connected to a camcorder (Moticam 1000) with a 10x objective, using ImageJ software (ImageJ, http://rsb.info.nih.gov/ij/index.html). Each muscle section was divided into 10 fields that were photographed in randomized order. All the fibres of the 10 fields (normal and regenerated fibres) were counted to estimate the total population of fibres of each muscle. The percentage of normal and regenerated fibres of the studied animals and the average number of cells in each field were obtained. The cross-sectional area of individual myofibres (measured as Feret's diameter) was determined from digitised images using ImageJ (http://rsb.info.nih.gov/ij/index.html).

Statistical analysis

Data are presented as mean ± standard deviation. Statistical differences between the groups were analysed with an unpaired Student's t-test, and grouped comparisons were carried out using two-way analysis of variance (ANOVA) followed by the Bonferroni post-test, using Prism software (GraphPad, San Diego, CA). p<0.05 was considered statistically significant.

Effect of UT on body weight, muscle strength and serum CK levels

The treatment did not change the body weight of the mdx mice when compared over the five weeks. However, at the fourth week, UT-treated mdx mouse body weight was approximately 13.8% lower than the untreated mouse body weight (Figure 1A). Regarding the inverted grid test, mdx mice from both groups did not show significant changes in the mean maintenance time throughout the weeks. However, at the end of the fifth week, there was a significant reduction (46.5%) in the maintenance time for the UT-treated compared to the untreated mice (Figure 1B). Blood CK levels were significantly increased in the UT-treated compared to the untreated animals (Figure 1C).
UT effect on the TA and DIA histopathological pattern

In TA muscles, there was a significant decrease (23.8%) in the percentage of centrally nucleated muscle fibres, but not in the cell count under 10x magnification, in the UT-treated compared to the untreated group (Figure 2C-D). When analysing the minimum Feret's diameter, there was a significant increase in the 40-µm-diameter fibre population and a significant reduction in the 60-µm-diameter fibre population in the UT-treated compared to the untreated mice (Figure 2B).

In DIA muscle, there was a significant increase (34.4%) in the percentage of centrally nucleated fibres in the UT-treated compared to the untreated group (Figure 3C), but when the number of cells per field (10x magnification) was counted, there was no statistical difference between the groups (Figure 3D). Quantification of the minimum Feret's diameter demonstrated a significant increase in the 60-µm-diameter fibre population and a significant reduction in the 90-µm-diameter fibre population in the UT-treated compared to untreated mice (Figure 3B).

UT effect on gene expression in TA and DIA muscles

In the TA muscle, qRT-PCR revealed a significant increase (approximately 376.1%) in myostatin expression in UT-treated compared to untreated mdx mice (Figure 2E). Comparatively, in the DIA muscle, there was a significant reduction in TGF-β1 expression and a nonsignificant reduction in myostatin expression in the UT-treated compared to the untreated group (approximately 82.8% and 88.1%, respectively; Figure 3E-H).

DISCUSSION

The results of the present study demonstrated a worsening of dystrophic muscle injury after six weeks of administration of 200 mg/kg aqueous extract of UT root bark, based on molecular and histopathological markers (Figure 4). Muscle fibre injury related to altered sarcolemma permeability is evidenced by the increased plasma CK level in dystrophic mdx mice 6 . As described by Maglara et al. 16 , this finding reflects the intense and severe state of muscular injury in dystrophic animals. Thus, in the present study, UT administration indicated a myotoxic effect in the mdx mouse, as evidenced by the increase in blood CK concentration. Unfortunately, studies that evaluate any UT-mediated toxicity are scarce 17 . Additionally, the impaired development of mdx mouse body mass and muscle function after UT administration may be related to worsening muscle injury, as studies clearly demonstrate that decreased CK levels are associated with improvements in muscle function in DMD patients and mdx mice 18 .

Studies have demonstrated that osteopontin is an immunomodulator and regulates TGF-β1 expression. Mdx mice with deletion of the Spp1 gene, which encodes osteopontin, show a marked reduction in fibrosis and improved muscle strength 19 . The administration of the oral proteasome inhibitor ixazomib produced, in the mdx mouse, a reduction in osteopontin and TGF-β associated with an improvement in the dystrophic phenotype of the DIA muscle 20 .
Another study showed increased levels of TGF-β and OPN in the muscles of GRMD dogs, and TGF-β was shown to be positively associated with the degree of sartorius muscle hypertrophy 21 . Our findings do not demonstrate this relationship in the DIA muscle of animals that were treated with the aqueous extract of UT. The reduction of TGF-β1 seems to be associated with proliferative effects, considering the increase in the number of regenerated cells. Elevated levels of TGF-β1 in the DIA muscle were shown to be accompanied by the development of fibrosis and muscle wasting in mdx mice 22 . Another study showed that TGF-β1 immunomodulation, despite reducing the proliferation of connective tissue in the DIA muscle of mdx mice, increased the inflammatory response in these animals 23 . Therefore, it is too early to say that the reduction in TGF-β1 observed in the present study corresponds to a beneficial effect of UT on the dystrophic muscle.

The muscles of mdx mice suffer intense necrosis around the fourth week of life, followed by muscle fibre regeneration and hypertrophy 21 . Given the recurring cycles of degeneration/regeneration, this condition progresses with muscle atrophy and elevated fibrosis 24 . As a method for evaluating muscle trophism, determining the minimum Feret's diameter is an advantageous analysis due to the small variability in this parameter 25 . Studies have shown that prevention of muscle atrophy is associated with the effects of preventing myonecrosis and improving muscle function 26 . These parameters correspond with our findings in the DIA muscle, correlating the increase in centrally nucleated fibres with the increase in fibres of smaller diameter.

Myostatin belongs to the TGF-β superfamily, and it is an important factor that regulates muscle growth 27 . Increased myostatin inhibits the expression of myogenic modulators, including MyoD and myogenin, and this effect directly influences satellite cell proliferation and differentiation and thus impairs muscle regeneration and contributes to muscle atrophy 27 . This phenomenon was especially apparent in the TA muscle, which demonstrated increased myostatin expression in association with a smaller regenerated fibre population and the anticipation of muscle atrophy in the mdx mouse. This finding is consistent with the literature that describes myostatin as a negative modulator of muscle regeneration 28 . Data from the DIA muscle underscore this feedback loop; regeneration was not impaired in this muscle because the myostatin level was low. Our findings are consistent with Pasteuning-Vuhman et al. 29 , who demonstrated an increase in regenerated areas and a significant reduction in muscle fibre size in mdx mice following treatment with a functional blocker of Alk4, the main receptor of the myostatin signaling pathway.
The worsening of the damage caused by UT administration in mdx mice was also denoted by increased TNF-α expression, a cytokine that promotes very pronounced anti-tumor and immunological responses in the mdx mouse and DMD patients 8 . UT treatment did not decrease TNF-α expression, as seen in other systems, possibly because increased muscle injury per se may contribute to maintenance of the elevated TNF-α expression. The pentacyclic indole alkaloid mitraphylline is the main bioactive secondary metabolite of UT extracts and is associated with cytotoxic effects on cancer cells in anti-proliferative, proapoptotic and immunoregulatory terms 14,30,31 . The UT extract increased the production of ROS in HepG2 cells, which resulted in a decrease in the level of GSH, leading to apoptosis of these cells through the activation of caspase-3 and caspase-7 32 . In certain cell types, ROS are the main mediators of the pathways that regulate the expression of TNF-α, through the modulation of kinases of the redox system, activation of transcription factors, intracellular alteration of Ca2+ and gene expression 33 . It is possible that in the present study the dose of 200 mg/kg 14 was directly associated with the antiproliferative effect of mitraphylline via a mechanism of overproduction of ROS, contributing to the maintenance of high expression of TNF-α 39 . Furthermore, Ermolova et al. 34 observed that the administration of a TNF-α blocker in the skeletal and cardiac muscle of mdx mice reduces the expression of myostatin and improves the phenotype of the disease, reinforcing our findings.

Oxidative stress is considered a primary event of DMD 35 . Studies show that the UT ethanol extract has a more effective antioxidant action compared to the aqueous extract 36-38 . The use of a hydroalcoholic extract of UT bark showed antitumor and antioxidant effects by partially regulating redox and metabolic homeostasis 39 . In contrast, Navarro-Hoyos et al. 40 suggest that the leaves constitute the most suitable part of UT for use in the elaboration of standardized phenolic extracts, because they are rich in proanthocyanidins and have high antioxidant activity in either aqueous or ethanol extracts. Based on the above, we believe that the use of the aqueous extract of UT root bark at a dose of 200 mg/kg was decisive for the results obtained in the present study, since it may lack some of the antioxidant and anti-inflammatory properties reported in studies with the ethanol extract of UT root or other parts of the plant; new studies are essential to establish its real effect.

Conclusion

In conclusion, the administration of the aqueous extract of UT root bark at a dose of 200 mg/kg did not reduce important molecular, biochemical and morphological markers in mdx mouse muscle. Despite the apparent evidence of its efficacy in immunomodulatory diseases and other experimental models, UT root bark at the chosen period and dose actually exhibited an aggravating potential for mdx mouse dystrophic muscles. Future studies are needed to determine UT extract pharmacokinetic patterns, to weigh potential modulations, and to identify appropriate therapeutic strategies to enable its use as a possible therapy for DMD.

Figure 1: (A) Body weight (g) and (B) muscle strength in untreated (blue line) and UT-treated (red line) mdx mice. All values are expressed as mean ± standard deviation (SD). * compared with untreated mdx group (unpaired Student's t-test). (C) Serum creatine kinase (CK) levels for untreated and UT-treated mdx mice. * compared with untreated mdx mice (unpaired Student's t-test).
Figure 2: (A) Tibialis anterior (TA) muscle cross-sections showing fibres with central nuclei in untreated and UT-treated mdx mice (arrow). Scale bar is 100 µm. (B) The graph shows the analysis of the minimum Feret's diameter in the TA muscle of the untreated (blue line) and UT-treated (red line) mdx mice. (C and D) The graphs show the percentage of centrally nucleated muscle fibres (C) and the cell count (D) at 10x magnification in the TA muscle of untreated and UT-treated mdx mice. qRT-PCR results are shown for myostatin (E), TGF-β1 (F), osteopontin (G) and TNF-α (H) expression in TA muscle. All values expressed as mean ± standard deviation (SD). * compared with untreated mdx mice (unpaired Student's t-test).

Figure 3: (A) Diaphragm (DIA) muscle cross-sections showing centrally nucleated fibres in the untreated and UT-treated mdx mice (arrow). The scale bar is 100 µm. (B) The graph shows the analysis of the minimum Feret's diameter in the DIA muscle of the untreated (blue line) and UT-treated (red line) mdx mice. (C and D) The graphs show the percentage of centrally nucleated muscle fibres (C) and the cell count (D) at 10x magnification in the DIA muscle. qRT-PCR results are shown for myostatin (E), TGF-β1 (F), osteopontin (G) and TNF-α (H) expression in DIA muscle. All values expressed as mean ± standard deviation (SD). * compared with untreated mdx mice (unpaired Student's t-test).

Figure 4: Graphical abstract - Representation of the effects of UT on mdx mice. The illustrations are available free from freepik.com®.

Table 1: Forward and reverse primers used in quantitative real-time PCR analyses.
Four-dimensional dosimetry validation and study in lung radiotherapy using deformable image registration and Monte Carlo techniques

Thoracic cancer treatment presents dosimetric difficulties due to respiratory motion and lung inhomogeneity. Monte Carlo and deformable image registration techniques have been proposed for use in four-dimensional (4D) dose calculations to overcome these difficulties. This study validates 4D Monte Carlo dosimetry against measurement, compares the 4D dosimetry of different tumor sizes and tumor motion ranges, and demonstrates how dose-volume histograms (DVH) differ with the number of respiratory phases included in the 4D dosimetry. BEAMnrc was used for the dose calculations, while an optical flow algorithm was used for deformable image registration and dose mapping. Calculated and measured doses of a moving phantom agreed within 3% at the center of the moving gross tumor volume (GTV). 4D CT image sets of lung cancer cases were used in the analysis of 4D dosimetry. For a small tumor (12.5 cm3) with a motion range of 1.5 cm, reduced tumor volume coverage was observed in the 4D dose with a beam margin of 1 cm. For large tumors and tumors with a small motion range (around 1 cm), the 4D dosimetry did not differ appreciably from the static plans. The DVH analysis shows that the inclusion of only the extreme respiratory phases in 4D dosimetry is a reasonable approximation of all-phase inclusion for lung cancer cases similar to the ones studied, which reduces the calculation burden in 4D dosimetry.

Introduction

Monte Carlo simulation is the most accurate radiation dose calculation algorithm in radiotherapy [1,2]. With the advent of increasingly fast computers and optimized computational algorithms, Monte Carlo methods promise to become the primary dose calculation methodology in future treatment planning systems [3-6]. Thoracic tumor motion could introduce discrepancies between the dose as planned and as actually delivered, both to the tumor and to the surrounding normal lung [7]. Incorporating Monte Carlo methods into 4-dimensional (4D, 3 spatial dimensions plus time) dosimetry and treatment planning yields the most accurate dose calculations for thoracic tumor treatments [8,9]. To generate a 4D Monte Carlo dose calculation, it is necessary to calculate the dose on CT image sets derived from different time points across the respiratory cycle. These can then be fused together to calculate cumulative doses. Deformable image registration is an integral part of this process. It provides a voxel-to-voxel link between the multiple respiratory phases of a 4D CT image set so that the dose distribution on each phase can correctly be summed to give a path-integrated average dose distribution [10,11]. Deformable image registration across the various phases of a 4D CT image set has become a new focus of study [10,11]. In this study, 4D Monte Carlo dosimetry is presented. The 4D cumulative point dose in a moving phantom was compared with measurement. Clinical lung cancer cases were studied with the goal of determining under which conditions 4D Monte Carlo dosimetry likely differs from a static plan and how many respiratory phases need to be included in the 4D dose calculation.

CT-Based Treatment Planning

A total of four CT simulation image sets were used in this study. Two were obtained from actual patients: two lung cancer patients underwent 4D CT scanning (Case 1 and Case 2).
These 4D CT data sets comprised a total of 10 CT scans per patient, taken at equally-spaced intervals across the entire respiratory cycle (phase-based sorting in 4D CT reconstruction). There were 93 and 94 slices in each respiratory phase of the two 4D CT cases, respectively. The GTV moved about 1.5 cm during the respiratory cycle in Case 1 and 1.0 cm in Case 2, predominantly in the superior-inferior (SI) direction. The GTV volume for Case 1 was 12.5 cm3 (about 3 cm in diameter), while for Case 2 it was 159.1 cm3 (about 7 cm in diameter). For the last two cases, 4D CT image sets were generated from a moving phantom with two different motion ranges, to compare the 4D cumulative doses with actual measurements. The 4D scans of the moving phantom contained 90 slices in each of the ten respiratory phases. All 4D CT imaging was performed on a 16-slice Big Bore CT scanner (Philips Medical Systems, Andover, MA). The transaxial slice resolution was about 1 mm × 1 mm and the slice thickness was 3 mm for all scans.

The moving phantom was custom-designed (Figure 1). Phantom motion was controlled by a motor with adjustable rotational frequency. A rotating wheel was connected to the motor. The wheel contained holes at various distances from the axis of rotation, which thereby determined the magnitude of the range of the sinusoidal motion of the phantom, the only motion pattern the table can perform. The phantom container was made of acrylic. Cork blocks with a density of 0.26 g/cm3 were placed inside the acrylic container to simulate normal lung. An acrylic rod of 3 × 3 × 2 cm3 was placed in the center of the cork blocks to simulate a tumor. The center of this rod contained a 0.04 cc Scanditronix CC04 ion chamber (active length 3.6 mm, inner radius 2 mm) to measure the point dose. The motion range was set to 1.5 cm (Case 3) or 3 cm (Case 4) at a frequency of about 18 cycles per minute to simulate respiration. The same motion pattern was used during both the 4D CT scan and treatment delivery.

A treatment plan was generated for each of the four CT data sets. Simple 3D-conformal plans were utilized. All the plans were calculated for a Varian Clinac 2100EX linear accelerator (Varian Medical Systems, Palo Alto, CA). Photon beams of 6 MV in energy were used. The margin from gross tumor volume (GTV) to block edge was 0.5 cm (Case 2) or 1 cm (Cases 1, 3 and 4). An MLC was used for the conformal plans in Cases 1 and 2. Open 5 × 5 cm2 beams were used in the phantom study cases due to the regular shape of the acrylic rod that simulated the GTV. For Case 1 and Case 2, the tumors were contoured on the maximum inspiration phase of the respective 4D CT image sets and the isocenters were set accordingly. A 3D plan was then generated for each patient. For Case 1, a wedged 3-beam 3D plan was created. A wedged two-field 3D-conformal plan was designed for Case 2. The respective treatment plans were then copied over from the maximum inspiration scan to each of the other nine phases of the CT scan for that patient. A Monte Carlo simulation was used to calculate the dose distribution on each phase. The dose distributions from all the other phases were mapped to the maximum inspiration phase using deformation matrices generated via deformable image registration between each of the other phases and the maximum inspiration phase. A 4D cumulative dose distribution was created from an equally-weighted average of the dose distributions. This 4D Monte Carlo dosimetry method was applied to the two patient cases over all ten phases (vide infra).
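In symbols, the equally-weighted accumulation can be written as follows, where D_i is the Monte Carlo dose computed on phase i, T_i is the deformable mapping from the reference (maximum inspiration) phase to phase i, N = 10 is the number of phases, and x is a voxel position on the reference phase (this notation is ours, introduced here for clarity):

D_4D(x) = (1/N) Σ D_i(T_i(x)), summed over i = 1, ..., N

Equal weights correspond to phase-based 4D CT sorting, in which each reconstructed phase represents an equal fraction of the respiratory period. To make the dose-mapping step concrete, the following sketch shows one way the per-phase dose grids and deformation fields can be combined into the cumulative dose. It is a simplified illustration with assumed variable names and nearest-neighbour lookup; an actual implementation would interpolate trilinearly and handle the dose grid geometry of DOSXYZnrc.

#include <math.h>

/* Map one phase's dose grid onto the reference phase through the
 * deformation field (dvf), accumulating an equal per-phase weight.
 * dvf holds, for each reference voxel, the displacement (in voxels)
 * to the corresponding point in this phase, three components per voxel. */
void accumulate_phase(const float *phase_dose, const float *dvf,
                      float *dose_4d, int nx, int ny, int nz, int nphases)
{
    for (int k = 0; k < nz; k++)
    for (int j = 0; j < ny; j++)
    for (int i = 0; i < nx; i++) {
        long v = ((long)k * ny + j) * nx + i;
        int ii = i + (int)floorf(dvf[3 * v + 0] + 0.5f);
        int jj = j + (int)floorf(dvf[3 * v + 1] + 0.5f);
        int kk = k + (int)floorf(dvf[3 * v + 2] + 0.5f);
        if (ii < 0 || ii >= nx || jj < 0 || jj >= ny || kk < 0 || kk >= nz)
            continue;                  /* displaced point fell off the grid */
        long w = ((long)kk * ny + jj) * nx + ii;
        dose_4d[v] += phase_dose[w] / nphases;   /* equal phase weighting */
    }
}

Calling this once per phase, with a zero-initialized dose_4d and the identity field for the reference phase itself, leaves dose_4d holding the equally-weighted 4D cumulative dose.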
A dose-volume histogram (DVH) was obtained for each of the respiratory phases, and the 4D integrated DVH was obtained from the 4D cumulative dose distribution. For the moving phantom cases, a lateral-opposed two-beam plan was designed to cover the simulated tumor during the maximum inspiration phase. These beams were copied to the nine other phases of the CT scans and the doses were calculated using Monte Carlo methods (vide infra). The 4D cumulative doses were generated. Table 1 lists the tumor sizes, motion ranges, and beam margins for all the cases studied. The beam margins were purposely set smaller than the motion ranges to gauge the coverage-loss effects.
Monte Carlo Dose Calculation
BEAMnrc [1] was used to simulate the linear accelerator. This is a Monte Carlo simulation application based on EGSnrc [12], a software package designed for Monte Carlo simulation of coupled electron-photon transport. The simulated incident electron beam bombarding the tungsten target was a 6 MeV pencil beam with a 2-dimensional Gaussian distribution of full width at half maximum (FWHM) of 0.1 cm [1,12]. For each treatment beam, the linear accelerator was simulated to generate a phase-space file containing information about each particle exiting the treatment head of the machine, as it existed at 60 cm from the electron source. The percentage depth dose curves and profiles in a water phantom from Monte Carlo simulations matched the measured data within 2% for most of the low-gradient dose regions and slightly over 2% at the shoulders of one of the profiles. In regions of build-up or penumbra, the distance between calculated and measured curves was within 1 mm. Another EGSnrc-based program, DOSXYZnrc [13], was used for dose calculations in the patient/phantom through the various respiratory phases. Additionally, the CT-to-phantom converter code, ctcreate [14], was used to convert the patient/phantom CT image data to CT phantom data that DOSXYZnrc could use. For the patient cases (Cases 1 and 2), AIR, LUNG, ICRUTISSUE, and ICRP-BONE were used for air, lung tissue, soft tissue, and bone media, respectively, based on their CT number ranges, while for the phantom cases (Cases 3 and 4), AIR, LUNG, and PMMA were used for air, cork, and acrylic, respectively. Dosimetrically, cork is equivalent to lung tissue [15,16]. The dose grid size used for this study was 2 × 2 × 3 mm³, which is coarser than the CT image resolution of 1 × 1 × 3 mm³. Each CT slice was therefore sub-sampled from 512 × 512 pixels to 256 × 256 pixels to match the Monte Carlo dose grid size before the CT-to-phantom conversion. The phase-space files were then used as the particle source to calculate the dose distribution for each respiratory phase in the patients and phantom. In order to achieve acceptable statistical uncertainties in the target volume (about 1%), the particles stored in the phase-space files were recycled 4 times. No specific variance reduction technique was applied. The cutoff energies for electrons (ECUT) and for photons (PCUT) were 0.7 and 0.01 MeV, respectively. Dose calculation for one respiratory phase took about 20 hours of CPU time on a 2.66 GHz single-processor personal computer with 2 GB RAM, running Linux.
Deformable Image Registration
The optical flow method of deformable image registration was then applied to calculate the deformation matrices between the CT images from the different respiratory phases. These matrices were used to map the dose distributions from the various respiratory phases to an average integral dose.
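The registration described next rests on the Horn-Schunck optical flow formulation. As a hedged illustration only, a minimal 2D version of the classical Jacobi iteration looks like the following; the study's actual program is a 3D extension with its own gradient estimators and stopping criteria, and the function name and parameters here are illustrative.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal 2D Horn-Schunck optical flow between two images.

    Returns per-pixel displacements (u, v) that minimize the brightness-
    constancy error plus an alpha-weighted smoothness penalty.
    """
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Spatial gradients of the first image and the temporal difference.
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def local_mean(f):
        # 4-neighbor average (periodic boundaries, for brevity).
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = local_mean(u), local_mean(v)
        # Jacobi update derived from the Horn-Schunck Euler-Lagrange equations.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```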
The 3D optical flow program was based upon the 2D Horn and Schunck algorithm [11,17]. For typical 4D CT image sets with a sub-sampled slice resolution of 2 × 2 mm²/pixel, each deformable image registration required about three minutes on a personal computer with a single 2.66 GHz CPU and 4 GB RAM. Thus, for a respiratory cycle divided into 10 phases, about half an hour was required to calculate all the deformation matrices.
Moving Phantom Study
Absolute dose was used in the 4D dosimetry of the moving phantom by normalizing the dose matrix to the reference dose, which was the maximum value of the central depth dose of a 10 × 10 cm² field at 100 cm source-to-surface distance (SSD). This absolute dose conversion assumed that the Monte Carlo calculated reference dose was 1 cGy per monitor unit (MU), which agreed with the accelerator calibration. With different motion ranges, the central point dose measurements and 4D dose calculations showed an agreement better than 3%. With a tumor motion range of 3 cm (Case 4), the measured central point dose for a 5 × 5 cm² field demonstrated a 27.5% ± 0.7% drop compared to the static phantom case, while the 4D dosimetry calculation showed a 25.0% ± 1.1% drop. With a motion range of 1.5 cm (Case 3), the central point dose was equivalent for both the phantom measurement and the 4D dose calculation, because the central point was well covered by the treatment beams, given the relatively short motion range.
Lung Tumor Treatment Plans
For Case 1, the dose distribution calculated on the maximum expiration phase was mapped onto the maximum inspiration phase (Figure 2C-D). The distribution of the mapped dose is shifted inferiorly towards the diaphragm, and the tumor is closer to the superior aspect of the isodose distribution (Figure 2D). The reason for this is that the diaphragm and tumor move upward in the maximum expiration phase while the beams remain fixed. Consequently, the dose distribution on the maximum expiration phase moves inferiorly relative to the diaphragm or tumor. Therefore, after the dose distribution is mapped onto the maximum inspiration phase, the isodose distribution skews inferiorly. Figure 3 shows a DVH of the GTV coverage at various phases of the respiratory cycle together with the 4D cumulative dose DVH. At the prescribed dose of 70 Gy, the static plan shows 95% GTV coverage in the maximum inspiration (0%) phase, while the average dose plan shows tumor coverage of only 80%. The worst phase (50% or 70% in the figure) shows slightly better than 70% coverage of the GTV. In this example, the GTV moved about 1.5 cm in the SI direction. With a beam margin of 1 cm, tumor coverage was clearly reduced. In general, the DVH of the 4D cumulative dose distribution from the mapped doses lies between the optimized static dose DVH at the maximum inspiration (0%) phase and the maximum expiration (50%) phase. However, at times, it can exceed or trail the curve for any individual phase. In Figure 3, at the low-dose portion of the curve, around 66 Gy, the volume covered by the average dose is higher than that for any of the static respiratory phases. Correspondingly, at the high-dose tail (above 75 Gy), the average dose curve is lower than that for any individual respiratory phase. This behavior of the DVH curves in Figure 3 indicates that the 4D cumulative dose reduced the magnitude of the hot/cold spots present in individual static plans. When evaluating a treatment plan, one also needs to consider the DVH curves for the normal structures.
In particular, different portions of the lung move in and out of the treatment field, which causes the 4D cumulative lung DVH to differ from that of any given respiratory phase. This is evident in Figure 4. We next investigated how many respiratory phases must be included in the 4D calculations to reasonably estimate the average dose to the GTV as calculated when incorporating all ten respiratory phases. Figure 5 shows a comparison of several GTV DVH curves from Case 1, including curves from the extreme static phases and the lowest GTV coverage phase (30%) as references.
(Figure 3 caption: In the static plan from the 0% phase, the GTV coverage at the prescribed dose of 70 Gy is about 95%, while it is 80% for the 4D cumulative dose.)
(Figure 4 caption: Left lung DVHs from various static image sets (0%, 50%, 90%) and the 4D cumulative DVH (Case 1). For the 50% phase, the diaphragm started moving superiorly into the field, causing less lung to be irradiated at this phase and thereby reducing the lung DVH.)
The calculated average doses included the doses as mapped from a variable number of the respiratory phases, using deformable image registration, ranging from two (0% and 50%), to five (0%, 20%, 50%, 70%, and 90%), to all ten phases. By observation, the inclusion of increasing numbers of respiratory phases in the 4D dose calculation improves agreement with the calculation derived from using all ten phases. However, considering that both Monte Carlo simulation and deformable image registration are time-consuming calculations, the DVH of the cumulative dose using just the two extreme phases is a reasonable representation of the average derived when incorporating all ten phases. In Case 2, the GTV motion is about 1 cm, but the DVH variation is much smaller than that in Case 1, even with a block margin of only 0.5 cm around the GTV (Figure 6). This can be explained by the fact that the GTV is much larger in Case 2 (159.1 cm³) than in Case 1 (12.5 cm³). This translates into a much smaller percentage volume change for Case 2 when compared to Case 1.
Discussion
In this study, the discrepancy between a point dose measurement in a moving phantom and the calculated 4D cumulative dose was less than 3%. The variance is multifactorial, representing a combination of errors from Monte Carlo simulation, image registration, and phantom measurements. In the Monte Carlo simulations, the statistical uncertainties in the high-dose regions, such as the GTV, are below 1%. Other error sources include the electron source parameters and the linear accelerator geometry and materials. Any discrepancies of these items between simulation and reality could introduce variability between calculations and measurements. As shown previously, these differences were within 2% for most cases in our study. Errors in image registration can also affect the calculated dose. There are three root causes of errors in image registration: artifacts in the 4D CTs, the aperture effect [18], and the inherent occlusion problem [19] all introduce potential sources of error. In our experience, 4D CT artifacts are the major contributing factor to errors in image registration. The 4D CT artifacts are caused by residual motion within each respiration phase, which smears details in the 4D CT images. Since accurate optical flow registration depends upon the clarity of the details in each image, any degradation in image quality can impact the quality of registration. The aperture effect is introduced in regions of flat intensity within the images.
When there is no variation in intensity within a region, the voxel-to-voxel correspondence becomes ambiguous. Thus the registration may have larger errors in low-contrast regions. For human CT data, detailed anatomic structures, such as veins, help reduce the aperture effect. Our prior research has shown that the average magnitude of this error is smaller than an image voxel in the thoracic region [20]. Another study, by Zhong et al [21], showed that the average error in lungs by Demons, another deformable image registration algorithm similar to optical flow, was around 0.7 mm, but larger in the low-gradient prostate region.
(Figure 5 caption: By incorporating additional phases, the accuracy of the dose calculation improved. However, the use of just two phases (0% and 50%, the maximum inspiration and maximum expiration, respectively, based on diaphragm motion) provides a reasonable approximation. The dose difference for the same volume coverage between each of the three averaged DVH curves is less than 0.5 Gy. The lowest GTV coverage occurred at the 30% phase, which is shown in the figure for reference.)
(Figure 6 caption: This is in contrast to Case 1, which had a similar range of tumor motion but a tumor which measured only 12.5 cm³. Consequently, the DVH curve for the average dose does not differ much from the static DVH curves (Figure 3).)
Occlusion may cause motion discontinuity in other image registration applications, such as daily patient CT registration when rectal filling varies. For 4D CT images, occlusion is not a problem, since there is no topological change between the respiratory phase images. The Monte Carlo method applied in this study is a classical full Monte Carlo method, and the calculation time was long for each case. In recent years, various techniques have helped to increase the computational efficiency of Monte Carlo simulation and reduce its calculation time [3,[22][23][24]. Using multiple-source models instead of simulating phase-space files would also reduce the calculation time significantly [24]. By applying these modifications, some simpler and faster Monte Carlo methods have already been implemented in commercial treatment planning systems or demonstrated to be reasonable for clinical application [25][26][27]. With faster computers and highly efficient Monte Carlo algorithms, multiphase Monte Carlo dose calculations have been demonstrated to be feasible for clinical applications [27]. If fewer phases are used for 4D dose calculations, the workload is correspondingly reduced. Another way to further reduce the computation time is to lower the number of simulation histories in each respiration phase. Even with a higher statistical uncertainty in each individual phase, the statistical uncertainty of the 4D cumulative dose remains at an acceptable level [8]. The 4D Monte Carlo dose calculation can be reduced to a single calculation on the average CT if the simplified 4D dose accumulation method proposed by Glide-Hurst et al [28] is applied. In our 4D test cases, the method noticeably altered the dose calculation compared to static plans only when the tumor was small and the respiratory motion was comparatively large. Vinogradskiy et al [29] demonstrated by measurement that 4D dose calculations provide greater accuracy than 3D dose calculations in heterogeneous dose regions.
Rosu et al [30] studied how many phases are needed in the 4D cumulative dose calculation for various clinical end points and concluded that results using only the two extreme phases agreed well with those of full-phase inclusion for the four cases studied. This study confirmed their conclusion with Monte Carlo calculations. The treatment plans generated for this study were not intended for clinical use. The phase for the original plan was picked at random between the two extreme phases, and the isocenter was placed on the GTV center of the corresponding phase. The margins in the plans were purposely set small compared to the motion ranges so that target volume coverage loss, and thus the DVH variation of the target volume versus respiratory phase, was more pronounced. The conditions used in our study therefore tended to exaggerate coverage loss and so worked against the above conclusion. The conclusion can thus be applied with greater confidence to real clinical cases, which usually have better coverage. However, due to the limited number of cases studied, this conclusion should not be applied to cases of larger or irregular motion. When large motion is reduced to within a certain range (< 1 cm) by applying a motion-reducing technique, such as the abdominal compression often used in stereotactic lung treatments, this conclusion should apply as long as the beam margins are large enough for the motion ranges. Monte Carlo methodology provides more accurate dose calculation across an inhomogeneous medium such as the lung [31]. For some extrathoracic sites, such as the abdomen, respiratory motion of tumors and normal structures is not insignificant [32]. Therefore, 4D dose calculations might also prove useful in the treatment of abdominal tumors. When the lung or any other significantly inhomogeneous medium is not involved in the treatment volumes, Monte Carlo methods may be replaced by other, faster dose calculation algorithms in 4D dose calculations with acceptable accuracy.
Conclusions
With the combination of Monte Carlo simulation and the optical flow method, 4D dosimetry was shown to be accurate based on point-dose measurement in a moving phantom. A Monte Carlo 4D dose calculation provides a planned dose distribution that is closer to the delivered dose than a static plan does, especially when the dose variation between respiratory phases is large. Based on the cases studied, large dose variation between respiratory phases is more likely for small tumor volumes with relatively large motion. The inclusion of only the two extreme respiratory phases in the 4D cumulative dose calculation is a reasonable approximation to all-phase inclusion for cases similar to the ones studied.
NF-κB as an Important Factor in Optimizing Poxvirus-Based Vaccines against Viral Infections
Poxviruses are large dsDNA viruses that are regarded as good candidates for vaccine vectors. Because the members of the Poxviridae family encode numerous immunomodulatory proteins in their genomes, it is necessary to carry out certain modifications in poxviral candidates for vaccine vectors to improve the vaccine. Currently, several poxvirus-based vaccines targeted at viral infections are under development. One of the important aspects of the influence of poxviruses on the immune system is that they encode a large array of inhibitors of the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), which is the key element of both innate and adaptive immunity. Importantly, the NF-κB transcription factor induces the mechanisms associated with adaptive immunological memory, involving the activation of effector and memory T cells upon vaccination. Since poxviruses encode various NF-κB inhibitor proteins, before the use of poxviral vaccine vectors, modifications that influence NF-κB activation and consequently affect the immunogenicity of the vaccine should be carried out. This review focuses on NF-κB as an essential factor in the optimization of poxviral vaccines against viral infections.
Introduction
Poxviridae is a family of dsDNA viruses. It is divided into two subfamilies: Chordopoxvirinae, the viruses of vertebrates, and Entomopoxvirinae, the viruses of insects. The Chordopoxvirinae subfamily includes 18 genera [1]. Poxviruses are represented by numerous human and animal pathogens. Among them, variola virus (VARV), an orthopoxvirus and human pathogen, is the causative agent of smallpox, a disease that had caused over 300 million deaths worldwide by the late 1970s, before the global smallpox eradication program was completed. In the global smallpox eradication program, vaccinia virus (VACV), a zoonotic pathogen belonging to the Orthopoxvirus genus, was used [2,3]. Other members of the Poxviridae family, such as orf virus (ORFV) and goatpox virus (GTPV), which represent the Parapoxvirus and Capripoxvirus genera, respectively, may also serve as vaccines and are described in this review. With the exception of parapoxviruses, poxvirus virions have a brick shape; the virions of parapoxviruses are cocoon-shaped. The virions of parapoxviruses and of the other members of the Poxviridae family have dimensions of 260 × 160 nm and 350 × 250 nm, respectively [4,5]. Depending on the number of membranes surrounding the virion, two infectious forms of poxviruses are observed. Mature virus (MV), which contains a tubular nucleocapsid surrounded by a biconcave core wall and proteinaceous lateral bodies, is enclosed by a single proteolipid membrane bilayer. In turn, extracellular enveloped virus (EEV) is surrounded by an additional outer membrane.
One of the advantages that make poxviruses good vaccine vectors is that the cytoplasmic replication cycle of these vectors eliminates the risk of integration into the host genome and persistence within the host. Importantly, poxviral vaccines are easy to store, especially when freeze-dried. The thermostability of these vaccines can also be ensured by using sugar-glass technology. Additionally, the cost of poxviral vaccines is low and their administration is needle-free [12][13][14]. Although poxviruses are regarded as promising vaccine tools, certain challenges limit the design of poxviral vaccines. When using VACV and other poxvirus-based vaccines, it is desirable to achieve enhanced immunogenicity and/or virus attenuation.
This is particularly important for improving the safety profile of the vaccine. Because many of the immunomodulatory genes and cellular targets of the poxviruses remain uncharacterized, there are still many opportunities for virus modification in order to improve vaccine efficacy by inducing stronger immunological memory. In addition, reductions in dosage and simpler administration regimens would be beneficial as well [7]. One of the strategies employing poxvirus vaccines is prime-boost vaccination, in which poxviral vectors that enhance T cell responses as boosters are combined with other vectors. On the other hand, when used as primers together with protein and adjuvant, poxviruses improve B cell responses. Furthermore, the optimization of antigen expression can be based on mosaic immunogen sequences [13]. When modifying poxviral vaccine vectors, immunomodulatory genes should be removed in order to enhance immunogenicity [13]. Poxviruses, which express a wide range of host response modifiers influencing the cellular signaling pathways involved in immunity and inflammation, share multiple mechanisms of host evasion. Since Poxviridae family members encode a number of cellular signaling inhibitors, this review describes the influence of poxvirus-based vaccines on the NF-κB transcription factor [9]. Several lines of evidence indicate the importance of NF-κB in the development of poxvirus-derived vaccines, both veterinary and human antiviral vaccines. Therefore, in this review, we focus on the benefits of certain modifications of poxviral vaccine vectors, how these modifications can affect NF-κB signaling in different cells and hosts, and the possible mechanisms of immune response modulation that can be shared by individual poxvirus genera. We describe the VACV-, ORFV-, and GTPV-based vaccines, which can be used against viral infections.
Poxviral Vectors for Vaccine Applications
The vaccine used in the global smallpox eradication program was based on several strains of VACV. For instance, in the United States, the New York City Board of Health (NYCBH) and Lister strains were used for vaccination against smallpox, while in Europe, the Lister, Bern, Paris, and Copenhagen (VACV-COP) strains were applied. The first-generation antismallpox vaccines were propagated in the skin of calves and other animals, while the second-generation VACV-based vaccines were grown in tissue culture and chicken embryos instead of live animals. Unfortunately, the vaccines generated in cell cultures are not sufficiently safe, and therefore the use of second-generation antismallpox vaccines is limited [3]. Recently, it has been shown that chicken embryonic stem cells (cESCs) may serve as an alternative source for the propagation of poxvirus vaccine vectors. The idea behind the use of cESCs rather than mammalian ESCs is linked to ethical issues and safety concerns, the most important of which is that cESCs lack transforming oncogenes and adventitious agents [26]. Despite the eradication of smallpox, the risk of reoccurring VARV infections cannot be eliminated, because of the bioterrorist threat and the possible de novo synthesis of the virus. Therefore, the pathogenesis of orthopoxvirus diseases is still of interest to researchers [27,28].
Although the VACV-based vaccines used in the global smallpox eradication program were effective, some adverse effects of vaccination, including postvaccinal encephalitis, generalized vaccinia, progressive vaccinia, and eczema vaccinatum, were observed in immunocompromised individuals and patients with skin conditions. To overcome these, modified VACV Ankara (MVA)-Bavarian Nordic (MVA-BN), a third-generation attenuated antismallpox vaccine, has been introduced. This vaccine is approved in Canada (Imvamune) and the European Union (Imvanex) [29,30]. The currently used VACV-based vaccines, which are nonreplicating and attenuated, such as the MVA-BN-vectored encephalitic alphavirus vaccine, target the biothreat viruses. Another VACV-derived vaccine, RABORAL V-RG, an antirabies vaccine expressing the rabies virus (RABV) glycoprotein gene (V-RG), has been used in Europe and North America to vaccinate foxes and raccoons [30]. Additionally, VACV-based vaccine candidates have been demonstrated to protect against emerging viral diseases, such as chikungunya virus (CHIKV) disease [41,42] and yellow fever [43], in preclinical animal models. The Sementis Copenhagen Vector (SCV) is a new multiplication-defective VACV-COP-derived vaccine vector with a targeted deletion of the D13L gene, which encodes the D13 protein essential for viral assembly. This new vector has recently been successfully tested in nonhuman primates as a vaccine against Zika and chikungunya viruses [41].
NF-κB Signaling
One of the key factors involved in the proper induction of antiviral immunity is NF-κB. It constitutes a family of dimeric transcription factors, which regulate the expression of numerous genes involved in the cell cycle, apoptosis, and immunity. The NF-κB family consists of five proteins: RelA/p65, RelB, c-Rel, NFκB1 p105/p50, and NFκB2 p100/p52. The NF-κB dimer most commonly detected in the cytoplasm of unstimulated cells is composed of the RelA and p50 subunits [50]. The RelA/p50 heterodimer remains in the cytoplasm due to the activity of IκBα, which masks the nuclear localization sequences (NLSs) of NF-κB [51]. The classical NF-κB signaling pathway is induced by proinflammatory cytokines, such as interleukin-1β (IL-1β), IL-18, and TNF-α, and by various ligands of pattern recognition receptors (PRRs), which are represented by retinoic acid-inducible gene-I (RIG-I) and Toll-like receptors (TLRs). In the NF-κB signaling cascade, the cellular receptors cooperate with adapter molecules and induce the cellular pathways that activate the transcriptionally active dimers [52]. Upon the stimulation of NF-κB signaling, transforming growth factor (TGF)-β-activated kinase 1 (TAK1) activates IKK, the IKKβ subunit of which triggers IκBα phosphorylation at Ser32 and Ser36. This event results in the recognition of IκBα by the S-phase kinase-associated protein 1 (Skp1)-Cullin 1-F-box E3 ubiquitin ligase complex containing the F-box protein β-transducin repeat-containing protein (SCFβ-TrCP). Conjugation of phosphorylated IκBα with Lys48 (K48)-linked polyubiquitin chains by SCFβ-TrCP results in 26S proteasome-mediated IκBα degradation and the release of RelA/p50 dimers. These dimers translocate to the nucleus, where they bind DNA and initiate the transcription of target genes. The E3 ubiquitin ligase complex is also involved in the proteasomal processing of p105 to p50 [50,53,54] (Figure 1).
On the other hand, the noncanonical NF-κB signaling triggered by members of the TNF superfamily leads to the activation of NF-κB-inducing kinase (NIK), which then activates IKKα. IKKα, in turn, phosphorylates the C-terminal portion of the p100 precursor protein, which retains RelB in the cytoplasm due to its IκB activity. Following the phosphorylation of p100 at Ser866 and Ser870, the IκB-like C-terminal portion of this protein is ubiquitinated, leading to the generation of the active p52 NF-κB subunit. RelB/p52 dimers translocate to the nucleus and initiate the transcription of target genes [55]. In general, the canonical NF-κB signaling is responsible for the regulation of innate immunity [56], whereas the noncanonical NF-κB activation pathway regulates adaptive immune responses. However, there exist regulatory mechanisms for these two signaling pathways, as well as crosstalk between them [55,57]. The modulation of NF-κB signaling is attributed to viral pathogens, one excellent example of which is the viruses belonging to the Poxviridae family, which encode multiple immunomodulatory proteins; these proteins affect the components of NF-κB signaling and therefore disrupt the antiviral innate response [52,58]. Selected NF-κB inhibitors of VACV, ORFV, and GTPV, which may be relevant to the efficacy of poxviral vaccines, are shown in Figure 1.
Vaccinia Virus
VACV is a pathogen whose origin and natural host have not been identified so far. It was long believed that VACV infections occur due to the spread of vaccine strains into new wild hosts. However, it is now evident that VACV is transmitted via peridomestic rodents, which infect wild animals and cows. In humans, VACV can be transmitted via infected animals and is observed in milkers in areas where the virus circulates, especially in Asian and South American countries, such as Brazil. Furthermore, VACV infection of humans occurs via direct contact with crusts containing viral particles, which results in the formation of focal skin lesions on the hands and forearms [18,37,59]. After a few days, the focal lesions form pustules, leading to edema and erythema. Ulcerated and necrotic lesions appear after a maximum of 12 days of VACV infection, following which crusts develop. Within 4 weeks, the lesions disappear, but local lymphadenopathy can be observed for 20 days. In some cases, a systemic infection manifested by fever, headache, and muscle ache develops immediately after the appearance of the lesions [60].
Modified VACV Ankara
VACV is regarded as a universal vaccine carrier. However, the use of replicating VACV as an antismallpox vaccine has led to severe adverse effects in immunocompromised individuals and those with skin disorders. Currently, MVA, obtained by 570 passages of the chorioallantois VACV Ankara (CVA) strain in chicken embryo fibroblasts, remains an excellent alternative to traditional antismallpox vaccines [61]. MVA does not replicate in human cells but displays good immunogenicity as well as a good safety profile in vivo. In addition, MVA can be successfully used in immunocompromised humans [12,15,32,[62][63][64][65][66]. The ongoing clinical trials on MVA-vectored vaccines targeted at viral diseases are shown in Table 1. MVA is a good candidate for an effective and safe vaccine vector.
Nevertheless, its immunogenicity can be enhanced to improve the efficacy of the vaccine, for which the introduction of immune-stimulating genes into MVA and the reinsertion of certain VACV genes to obtain a replication-competent virus are considered beneficial [67]. Despite the expression of A46 (a Toll/IL-1-receptor signaling interference protein), B16 (an IL-1-binding protein (IL-1BP)), and K7 (a B cell lymphoma 2 (Bcl-2)-like protein), which inhibit NF-κB signaling, MVA stimulates NF-κB [61,68,69]. Studies on NF-κB signaling in MVA-infected cells have shown that early MVA protein expression in human 293T fibroblasts activates the phosphorylation of extracellular signal-regulated kinase 2 (ERK2), which, in turn, mediates the activation of NF-κB [70]. Further research revealed that MVA triggers NF-κB activation via the VACV growth factor (VGF), which interacts with the epidermal growth factor receptor (EGFR). Moreover, in 293T cells and HaCaT keratinocytes infected by MVA deprived of the early C11R gene encoding VGF, reduced activation of ERK2 and NF-κB was observed. Since keratinocytes are the immediate target of the vaccine, the prosurvival role of NF-κB would be beneficial, keeping the cells stimulated long enough for optimal induction of the immune response before cell lysis occurs [71]. Other studies on MVA revealed that, during the early phase of viral replication in human embryonic kidney 293 cells transformed with large T antigen (HEK 293T), IκBα degradation occurs before the initiation of viral replication. It has been shown that in Chinese hamster ovary (CHO), HEK 293T, and rabbit kidney 13 (RK13) cells, IκBα degradation can be inhibited by the expression of CP77, an early host-range gene of cowpox virus Brighton Red (CPXV-BR) [72]. The CP77-encoded protein is one of the ankyrin (ANK) repeat proteins [73]. In general, ANK repeats, consisting of 30-34 amino acid residues, are involved in protein-protein, protein-sugar, or protein-lipid interactions. ANK repeat proteins take part in cellular signaling, vesicular trafficking, cell cycle control, and inflammation; they are responsible for cytoskeleton integrity and regulate transcription. They are present in both eukaryotic and prokaryotic cells, such as intracellular bacteria [74]. ANK repeat proteins are not commonly expressed by viruses. However, within the Chordopoxvirinae subfamily of poxviruses, only three species of different genera lack ANK proteins. In poxviruses, these proteins are composed of multiple ANK motifs, which start from the N-terminus. The ANK motifs are followed by a non-ANK linker sequence. The C-terminus, in turn, contains a homolog of the cellular F-box sequence. The cellular F-box sequence interacts with E3 ubiquitin ligase complexes, thus allowing ubiquitination and subsequent 26S proteasome-mediated protein degradation. ANK proteins of poxviruses belonging to the Avipoxvirus, Parapoxvirus, and Orthopoxvirus genera and the Leporipoxvirus supergroup may either bind to Skp1 or interact with cellular proteins, thus acting as inhibitors of cellular signaling pathways such as NF-κB [75]. Some poxviral ANK repeat proteins, including VACV K1 [76] and myxoma virus (MYXV) M150 [77], may act as nuclear NF-κB inhibitors. Furthermore, certain poxviral ANK repeat proteins, in which the F-box domain is absent, serve as host-range proteins [75]. These are represented by the VACV K1 protein, encoded by the K1L gene [78].
It has been shown that MVA expressing the VACV Western Reserve (VACV-WR) gene K1L, whose product prevents IκBα degradation [79] and p65 acetylation [76], inhibits IκBα degradation in HEK 293T cells. Importantly, in mouse embryonic fibroblasts (MEFs), human or mouse dsRNA-activated protein kinase R (PKR) is crucial for IκBα degradation. It can be assumed that the induction of PKR by MVA may stimulate the immune response and impair virus replication, which is critical for the safety and efficacy of the vaccine [72]. Furthermore, stimulation of the immune response is necessary for the prolonged presentation of antigens and delayed viral clearance. The insertion into MVA of a 5.2-kb VACV-CVA region containing the apoptosis inhibitor and ANK repeat protein gene M1L [80] and the NF-κB inhibitors encoded by the M2L (mitogen-activated protein kinase kinase (MEK)/ERK and NF-κB inhibitor) and K1L (PKR and NF-κB inhibitor) genes decreases both apoptosis and virus-mediated NF-κB activation in antigen-presenting cells (APCs) in vivo. Moreover, the VACV-specific CD8+ T cell response is diminished in vivo after treatment with MVA/5.2 kb compared to MVA. These results contradict the findings observed in vitro, in which MVA/5.2 kb displays an immunostimulatory effect [67].
New York VACV
Another VACV-based vaccine vector, the New York VACV strain (NYVAC), derived from the VACV-COP strain, is deprived of 18 ORFs, including the host-range K1L gene, encoding an IFN antagonist protein and NF-κB inhibitor. It has been shown that both MVA- and NYVAC-infected HeLa cells display enhanced expression of NF-κB protein, degradation of IκBα, and secretion of IL-6. Furthermore, NYVAC upregulates certain NF-κB-responsive genes, including activating transcription factor 3 (ATF3). It can be concluded that ATF3 may act pro-apoptotically during NYVAC infection. However, the ectopic expression of the VACV-WR K1L gene in NYVAC-infected cells induced apoptosis but inhibited NF-κB, indicating that K1 does not prevent apoptosis in NYVAC-infected cells. Both the induction of apoptosis and the inhibition of NF-κB may not be favorable for the NYVAC replication cycle; however, apoptosis occurs at the late stage of the viral lifecycle, when replication is completed. These events are important for the generation of the immune response and vector clearance [81]. Interestingly, although both MVA and NYVAC stimulate NF-κB, the A52R gene is present in NYVAC, but not in MVA. The absence of A52, a Bcl-2-like protein that binds IL-1 receptor (IL-1R)-associated kinase 2 (IRAK2) and thus inhibits NF-κB, is likely to upregulate the TLR3 pathway in MVA-infected cells. Since A52 binds to IRAK2 and TNFR-associated factor 6 (TRAF6), which are important for TLR signaling, upregulation of TRAF6 and downregulation of IL-1α and IL-1β were observed in MVA-infected immature human monocyte-derived dendritic cells (MDDCs), but not in NYVAC-infected cells [82]. NYVAC has been proposed as a candidate for a human immunodeficiency virus (HIV) vaccine. NYVAC-C ∆A52R ∆B15R ∆K7R, a NYVAC deletion mutant lacking A52R, B15R (encoding the B14 protein that binds IKKβ), and K7R (encoding a TRAF6- and IRAK2-binding protein), has been used to express the HIV type 1 (HIV-1) envelope (Env) glycoprotein 120 (gp120) and GPN (Gag-Pol-Nef) clade C antigens. In mice, NYVAC-C ∆A52R ∆B15R ∆K7R induced the activation of chemokines/cytokines and the migration of Nα and Nβ neutrophils to the infection site. This effect was accompanied by an increase in the T cell response toward HIV antigens.
The activation of virus-specific CD8+ T cells was triggered by Nβ neutrophils displaying an APC-like phenotype [83]. Further analyses of mouse models infected with NYVAC mutants (NYVAC-C ∆A52R, NYVAC-C ∆A52R ∆K7R, or NYVAC-C ∆A52R ∆B15R) showed that the infection increased the number of CD11c+ major histocompatibility complex class II (MHCII)-positive DCs in mice. Deletion of the A52R gene influenced the migration of DCs, whereas the double-gene deletions affected the migration of both DCs and neutrophils. Finally, the deletion of all three genes, A52R, B15R, and K7R, not only enhanced the migration of DCs, neutrophils, and natural killer (NK) cells but also influenced chemokine release. In addition, the NYVAC-C ∆A52R ∆B15R, NYVAC-C ∆A52R ∆K7R, and NYVAC-C ∆3 (triple-deletion) mutants induced CTLs. Among the double-deletion mutants, NYVAC-C ∆A52R ∆B15R not only induced a strong CD8+ T cell response but was also effective in the induction of IgG. These studies demonstrate that the double or triple deletion of NF-κB inhibitors from NYVAC enhances both T cell-specific and humoral anti-HIV responses. The induction of Gag- and Pol-specific CD8+ T lymphocytes by NYVAC mutants showed that NYVAC-based vectors are promising anti-HIV vaccine candidates [84].
VACV Western Reserve
Another VACV strain that can be used as a vaccine vector is VACV-WR. When modifying the VACV-WR genome, single-gene deletions are more beneficial than deletions of multiple genes, which may decrease the immunogenicity of the vaccine [85]. One of the candidate genes that can be deleted from VACV-WR is N1L, encoding a Bcl-2-like inhibitor of NF-κB, which prevents NF-κB activation by proinflammatory cytokines, including TNF-α and IL-1β [86]. Studies of intradermal infection in a murine model with VACV-WR devoid of the N1L gene have shown that NF-κB is essential for CD8+ T cell memory and, consequently, for the efficacy of vaccines. N1 is an early VACV protein that inhibits apoptosis. Therefore, its mutation or deletion reduces the virulence of VACV and, at the same time, enhances the CD8+ T cell response, which is desirable for the antiviral protection induced by the vaccine [87]. The VACV-WR vaccine can also be modified by the deletion of the K1L NF-κB inhibitor. In mouse models, the K1-deficient virus induced a VACV-specific CD8+ T cell response and prevented lethal VACV infection, despite silencing of the innate immune response. Above all, at day one postinfection, the deletion mutant did not induce the expression of NF-κB-regulated genes, such as Nfkbia and Tnf. However, Ifna4, Il7, and Nfkb2, which are only partially controlled by NF-κB, were downregulated [88]. Recently, the importance of the VACV-WR BTB-BACK-Kelch (BBK)-like protein A55 in NF-κB modulation has been described. A55 is an NF-κB inhibitor that disturbs the p65-importin interaction and thus prevents the transcription of NF-κB-regulated genes and impairs the inflammatory response. In particular, NF-κB-regulated cytokines and inflammation influence the proliferation and development of effector and memory T cells. As expected, the deletion of the A55R gene from VACV-WR resulted in the enhancement of CD8+ T cell memory, and the vaccine displayed increased immunogenicity and protected mice challenged intranasally with VACV [89].
Orf Virus
ORFV, a virus belonging to the Parapoxvirus genus, is the causative agent of orf disease, a highly contagious ecthyma. In humans, orf is a zoonotic and self-limiting disease manifested as pustular dermatitis, which spontaneously resolves within 3 to 6 weeks.
ORFV infections of humans are frequently observed in Asia and Africa. In sheep and goats, these infections cause scabby mouth disease, which is characterized by high morbidity in infected sheep worldwide. ORFV may also infect cats, reindeer, camels, serows, and musk oxen. Mortality due to ORFV infections is rare and usually associated with secondary infections and aspiration pneumonia. In general, orf disease is a threat to kids and lambs and may cause farms to suffer economic losses [90][91][92][93][94]. The use of ORFV as a vaccine vector may constitute a novel strategy to vaccinate both permissive and nonpermissive hosts, as an alternative to the current attenuated vaccines, which are inefficient or insufficiently safe. The immunomodulatory properties of ORFV, as well as its ability to replicate in various hosts, make it a good vaccine candidate. Moreover, preclinical studies have confirmed these properties for inactivated ORFV [91]. Because ORFV does not spread systemically and does not induce virus-neutralizing antibodies, it can be successfully used as a vaccine vector for repeat immunizations [95]. The highly attenuated vaccine strain D1701, derived from ORFV and directed against contagious ecthyma, protects sheep for 4-6 months [96,97]. When adapted to Vero cells, D1701-V can be used to deliver target genes inserted into the vegf-e locus, for example for the construction of a vaccine against pseudorabies virus (PRV), the causative agent of Aujeszky's disease [98][99][100]. D1701-V-VP1, expressing the capsid protein VP1, has been used to immunize rabbits against rabbit hemorrhagic disease virus (RHDV) [101]. Another recombinant D1701-V vector, D1701-V-RabG, expressing the RABV glycoprotein, has been designed as a new antirabies vaccine for companion animals and tested in murine, dog, and cat models [97]. D1701-V-HAh5n, expressing the H5 hemagglutinin, has also been proposed as a vaccine against avian influenza virus H5N1 and tested in mouse models [102]. In addition, DNA vaccines expressing the ORFV011 EEV envelope phospholipase and the ORFV059 immunodominant envelope antigen F1L protein have shown enhanced immunogenicity and triggered lasting immunity in mouse models [103].
ORFV-IA82
Thus far, several ORFV-encoded proteins capable of inhibiting NF-κB signaling have been identified, including ORFV024 [104], ORFV002 [105,106], ORFV121 [107], ORFV073 [108], and ORFV119 [109]. Recently, ORFV020, a dsRNA-binding IFN resistance protein displaying dsRNA adenosine deaminase activity, has been described. ORFV020 is a counterpart of VACV-WR E3, which inhibits the activation of PKR and NF-κB. Moreover, it belongs to the viral IFN (VIR) resistance proteins that inhibit the IFN-mediated antiviral response. Therefore, ORFV expressing ORFV020 is resistant to the activity of IFN types I and II. Considering the conserved nature of ORFV isolates, the deletion or mutation of the E3L counterpart may be ideal for vaccine construction [110]. The ORFV strain ORFV-IA82 has been used as a vector for a vaccine against porcine epidemic diarrhea. The ORFV-PEDV-S virus, expressing the spike (S) protein of porcine epidemic diarrhea virus (PEDV), was constructed using the ORFV121 locus as the insertion site. This locus encodes a unique parapoxviral NF-κB inhibitor, which blocks the phosphorylation of p65 and its nuclear translocation [107]. Immunization of pigs with ORFV-PEDV-S induced the production of neutralizing antibodies and PEDV-specific serum IgA and IgG. Importantly, ORFV-PEDV-S ensured protection from the clinical outcomes of the infection.
Reduced virus shedding was also found in immunized animals upon infection [111]. Furthermore, the ORFV-PEDV-S vaccine has been shown to induce passive immunity in newborn piglets [112]. In addition, ORFV-IA82 can be used as an antirabies vaccine. The gene ORFV024, encoding a unique parapoxviral inhibitor of IKK and of IκBα degradation [89], was used as an insertion site for the RABV glycoprotein (G) gene. Similarly, the ORFV121 gene, encoding an NF-κB inhibitor, has been used as an insertion site for the G-encoding gene. Immunization of pigs and cattle with ORFV∆024 RABV-G or ORFV∆121 RABV-G resulted in the induction of neutralizing antibodies. Of these, the ORFV∆121 mutant was more immunogenic [113].
Goatpox Virus
GTPV, a member of the Capripoxvirus genus, is a pathogen of goats and sheep and the causative agent of goatpox disease. Goatpox is transmitted via aerosols and insects and causes systemic infections in goats and sheep, which manifest as fever, enlargement of lymph nodes, skin lesions, and respiratory and gastrointestinal lesions. Goatpox disease is also a source of economic loss to domestic ruminant farms. In general, GTPV is an economically important capripoxvirus in central Asia, North Africa, the Middle East, and India [114][115][116]. At present, only attenuated vaccines are available for GTPV and other capripoxviruses [117]. The capripoxvirus-based vaccines, which are obtained by serial passages, confer protective immunity for 1 year after vaccination. Capripoxviruses can also be used as vectors for vaccines against diseases caused by ruminant pathogens, such as bluetongue, Rift Valley fever, peste des petits ruminants, or rinderpest. Certain GTPV strains, such as Isiolo and Kedong, which infect goats, sheep, and cattle, are used as a universal vaccine against capripox diseases [46]. Gorgan strain-based vaccines protect cattle against lumpy skin disease. Caprivac (Jordan Bio-Industries Centre, JOVAC) is one of the vaccines used against goatpox in the Middle East [118].
GTPV-AV41
The existing GTPV vaccine, GTPV-AV41, contains an attenuated strain obtained by passages of the GTPV-AV40 strain in the testis cells of goats and sheep. Unfortunately, it may cause generalized skin lesions and miscarriages in vaccinated animals, and its use makes it difficult to distinguish between vaccinated and infected animals [119]. Therefore, modifications of the virus are needed to improve the vaccine. It is worth noting that the inactivation of NF-κB-related genes has been observed among capripoxvirus vaccine strains. For instance, in the GTPV Gorgan vaccine, a 1.6-kbp deletion led to the inactivation of the GTPV_144 and GTPV_145 genes. GTPV_144 is a counterpart of VACV-COP A55R, encoding a Kelch repeat- and BTB domain-containing protein, while GTPV_145 is related to VACV-COP B4R, encoding an ANK repeat protein [117]. Mutations in these two genes are common in capripoxviruses and are typical of vaccine strains. Since A55 inhibits CD8+ T cell memory, the deletion of the A55R gene or its GTPV counterpart may improve the immunogenicity of the vaccine [89]. GTPV_145 encodes an ANK repeat protein, a counterpart of VACV-COP B4 and of the ectromelia virus proteasome inhibitor EVM154, which interacts with Skp1 and conjugated ubiquitin and thereby inhibits IκBα degradation. It is believed that EVM154 may be involved in virus spread and that its depletion may cause attenuation of the virus [120,121].
One candidate insertion site for GTPV-based vaccines is ORF 135, an early gene that is not essential for viral replication in vitro or in vivo. The 135 gene encodes an 18-kDa protein, which inhibits NF-κB and apoptosis. The GTPV135 protein is a counterpart of the Bcl-2-like VACV-WR N1 protein. The 135 gene is an inhibitor of the host innate immune response and may therefore serve as an insertion site for live attenuated dual vaccines instead of the tk locus. Interestingly, the GTPV-AV41 vaccine strain expressing the hemagglutinin protein of peste des petits ruminants virus (PPRV), whose gene was inserted into the ORF135 site, induced a stronger neutralizing antibody response than the strain with a tk insertion site [122]. To improve GTPV-AV41 and prevent the side effects of the vaccine, it may be necessary to perform modifications based on the deletion of nonessential genes. For instance, deletion of the viral tk gene and of ORF8-18 may be beneficial for the vaccine. Among the VACV homologs of ORF8-18, the following NF-κB inhibitors can be found: ORF12, which encodes an ANK repeat protein and counterpart of B4, and ORF15, encoding a homolog of the VACV IL-18BP, C12. ORF16, in turn, encodes an EGF-like growth factor, a counterpart of VACV C11, which may activate NF-κB. In vaccinated animals, the attenuated GTPV-TK-ORF vaccine vector maintained immunogenicity with increased safety compared to wild-type GTPV-AV41. The vaccine also induced the production of neutralizing antibodies and GTPV-specific antibodies, as well as the release of IFN-γ, in goats. Hence, the removal of nonessential genes linked to apoptosis inhibition and immune modulation is considered a factor that may improve the efficacy of the vaccine [123].
Conclusions
The generation of an effective immune response and immunological memory, together with safety, is the main concern in vaccine development. When employing virus-based vaccines, it is necessary to ensure both the completion of the viral replication cycle and the proper induction of immunological memory, which determine the vaccine's efficiency. The loss of viral immunomodulatory proteins may affect these parameters, thus influencing efficiency. Since poxviruses modulate the activation of immune cells by affecting NF-κB-mediated regulation of apoptosis, inflammation, and immunological memory, discovering new mechanisms of NF-κB inhibition and new cellular targets of poxviruses may help modify vaccine candidates to improve the efficacy of poxvirus-based vaccines and the immunological memory generated by them.
Author Contributions: J.S. contributed to conceptualization and writing (original draft preparation, review, and editing). L.S.-D. contributed to conceptualization and writing (figure preparation, review, and editing). All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Frontiers in Computational Neuroscience
INTRODUCTION
The brain processes information, and it is therefore natural to estimate the amount of information that a neuron transmits to its targets. In the past, several methods that derive such estimates from the firing pattern (Bialek et al., 1991; Rieke et al., 1997; Strong et al., 1998; Brenner et al., 2000) or membrane potential (Borst and Theunissen, 1999; DiCaprio, 2004) of individual neurons have been used. The information from spike trains was estimated by calculating the entropy associated with the various temporal patterns of spike discharge, using Shannon's formula (Shannon and Weaver, 1949). Since all brain functions involve many neurons, it is desirable to provide similar information estimates for a neuronal population (Knight, 1972). Simply adding up the information amounts from the individual neurons in the population would be valid only if the neurons were all independent of one another, an assumption that usually is incorrect (see, for example, Zohary et al., 1994; Bair et al., 2001; Pillow et al., 2008). Approaches like the Direct Method (Strong et al., 1998) are impractical for a population, because the multi-dimensional space occupied by many spike trains can be sampled only sparsely by most neurophysiological experiments.
Calculating the information carried by a population of many neurons has thus remained a challenge (Brown et al., 2004; Quiroga and Panzeri, 2009). At the same time, the need for such estimates has become increasingly urgent, since the technology for recording simultaneously from many neurons has become much more affordable and widespread, and data from such recordings are becoming common. We describe here a method that estimates the amount of information carried by a population of spiking neurons, and demonstrate its use, first with simulated data and then with data recorded from the lateral geniculate nucleus (LGN) of an anesthetized macaque monkey.
MATERIALS AND METHODS
Surgical Preparation
The experimental methods were similar to those used in our lab in the past (Uglesich et al., 2009). Housing, surgical, and recording procedures were in accordance with the National Institutes of Health guidelines and the Mount Sinai School of Medicine Institutional Animal Care and Use Committee. Adult macaque monkeys were anesthetized initially with an intramuscular (IM) injection of xylazine (Rompun, 2 mg/kg) followed by ketamine hydrochloride (Ketaset, 10 mg/kg), and then given propofol (Diprivan) as needed during surgery. Local anesthetic (xylocaine) was used profusely during surgery, and was used to infiltrate the areas around the ears. Anesthesia was maintained with a mixture of propofol (4 mg/kg-hr) and sufentanil (0.05 µg/kg-hr), which was given intravenously (IV) during the experiment. Propofol anesthesia has been shown to cause no changes in blood flow in the occipital cortex (Fiset et al., 1999), and appears to be optimal for brain studies. Cannulae were inserted into the femoral veins, the right femoral artery, the bladder, and the trachea. The animal was mounted in a stereotaxic apparatus. Phenylephrine hydrochloride (10%) and atropine sulfate (1%) were applied to the eyes. The corneas were protected with plastic gas-permeable contact lenses, and a 3-mm diameter artificial pupil was placed in front of each eye. The blood pressure, electrocardiogram, and body temperature were measured and kept within the physiological range. Paralysis was produced by an infusion of pancuronium bromide (Norcuron, 0.25 mg/kg-hr), and the animal was artificially respired. The respiration rate and stroke volume were adjusted to produce an end-expiratory value of 3.5-4% CO₂ at the exit of the tracheal cannula. Penicillin (750,000 units) and gentamicin sulfate (4 mg) were administered IM to provide antibacterial coverage, and dexamethasone was injected IV to prevent cerebral edema. A continuous IV flow (3-5 ml/kg-hr) of lactated Ringer's solution with 5% dextrose was maintained throughout the experiment to keep the animal properly hydrated, and the urinary catheter monitored the overall fluid balance. Such preparations are usually stable in our laboratory for more than 96 h. The animal's heart rate and blood pressure monitored the depth of anesthesia, and signs of distress, such as salivation or increased heart rate, were watched for. If such signs appeared, additional anesthetics were administered immediately.
VISUAL STIMULATION
The eyes were refracted, and correcting lenses focused the eyes for the usual viewing distance of 57 cm. Stimuli were presented monocularly on a video monitor (luminance: 10-50 cd/m²) driven by a VSG 2/5 stimulator (CRS, Cambridge, UK). The monitor was calibrated according to Brainard (1989) and Wandell (1995). Gamma corrections were made with the VSG software and photometer (OptiCal). Visual stimuli consisted of a homogeneous field modulated in luminance according to a pseudo-random naturalistic sequence (van Hateren, 1997). Eight-second segments of the luminance sequence were presented repeatedly 128 times ('repeats'), alternating with 8 s non-repeating ('uniques') segments of the sequence (Reinagel and Reid, 2000). In addition, we used steady (unmodulated) light screens and dark screens, during which spontaneous activity was recorded.
ELECTROPHYSIOLOGICAL RECORDING
A bundle of 16 stainless steel microwires (25 µm) was inserted into a 22 gauge guide tube, which was inserted into the brain to a depth of 5 mm above the LGN. The microwire electrodes were then advanced slowly (in 1 µm steps) into the LGN, until visual responses to a flashing full-field screen were detected. The brain over the LGN was then covered with silicone gel, to stabilize the electrode bundle. Based on the electrode depth, the dominant-eye sequence, and the cell properties (Kaplan, 2007), we are confident that all the electrodes were within the parvocellular layers of the LGN. The receptive fields of the recorded cells covered a relatively small area (∼4° in diameter), which suggests that the electrode bundle remained relatively compact inside the LGN. The output of each electrode was amplified, band-pass filtered (0.75-10 kHz), sampled at 40 kHz, and stored in a Plexon MAP computer for further analysis.
Spike sorting
Sorting procedures. The spike trains were first thresholded (SNR ≥ 5) and sorted using a template-matching algorithm under visual inspection (Offline Sorter, Plexon Inc., Dallas, TX, USA). In most cases, spikes from several neurons recorded by a given electrode could be well separated by this simple procedure. In more difficult cases, additional procedures (peak- or valley-seeking, or multivariate t-distributions) (Shoham et al., 2003) were employed. Once the spikes were sorted, a firing-times list was generated for each neuron and used for further data analysis.
Quality assurance.
To ensure that all the spikes in a given train were fired by the same neuron, we calculated for each train the interspike interval (ISI) histogram. If we found intervals shorter than the refractory period of 2 ms, the spike sorting was repeated to eliminate the misclassified spikes. We ascertained that all the analyzed data came from responsive cells by calculating the coefficients of variation of the peristimulus time histogram bin counts for the responses to the repeated and unique stimuli, and taking the ratio of these two coefficients. Only cells for which that ratio exceeded 1.5 were included in our analysis.

Generation of surrogate data

To test our method we generated synthetic spike trains from a Poisson renewal process, in which the irregularities of neuronal spike times are modeled by a stochastic process whose mathematical properties are well defined. Recent interest and success in modeling a neuron's spike train as an inhomogeneous Poisson process (Pillow et al., 2005, 2008; Pillow and Simoncelli, 2006) led us to that choice.

Firing rates and input. Our modeling necessarily addressed two major features of the laboratory data. The nine real neurons show a range of mean firing rates, from 3.04 impulses per second (ips) to 28.72 ips, which spans an order of magnitude. To mimic this, we gave our 12 model cells 12 inputs which consecutively incremented by a factor of 10^(1/11), to give firing rates spanning an order of magnitude. The second major feature was that our laboratory neurons evidently received inputs processed in several ways following the original retinal stimulus. To make a simple caricature of this, we drove each of our Poisson model neurons with a separate input that was a weighted-mean admixture of two van Hateren-type stimuli. The first was the one we used in the laboratory and the second was the time-reversal of that stimulus. Calling these A and B, the stimuli were of the form S = (1 − x)·A + x·B, where the admixture variable x took on 12 equally spaced values starting with 0 and ending with 1. As shown in Table 1, the pairs (admixture, mean rate) were chosen in a manner that allowed the whole grouping of model cells to be divided into smoothly changing subsets in different ways, and evenly distributed the range of properties across all cells.
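For concreteness, the sketch below generates surrogate spike trains of this kind by thinning an inhomogeneous Poisson process. It is illustrative only: the exponential link between stimulus and rate, the placeholder stimulus trace, and all names are our assumptions, not the original analysis, which used the actual van Hateren sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 8.0                        # run length in seconds, as in the experiments
dt = 1.0 / 1024.0              # grid for the driving signal (our choice)
t = np.arange(0.0, T, dt)

# Placeholder for a van Hateren-type luminance trace A and its time reversal B.
A = rng.standard_normal(t.size).cumsum()
A = (A - A.mean()) / A.std()
B = A[::-1].copy()

n_cells = 12
x = np.linspace(0.0, 1.0, n_cells)                  # admixtures in S = (1-x)A + xB
rates = 3.0 * 10.0 ** (np.arange(n_cells) / 11.0)   # mean rates spanning a decade (ips)

def poisson_train(rate_t, dt, rng):
    """Inhomogeneous Poisson spike times by thinning a homogeneous process."""
    lam_max = rate_t.max()
    span = dt * rate_t.size
    n_cand = rng.poisson(lam_max * span)            # candidates at the peak rate
    cand = np.sort(rng.uniform(0.0, span, n_cand))
    idx = np.minimum((cand / dt).astype(int), rate_t.size - 1)
    keep = rng.uniform(0.0, lam_max, n_cand) < rate_t[idx]
    return cand[keep]

trains = []
for i in range(n_cells):
    s = (1.0 - x[i]) * A + x[i] * B                 # the admixed input
    lam = np.exp(0.5 * s)                           # positive modulation (assumed link)
    lam *= rates[i] / lam.mean()                    # renormalize to the target mean rate
    trains.append(poisson_train(lam, dt, rng))
```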
Estimation of the information delivered by a subset of neurons

If we have data from numerous parallel spike trains, the familiar Direct Method (Strong et al., 1998) for computing the signal information delivered requires an impractical time span of data. As a practical alternative we advance a straightforward multi-cell generalization of a method of information computation from basis-function coefficients. Shannon observed (Shannon and Weaver, 1949, Chapter 4; see also Shannon, 1949) that the probability structure of a stochastic signal over time may be well approximated in many different ways by various equivalent multivariate distribution density functions of high but finite dimension. He further observed that when some specific scheme is used to characterize both the distribution of signal-plus-noise and the distribution of noise alone, the information quantity one obtains for the signal alone, by taking the difference of the information quantities (commonly called 'entropies') evaluated from the two distributions, has a striking invariance property: the value of the signal information is universal, and does not depend on which of numerous possible coordinate systems one has chosen in which to express the multivariate probability density (see extensive bibliography, and discussion, in Rieke et al., 1997, chapter 3). We will follow Shannon (1949), whose choice of orthonormal functions was the Fourier set of normalized sines and cosines, over a fixed, but long, time span T. This choice has the added virtue of lending insight into the frequency structure of the information transfer under study.

Here we outline our approach for obtaining the signal-information rate, or 'mutual information rate', transmitted by the simultaneously recorded spikes of a collection of neurons. The mathematical particulars are further elaborated in the Appendix. Following Shannon (1949), if one has a data record that spans a time T, it is natural to use the classical method of Fourier analysis to resolve that signal into frequency components, each of which speaks of the information carried by frequencies within a frequency bandwidth of 1/T. If this is repeated for many samples of output, one obtains a distribution of amplitudes within that frequency band. In principle, that probability distribution can be exploited to calculate how many bits would have to be generated per second (the information rate) to describe the information that is being transmitted within that frequency band. However, part of that information rate represents not useful information but the intrusion of noise. To quantify our overestimate we may repeat the experiment many times without variation of the input stimulus, and in principle may employ the same hypothetical means as before to extract the 'information', which now more properly may be called 'noise entropy'. When this number is subtracted from the previous one, we obtain the mutual information rate, in bits per second, carried by the spikes recorded from that collection of neurons.

In order to reduce the above idea to practice, we have exploited the following fact (which apparently is not well known nor easily found in the literature): if our response forgets its past history over a correlation time span that is brief compared to the experiment time span T, then the central limit theorem applies, and our distribution of signal measurements within that narrow bandwidth will follow a Gaussian distribution. If we are making simultaneous recordings from a collection of neurons, their joint probability distribution within that bandwidth will be multivariate Gaussian. A Gaussian with known center of gravity is fully characterized by its variance, and similarly a multivariate Gaussian by its covariance matrix. Such a covariance matrix, which can be estimated directly from the data, carries with it a certain entropy. By calculating the covariance matrices for responses to both unique and repeated stimuli, one can determine the total signal information flowing through each frequency channel for a population of neurons.
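As a minimal single-cell sketch of this recipe, assuming the spike trains have already been binned into an (n_trials × n_bins) count matrix (the binning and all names are ours), the per-frequency entropy difference can be accumulated as follows. Any common normalization of the Fourier coefficients cancels in the variance ratio, so none is applied.

```python
import numpy as np

def coeff_variances(binned):
    """Per-frequency variances of the stochastic Fourier coefficients.

    binned : (n_trials, n_bins) spike counts for one cell over a span T.
    """
    F = np.fft.rfft(binned, axis=1)              # complex coefficients, spacing 1/T
    var_cos = np.var(F.real, axis=0, ddof=1)     # cosine-term variance across trials
    var_sin = np.var(F.imag, axis=0, ddof=1)     # sine-term variance across trials
    return var_cos, var_sin

def info_rate(binned_unique, binned_repeat, T, f_cut):
    """Mutual information rate (bits/s): 0.5*log2(V_signal+noise / V_noise),
    summed over sine and cosine channels up to a cut-off frequency f_cut."""
    vcu, vsu = coeff_variances(binned_unique)    # unique runs: signal plus noise
    vcr, vsr = coeff_variances(binned_repeat)    # repeat runs: noise alone
    m_max = int(f_cut * T)                       # channels are spaced 1/T apart
    bits = 0.5 * (np.log2(vcu[1:m_max] / vcr[1:m_max])
                  + np.log2(vsu[1:m_max] / vsr[1:m_max]))
    return bits.sum() / T                        # bits per second
```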
To verify that our Gaussian assumption is valid, we applied to our Fourier-coefficient sample sets two standard statistical tests, each of which accepts a truly Gaussian sample at the 95% significance level. For our 12 surrogate cells and 9 laboratory LGN cells, the degree of verification across the frequency range for 2560 distribution samples (160 Hz × 8 bins/Hz × 2, with each sine and cosine term sampled 128 times) is shown in Table 2. Because of its importance, we return to this issue in the Discussion, where further evidence is provided for the Gaussian nature of the underlying distributions.

Entropy vs temporal frequency

In anticipation of analyzing simultaneous laboratory records of actual neurons, we created 12 Poisson model neurons with firing rates that overlap those of our laboratory neurons and with inputs as discussed above in Section 'Materials and Methods', presented at the same rate (160 Hz) used in the laboratory experiments. Figure 1 shows, for a single simulated cell, the entropy rate per frequency, for responses to unique and repeat stimuli. The entropy from the responses to the unique stimulus (signal plus noise) exceeds that of the responses to the repeated stimulus (noise alone) at low frequencies, and the two curves converge near the monitor's frame rate of 160 Hz, beyond which signal-plus-noise is entirely noise. Hence we terminate the sum in (Eq. A26) at that frequency. The difference between the two curves at any temporal frequency is the mutual information rate at that frequency.

Single cell information

For the 12 model cells, the cumulative sum of information over frequency (Eq. A26) is given in Figure 2 (left frame). We note that all the curves indeed finish their ascent as they approach 160 Hz. More detailed examination shows a feature that is not obvious: the output information rate of a cell reflects its input information rate, and the input information rate of a mixed, weighted-mean input is less than that of a pure, unmixed input. This accounts for the observation that the second-fastest cell (cell 11, with a near-even mixture) delivers information at only about half the rate of the fastest (cell 12).

Group information

We turn now to the information rate of a group of cells, firing in parallel in response to related stimuli. We proceed similarly to the above, but use the multi-cell equation (Eq. A25) and its cumulative sum over frequencies. As a first exercise we start with the slowest-firing surrogate cell and then group it with the next-slowest, then the slowest 3, and so on up to the fastest; the set of cumulative curves we obtain from these groupings is shown in the left frame of Figure 3. Again we see that the accumulation of information appears to be complete earlier than the frame-rate frequency of 160 Hz.

Redundancy

The mutual information communicated by a group of cells typically falls below the sum of the mutual information amounts of its constituent members. This leads us to define a measure of information redundancy. The redundancy of a cell with respect to a group of cells can be intuitively described as the proportion of its information already conveyed by other members of the group. For example, if a cell is added to a group of cells and 100% of its information is novel, then it has 0 redundancy. If, on the other hand, the cell brings no new information to the group, then it contains only redundant information, and it therefore has redundancy 1.
With this in mind, we define the redundancy of a cell C, after being added to a group G, as:

Red(C, G) = 1 − [I(G ∪ {C}) − I(G)] / I(C),

where I(·) denotes the mutual information rate of a cell or group of cells. The procedure of information-redundancy evaluation is general, and can be applied to the addition of any cell to any group of cells. Thus for the cell groups of Figure 3, we can evaluate the redundancy of each newly added cell not only upon its addition to the group but also thereafter. This is shown for the 70 resulting redundancies in Figure 4 (left).

Synergy

When the total information conveyed by several neurons exceeds the sum of the individual ones, the neurons are synergistic (Gawne and Richmond, 1993; Schneidman et al., 2003; Montani et al., 2007). When this happens, our formula yields a negative redundancy value.

ANALYSIS OF MONKEY LGN SPIKE TRAINS

We now apply the same techniques to simultaneous laboratory recordings of 9 parvocellular cells from the LGN of a macaque monkey, responding to a common full-field naturalistic stimulus (van Hateren, 1997; Reinagel and Reid, 2000). Figure 2 (right frame) shows the single-cell cumulative information of these neurons as frequency increases. In two obvious ways their behavior differs from that of the Poisson model neurons. First, at low frequency there is a qualitative difference: an initially very small increment, unlike the Poisson model's initial linear rise. Second, the real geniculate neurons show substantial heterogeneity in the shape of their rise curves. For example, the second most informative cell (cell 8) has obtained half its information from frequencies below 40 Hz, while the most informative cell (cell 9) has obtained only 11% of its information from below that frequency. The right frame of Figure 3 shows for LGN cells the accumulating multi-neuron group information, while the left frame shows it for the surrogate data.

FIGURE 1 | Entropy per frequency (bits/s vs Hz) conveyed by a single surrogate neuron. The signal-plus-noise entropy (derived from the unique stimuli) is shown in blue, and the noise entropy (from the repeated stimulus) is shown in red. The data shown are typical of data from other cells.

FIGURE 2 | Cumulative information rate vs frequency for 12 surrogate Poisson model neurons and 9 LGN cells. The firing rates of the various neurons in the two groups were similar.

FIGURE 3 | Group information vs frequency for our Poisson model surrogate neurons and 9 LGN cells. The group size is indicated to the right of the cumulative curve for each group. The neurons were ranked according to their firing rate. The first group contained only the slowest-firing neuron, and each new group was formed by adding the next-ranking cell.

Redundancy in surrogate and real LGN neurons

Figure 4 (right frame) compares the redundancy over the 9 LGN cells with what was shown for the first 9 Poisson model neurons in Figure 4 (left frame). The pair of sharp features at cells 4 and 7 might be attributed to difficulties in spike separation. Note that the redundancy of real neurons appears to be quite different from that of their Poisson model counterparts: as cluster size increases, real cells manifest a stronger tendency than our simulated neurons to remain non-redundant. This implies that the different LGN neurons are reporting with differences in emphasis on the various temporal features of their common stimulus.
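The redundancy bookkeeping defined above reduces to a few lines; this sketch (naming ours) also covers the synergistic case, for which the returned value is negative.

```python
def redundancy(info_group_and_cell, info_group, info_cell):
    """Red(C, G) = 1 - [I(G u {C}) - I(G)] / I(C).

    Returns 0 when all of the cell's information is novel, 1 when the cell
    adds nothing new, and a negative value when the grouping is synergistic.
    """
    return 1.0 - (info_group_and_cell - info_group) / info_cell
```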
DISCUSSION

THE VALIDITY OF THE GAUSSIAN ASSUMPTION

Our method exploits the theoretical prediction that the distribution of each stochastic Fourier coefficient of our data should be Gaussian. Our evidence supports this prediction. A standard visual check is to normalize a distribution by a Z-score transformation and plot its quantiles against those of a standard Gaussian. If the distribution is indeed Gaussian, the points will fall near a unit-slope straight line through the origin. Figure 5 shows two typical cases, each with 128 points: surrogate data in the left frame and LGN cell data on the right. Both show good qualitative confirmation of the Gaussian assumption. We then applied to our numerous Fourier-coefficient distributions two standard statistical tests for Gaussianity: the Shapiro-Wilk test and the Lilliefors test. Both are designed to accept a sample actually drawn from a Gaussian distribution in 95% of cases. Table 2 shows that in almost all cases, for both unique and repeat responses of our 12 surrogate and 9 LGN cells, our distributions passed both tests at the 95% significance level.
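Both tests and the Q-Q check are available in standard Python libraries. A sketch, with our function names, assuming one 128-sample Fourier-coefficient distribution is checked at a time:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

def gaussianity_checks(samples, alpha=0.05):
    """Shapiro-Wilk and Lilliefors tests, plus Q-Q points for visual checking."""
    z = (samples - samples.mean()) / samples.std(ddof=1)    # Z-score normalization
    _, p_shapiro = stats.shapiro(z)
    _, p_lillie = lilliefors(z, dist='norm')
    # sample quantiles against standard-Gaussian quantiles (unit slope if Gaussian)
    q_theory = stats.norm.ppf((np.arange(1, z.size + 1) - 0.5) / z.size)
    q_sample = np.sort(z)
    return p_shapiro > alpha, p_lillie > alpha, q_theory, q_sample
```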
SMALL SAMPLE BIAS

In the extraction of mutual information from spike data, traditional methods suffer from a bias due to the small size of the sample. We checked the Fourier method for such bias by dividing our sets of 128 runs into subsets of 64, 32 and 16 runs. The results for one surrogate cell (number 12) and one LGN cell (number 8) are shown in Figure 6. These results are typical, and show no clear small-sample bias. We also notice that, for these data, a sample of 64 runs gives a mutual information estimate reliable to better than ±10%. A summary of small-sample bias and estimated reliability for several recent techniques for calculating spike-train mutual information is given by Ince et al. (2009) (their Figure 1). In addition to the number of data segments, the number of spikes used in estimating the mutual information is also an important factor, and we discuss it further at the end of the Appendix.

SUMMARY AND CONCLUSIONS

We have presented a new method for calculating the amount of information transmitted by a neuronal population, and have applied it to populations of simulated neurons and of monkey LGN neurons. Since the method can also be used to calculate the information transmitted by individual cells, it provides an estimate of the redundancy of information among the members of the population. In addition, the method reveals the temporal frequency bands at which the communicated information resides. The new method fills a gap in the toolbox of the modern neurophysiologist, who now has the ability to record simultaneously from many neurons. The methodology presented here might permit insights regarding the mutual interactions of neuronal clusters, an area that has been explored less than the behavior of single neurons or whole organisms.

APPENDIX

Suppose we have a stochastic numerical data-stream that we will call u(t), and which becomes uncorrelated for two values of t that are separated by a time interval greater than a maximum correlation time-interval t*. That is to say, if t2 − t1 > t*, then u(t2) and u(t1) are independent random variables in the probability sense. Suppose now that in the laboratory, by running the probabilistically identical experiment repeatedly, we gather N realizations (samples) of u(t), the nth of which we will call u^(n)(t). Suppose further that we collect each data sample over a time-span T that is large compared to the correlation time interval t*. We can represent each sample u^(n)(t), to whatever accuracy we desire, as a discrete sequence of numbers in the following way. Over the time interval t = 0 to t = T, we choose a set of functions φ_m(t) that are orthonormal in the sense that they have the property:

∫_0^T φ_m(t) φ_{m'}(t) dt = δ_{mm'}.   (A1)

Then u^(n)(t) may be represented as a weighted sum of these basis functions,

u^(n)(t) = Σ_m u_m^(n) φ_m(t),   (A2)

with coefficients

u_m^(n) = ∫_0^T u^(n)(t) φ_m(t) dt.   (A3)

This claim can be verified if we substitute (Eq. A2) into (Eq. A3) and then use (Eq. A1) to evaluate the integral. Here our choice of the φ_m(t) will be the conventional normalized sinusoids:

φ_m^cos(t) = √(2/T) cos(2πmt/T),   φ_m^sin(t) = √(2/T) sin(2πmt/T).   (A4)

It is a straightforward exercise to show that these functions have the property required by (Eq. A1).

Now let us see what follows from T >> t*. Divide the full time-span T into K sub-intervals by defining the division times

t_k = k(T/K),  k = 0, 1, …, K,   (A5)

and define the integrals over the shorter sub-intervals,

v_k = ∫_{t_{k−1}+t*}^{t_k} u(t) φ_m(t) dt,   (A6)
w_k = ∫_{t_{k−1}}^{t_{k−1}+t*} u(t) φ_m(t) dt,   (A7)

so that

u_m = Σ_{k=1}^{K} v_k + Σ_{k=1}^{K} w_k.   (A8)

But we note that the measure of the support of the integral (Eq. A7) is smaller than that of (Eq. A6) by the ratio t*/((T/K) − t*), and if we pick T long enough, we can make that ratio as close to zero as we choose. So the second sum in (Eq. A8) is negligible in the limit. But now note that, because they are all separated from each other by a correlation time, the individual terms in the first sum are realizations of independent random variables. If the distribution of an individual term in the sum is constrained in any one of a number of non-pathological ways, and if there is a sufficient number of members in the sum, then the central limit theorem states that the distribution of the sum approaches a Gaussian.

In the more general case, where we have several simultaneous correlated numerical data-streams, the argument runs exactly the same way. If, for many repeated samples, at a particular frequency we compute the Fourier coefficient for each, to estimate a multivariate probability density, then from a long enough time span, by the multivariate central limit theorem, that density will approach a multivariate Gaussian.

Simply because the notation is easier, we elaborate the univariate case first. Specializing, for the cell response we use the spike train itself, expressed as a sequence of δ-functions, so for the rth realization u^(r)(t) of the stochastic spike-train variable u(t), we have

u^(r)(t) = Σ_{n=1}^{N_r} δ(t − t^{(r)n}),   (A9)

where t^{(r)n} is the time of the nth spike of the rth realization, and N_r is the total number of spikes that the cell under discussion fires in that realization. Substituting this and also (Eq. A4) into (Eq. A3), we see that the integral may be performed at once. In the cosine case of (Eq. A4) it is

u_m^(r) = √(2/T) Σ_{n=1}^{N_r} cos(2πmt^{(r)n}/T).   (A10)

Before proceeding further we look back at Eq. A8 and note that, because a cosine is bounded between +1 and −1, every term in the sums of (Eq. A8) is bounded in absolute value by √(2/T) times the number of spikes in that sub-interval. As real biology will not deliver a cluster of spikes overwhelmingly more numerous than the local mean rate would estimate, the distribution of each term in the stochastic sum cannot be heavy-tailed, and we may trust the central limit theorem. Thus we may estimate that the probability density function for the stochastic Fourier coefficient variable u_m is of the form

p(u_m) = (2πV_m)^{−1/2} exp(−(u_m − ū_m)²/(2V_m)),   (A11)

with

ū_m = ⟨u_m⟩ ≈ (1/N) Σ_{r=1}^{N} u_m^(r),   (A12)
V_m = ⟨(u_m − ū_m)²⟩ ≈ (1/N) Σ_{r=1}^{N} (u_m^(r) − ū_m)².   (A13)

The right-hand-most expressions in (Eq. A12), (Eq. A13) testify that ū_m and V_m can be estimated directly from the available laboratory data. What is the information content carried by the Gaussian (Eq. A11)? The relevant integral may be performed analytically:

H_m = −∫ p(u_m) log2 p(u_m) du_m = ½ log2(2πeV_m).   (A14)
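Because the spike train is a sum of δ-functions, the coefficients (Eq. A10) can be evaluated directly from the spike times, with no binning. A short sketch (names ours):

```python
import numpy as np

def spike_fourier_coeffs(spike_times, T, m_max):
    """Cosine and sine Fourier coefficients of a spike train written as a sum
    of delta functions (Eq. A9), evaluated per Eq. A10:
        u_m = sqrt(2/T) * sum_n cos(2*pi*m*t_n/T)   (cosine case)
    """
    t = np.asarray(spike_times)
    m = np.arange(1, m_max + 1)[:, None]       # harmonics; frequency of m is m/T
    phase = 2.0 * np.pi * m * t[None, :] / T
    u_cos = np.sqrt(2.0 / T) * np.cos(phase).sum(axis=1)
    u_sin = np.sqrt(2.0 / T) * np.sin(phase).sum(axis=1)
    return u_cos, u_sin
```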
For a signal with a finite forgetting-time, the stochastic Fourier coefficients (Eq. A10) at different frequencies are statistically independent of one another, so that the signal's full multivariate probability distribution in terms of Fourier coefficients is given by

p(u_1, u_2, …) = Π_m p(u_m).   (A15)

It is easily shown that if a multivariate distribution is the product of underlying univariate building blocks, then its information content is the sum of the information of its components, whence

H = Σ_m ½ log2(2πeV_m).   (A16)

Observing (Eq. A13), we note that this can be evaluated from available laboratory data.

Generalization of the information-rate calculation to the case of multiple neurons is conceptually straightforward but notationally messy due to additional subscripts. The rth realization's spike train from the qth neuron (out of a total of Q neurons) may be defined as a function of time u_q^(r)(t), just as in (Eq. A9) above, and from our orthonormal set of sines and cosines we may find the Fourier coefficient u_{q,m}^(r). This number is a realization drawn from an ensemble whose multivariate probability density function we may call

P_m(u_{1,m}, u_{2,m}, …, u_{Q,m}).   (A17)

This density defines a vector center of gravity ū_m whose Q components are of the form

(ū_m)_q = ⟨u_{q,m}⟩ ≈ (1/N) Σ_{r=1}^{N} u_{q,m}^(r),   (A18)

and similarly it defines a covariance matrix V_m whose (q,s)th matrix element is given by

(V_m)_{qs} = ⟨(u_{q,m} − (ū_m)_q)(u_{s,m} − (ū_m)_s)⟩.   (A19)

This covariance matrix has a matrix inverse

A_m = V_m^{−1}.   (A20)

Clearly (Eq. A18) and (Eq. A19) are the multivariate generalizations of (Eq. A12) and (Eq. A13) above. The central limit theorem's multivariate Gaussian generalization of (Eq. A11) is

P_m(u) = (2π)^{−Q/2} (det V_m)^{−1/2} exp(−½ (u − ū_m)ᵀ A_m (u − ū_m)).   (A21)

This expression becomes less intimidating in new coordinates Z^(q), with the new origin located at the center of gravity and orthogonally turned to diagonalize the covariance matrix (Eq. A19). We need not actually undertake this task. Calling the eigenvalues of the covariance matrix λ_{m,q}, the entropy carried by the mth frequency component is

H_m = ½ Σ_{q=1}^{Q} log2(2πeλ_{m,q}).   (A24)

Shannon (1949, chapter 4), in a formally rather analogous context, has noted that much care is needed in the evaluation of expressions similar to (Eq. A24) from laboratory data. The problem arises here if the eigenvalues approach zero (and their logarithms tend to −∞) before the sum is completed. However, the information in signal-plus-noise in the mth coefficient, expressed by (Eq. A24), is not of comparable interest to the information from signal alone. With some caution, this signal-alone information contribution may be obtained by subtracting from (Eq. A24) a similar expression for noise alone, taken from additional laboratory data in which the same stimulus was presented repeatedly. If we use µ to annotate the eigenvalues of the covariance matrix which emerges from these runs, then the information difference of interest, following from (Eq. A24), is

I_m = ½ Σ_{q=1}^{Q} log2(λ_{m,q}/µ_{m,q}),   (A25)

which expresses the multi-cell information contributed by the mth frequency component. To obtain the total multi-cell information, it is to be summed over increasing m until further contributions become inappreciable.

An entirely analogous procedure applies to obtain the information of signal alone for an individual cell. Call the variance of the mth frequency component of the unique runs V_mu, and that of the repeat runs V_mr. Each will yield a total information rate expressed by (Eq. A16) above, and their difference, the information rate from signal alone, consequently will be

I = Σ_m ½ log2(V_mu/V_mr).   (A26)
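In practice the eigenvalue ratio in (Eq. A25) can be accumulated through covariance determinants, since Σ_q log λ_q = log det V. A sketch of the multi-cell sum, assuming the joint Fourier coefficients are arranged as (trials × frequency channels × cells) arrays, with sine and cosine terms counted as separate channels (layout and names ours):

```python
import numpy as np

def group_info_rate(coeffs_unique, coeffs_repeat, T):
    """Multi-cell information rate from covariance matrices of the joint
    Fourier coefficients (Eq. A25, summed over frequency channels).

    coeffs_* : (n_trials, n_channels, Q) arrays, one Fourier-coefficient
    vector per trial, per channel, for Q simultaneously recorded cells.
    """
    bits = 0.0
    for m in range(1, coeffs_unique.shape[1]):          # skip the DC channel
        Vu = np.cov(coeffs_unique[:, m, :], rowvar=False)   # signal plus noise
        Vr = np.cov(coeffs_repeat[:, m, :], rowvar=False)   # noise alone
        # 0.5*log2(det Vu / det Vr); slogdet avoids under/overflow
        _, ldu = np.linalg.slogdet(Vu)
        _, ldr = np.linalg.slogdet(Vr)
        bits += 0.5 * (ldu - ldr) / np.log(2.0)
    return bits / T                                     # bits per second
```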
In the data analysis in the main text, the single-cell sums (Eq. A16), for both uniques and repeats, approached a common, linearly advancing value which they achieved near 160 Hz, which is the stimulus frame rate. Consequently, the summation over frequency of signal-only information was cut off at that frequency, both for single cells (see Eq. A26) and for combinations of cells. In both the simulations and the experiments, each run was of T = 8 s duration. In consequence the orthonormalized sines and cosines of (Eq. A4) advanced by steps of 1/8 Hz.

EFFECT OF THE NUMBER OF RESPONSE SPIKES

With reference to small-sample bias, a further word is appropriate here regarding our methodology. If the number of runs is modest, the total number of spikes in response to the repeated stimulus may show a significant statistical fluctuation away from the total number of spikes in response to the unique runs. In this case, the asymptotic high-frequency entropy values, as seen in our Figure 1, will not quite coincide, and consequently the accumulated mutual information will show an artifactual small linear drift with increasing frequency. This introduces some uncertainty in the cut-off frequency and in the total mutual information. This asymptotic drift may be turned into a more objective way to evaluate the total mutual information. In cases where the problem arises, we divide our repeat runs into two subsets: the half with the most spikes and the half with the least. Accumulating both mutual information estimates at high frequency, we linearly extrapolate both asymptotic linear drifts back toward zero frequency, where they intersect at the proper value of the mutual information.
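A sketch of that extrapolation, assuming the two cumulative-information curves have already been computed for the most- and least-spike halves of the repeat runs (names and the tail-selection threshold are ours):

```python
import numpy as np

def extrapolated_info(freqs, cum_info_most, cum_info_least, f_tail):
    """Intersect the two linear high-frequency drifts, extrapolated back
    toward zero frequency; the crossing should lie near f = 0 and gives
    the drift-free mutual information estimate."""
    tail = freqs >= f_tail                    # fit only the asymptotic region
    a1, b1 = np.polyfit(freqs[tail], cum_info_most[tail], 1)    # slope, intercept
    a2, b2 = np.polyfit(freqs[tail], cum_info_least[tail], 1)
    f_star = (b2 - b1) / (a1 - a2)            # intersection frequency
    return a1 * f_star + b1                   # mutual information at the crossing
```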
The paraoxonase (PON1) Q192R polymorphism is not associated with poor health status or depression in the ELSA or InCHIANTI studies

Background: The human paraoxonase (PON1) protein detoxifies certain organophosphates, and the PON1 Q192R polymorphism (rs662) affects PON1 activity. Groups with higher-dose exposure to organophosphate sheep dips or first Gulf War nerve toxins reported poorer health if they had 192R, and these associations have been used to exemplify Mendelian randomization analysis. However, a reported association of 192R with depression in a population-based study of older women recently cast doubt on the specificity of the higher-dose findings. We aimed to examine associations between the PON1 Q192R polymorphism and self-reported poor health and depression in two independent population-based studies.

Methods: We used logistic regression models to examine the associations in men and women aged 60-79 years from the English Longitudinal Study of Ageing (ELSA, n = 3158) and InCHIANTI (n = 761) population studies. Outcomes included the Center for Epidemiologic Studies Depression (CES-D) scale, self-rated general health status and (in ELSA only) diagnoses of depression.

Conclusions: We found no evidence of an association between the PON1 Q192R polymorphism and poor general or mental health in two independent population-based studies. Neither the claimed Q192R association with depression in the general population nor its theoretical implications were supported.

Introduction

The paraoxonase 1 (PON1) protein contributes significantly to the detoxification of several organophosphates (OPs). The toxicity of many OPs occurs through inhibition of acetylcholinesterase, an enzyme essential for normal nerve impulse transmission. The effect of OPs on neural signalling is the basis of their use as insecticides and nerve gas agents.1 OPs have been used in sheep dip since the 1960s and were used extensively in the UK between 1976 and 1992, when compulsory sheep dipping was in place. One of the main OPs used in sheep dips is diazinon.2 Human serum PON1 hydrolyses diazoxon, the active metabolite of diazinon, thus limiting the toxicity of the OP.3 PON1 is also involved in metabolizing oxidized phospholipids and has been linked with systemic oxidative stress and cardiovascular disease risk.4 It is widely accepted that people exhibiting low PON1 activity may be more sensitive to the toxicity of certain OPs.5

There are several polymorphisms associated with both the level of expression of PON1 and its catalytic activity,5 but one common variant, Q192R (rs662), in the coding region of the PON1 gene has received most attention. This non-synonymous polymorphism involves a substitution of glutamine (Q, 'wild' or common type) with arginine (R, variant) at amino acid position 192 of the protein sequence. Early studies indicated that individuals with the 192R genotype demonstrated higher serum levels and activity of PON1 compared with individuals with the 192QQ genotype.6 In 1996, Davies et al.1 identified that the Q192R polymorphism affected the catalytic activity of the PON1 protein in a substrate-specific manner, suggesting that individuals with the 192R substitution were less efficient at detoxification of diazoxon, soman and sarin, although the opposite may apply to paraoxon. In 2002, Cherry et al.7 hypothesized that sheep dippers with the 192R genotype might be more vulnerable to toxic effects of the high-dose OPs they were exposed to.
More recent evidence has been mixed, with some studies reporting no difference in hydrolysis rates of diazoxon by genotype8,9 or even faster detoxification in individuals with the 192R genotype10 under certain conditions. On the basis of early evidence of 192R carriers being poorer detoxifiers of the relevant OPs, the Q192R polymorphism has been used in Mendelian randomization studies to determine whether there is a causal link between OPs and neurological impairment.11,12 If exposure to OPs (especially diazinon) causes ill health, then among farmers exposed to higher-dose OPs from sheep dips, those carrying the 192R isoform would be more likely to report poorer general health and greater morbidity; this association was observed in studies by Mackness et al. and Cherry et al.3,7 Similarly, first Gulf War veterans (who are reported to have been exposed to OP nerve agents including sarin) described more neurological impairment compared with controls. These neurologically impaired veterans were more likely to have the 192R genotype.13

Recently, Lawlor et al.14 aimed to extend the associations between the PON1 Q192R polymorphism and poor health and neurological impairment to a more general population. Health outcomes in women aged 60-79 years in the British Women's Heart and Health (BWHH) study were examined, and the presence of the 192R genotype was found to be associated with respondent reports of doctor-diagnosed depression [per-allele odds ratio (OR) = 1.22; 95% confidence interval (CI) 1.05-1.41]. It was argued that this result had important implications, casting doubt on the attribution of specific causation in the previous sheep dip and Gulf War veterans' studies. The apparent association of the PON1 polymorphism with depression in this group of older women, who were unlikely to have had recent occupational or high-dose exposure to OPs, was argued to suggest that ill health in the sheep dip and war exposure groups may be part of a general vulnerability and not necessarily specific to their exceptional (high-dose) exposures. Implications for psychiatric causation were also claimed. However, the analysis was based on a single question to respondents about diagnosed depression, and no independent replication has been available. Given the potential methodological and causal importance of the previous report, we aimed to examine associations with the Q192R polymorphism in PON1 in two independent older population samples (the ELSA and InCHIANTI studies). These studies are not gender-specific and also have the advantage of having data on overall health status and from validated scales of depression symptoms.

Methods

The English Longitudinal Study of Ageing (ELSA)

ELSA15 is a follow-up study of respondents to the UK Government's Health Survey for England (HSE) (http://www.dh.gov.uk/), an annual cross-sectional survey designed to be representative of the community-living population. The ELSA sample included those aged ≥50 years seen originally in HSE 1998, 1999 or 2001, and is described in detail elsewhere.15

The InCHIANTI study

The InCHIANTI study16 is a population study of the decline of mobility in later life. The sample is representative of the population of two small towns in Tuscany, Italy, and all participants were of White European origin. The study includes 1453 respondents, of whom 1343 donated blood samples at baseline. Of these individuals, 761 were aged between 60 and 79.
The Italian National Institute of Research and Care of Aging Institutional Review Board approved the study protocol. As in ELSA, data were collected on self-reported health and depression by face-to-face interviewing.

Genotyping methods

The PON1 Q192R polymorphism (rs662) was genotyped in ELSA as part of a 1536-SNP Goldengate custom panel by Illumina, using high-throughput BeadArray technology. Genotyping was successfully completed in 3666 individuals aged between 60 and 79 years, with a call rate of 99.7%. Of the genotyped population, 3158 were of White European origin and formed the sample group for our analysis. In InCHIANTI, genome-wide genotyping was performed using the Illumina Infinium HumanHap550 genotyping chip (chip versions 1 and 3) as previously described.17 All single-nucleotide polymorphisms (SNPs) on the chip were fully quality controlled, and SNPs were only used where call rates were ≥98% and minor allele frequencies were >1%. The PON1 Q192R polymorphism (rs662), which was on the chip, was genotyped in 782 individuals aged between 60 and 79 years. In both studies, the Q192R polymorphism did not deviate appreciably from the expected population distribution, i.e. it was in Hardy-Weinberg equilibrium (P > 0.05), and there were no duplicate errors.

Measures of depression

The Center for Epidemiologic Studies Depression scale (CES-D) has been extensively validated in older populations.18 In InCHIANTI the full 60-point scale was used, with a cut-off point of 16 being indicative of depression.18,19 In ELSA, an abridged 8-point CES-D scale was calculated from responses to the questions: 'Which of the following was true for you much of the time during the past week: felt depressed; everything you did was an effort; sleep was restless; were happy (reversed); felt lonely; enjoyed life (reversed); felt sad; could not get going?'. A cut-off point of three or more was taken as symptomatic of depression, in line with previous studies that have used this abridged version of the scale.20
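For illustration, a minimal sketch of the additive-genotype logistic regression used throughout, assuming a hypothetical per-participant table with columns 'depressed' (0/1 from the CES-D cut-off), 'age', 'sex' (0/1) and 'r_alleles' (0, 1 or 2 copies of 192R); the column names and the use of statsmodels are our choices, not the original analysis code.

```python
import numpy as np
import statsmodels.api as sm

def per_allele_or(df):
    """Per-allele odds ratio for the additive genotype model, adjusted
    for age and sex, with its 95% CI and P-value."""
    X = sm.add_constant(df[['age', 'sex', 'r_alleles']])
    fit = sm.Logit(df['depressed'], X).fit(disp=0)
    or_allele = np.exp(fit.params['r_alleles'])          # exponentiated coefficient
    ci_low, ci_high = np.exp(fit.conf_int().loc['r_alleles'])
    return or_allele, (ci_low, ci_high), fit.pvalues['r_alleles']
```

The per-allele OR and CI are obtained by exponentiating the coefficient on the allele count, mirroring the additive-genotype models reported in Table 1.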
Results

In the InCHIANTI cohort, 36.6% (95% CI 33.4-39.8%) of respondents aged between 60 and 79 reported having fair to very poor general health. The equivalent prevalence in the ELSA cohort was 28.5% (26.9-30.1%). We found no association between PON1 Q192R and fair to very poor self-reported general health in either study or in the meta-analysis (MA); in the MA, 31.0% of the sample reported having fair to very poor general health in the homozygous 'RR' variant group compared with 29.9% in the common homozygous 'QQ' genotype group (OR = 1.01; 95% CI 0.91-1.13, P = 0.795 in the additive-genotype logistic regression model adjusting for age, sex and study; Table 1).

In the InCHIANTI cohort, the prevalence of depression in individuals aged between 60 and 79, based on a cut-off point of 16 on the full CES-D scale, was 18.6% (95% CI 16.0-21.2%). In the ELSA cohort, based on the abridged 8-point CES-D scale using a cut-off point of three or more depressive symptoms, the prevalence of depression was 19.4% (18.0-20.8%). In ELSA, we also considered a cut-off point of four or more depressive symptoms, yielding a prevalence of 12.5% (11.3-13.6%). We found no association between depressive symptoms measured by the CES-D scale and the genotype in the individual studies or in the meta-analysis (MA: OR = 1.01; 95% CI 0.87-1.17, P = 0.90 in the additive genotype model; Table 1). Data on diagnoses of depression were only available from ELSA respondents. Again, we found no difference in the prevalence of this outcome by genotype (OR = 1.03; 95% CI 0.82-1.30, P = 0.80). These results remained consistent in sex-specific analyses (data available from the authors).

Discussion

In this analysis we have used two population-based studies covering the same age range as the original report by Lawlor et al.14 We have found that the PON1 Q192R polymorphism (rs662) is not associated with current depressive symptoms or a history of diagnosed depression in either study independently, or across our combined sample of 3919 people aged 60-79 years. In addition to a simple reporting of diagnosed mental illness, we have examined data from a validated depression scale, the CES-D. We have also examined the broader concept of self-rated health status. In neither women nor men did we find the previously reported association between the PON1 Q192R polymorphism and health status.

It has also been suggested that the PON1 Q192R genotype might predispose to chronic neurodegenerative disease, in a study that reported a higher prevalence of Parkinson's disease in people with the 192R isoform than in those with the 192Q isoform.21 However, more recent genome-wide association studies on specific neurological conditions, including Parkinson's disease,22 major depressive disorder,23 neuroticism24 and bipolar disorder,25 have reported no associations at genome-wide significance levels between these conditions and the Q192R polymorphism or any other polymorphism in the PON1 gene.

Our findings are not consistent with those of Lawlor et al., who reported an association between the PON1 Q192R polymorphism and symptoms of depression in a population-based sample of elderly women. Although the absence of an association is difficult to prove, it is clear that if such an association does exist, it is likely to be small. We also note that whilst our two study populations are representative samples of community-dwelling individuals, like Lawlor et al. we have no direct measure of the level of exposure to relevant OPs within these subjects. Given the mixed evidence on the biological effect of the PON1 192R variant, future work is needed to clarify the effect of this variant on in vivo PON1 activity in relevant exposures and circumstances. Work is also needed on quantifying the relevant OP exposures at the individual level. Given the paucity of support for the association of the studied health effects with the 192R variant in the general population, however, the case for this area being a research priority can be doubted.

Conclusion

In two independent cohorts, we found no evidence of an association between the PON1 Q192R polymorphism and general health status or depression in the general older population. Neither the claimed 192R association with depression in the general older population nor its theoretical implications are supported.

Funding

US National Institute on Aging [grant numbers R01AG24233, R01AG1764406S1 to the English Longitudinal Study of Ageing DNA Repository (EDNAR)]; the Intramural Research Program, US National Institute on Aging, NIH. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
A substantial hybridization between correlated Ni-d orbital and itinerant electrons in infinite-layer nickelates

The discovery of unconventional superconductivity in hole-doped NdNiO2, similar to CaCuO2, has received enormous attention. However, different from CaCuO2, RNiO2 (R = Nd, La) has itinerant electrons in the rare-earth spacer layer. Previous studies show that the hybridization between Ni-$d_{x^2-y^2}$ and rare-earth-d orbitals is very weak and thus RNiO2 is still a promising analog of CaCuO2. Here, we perform first-principles calculations to show that the hybridization between the Ni-$d_{x^2-y^2}$ orbital and itinerant electrons in RNiO2 is substantially stronger than previously thought. The dominant hybridization comes from an interstitial-s orbital rather than rare-earth-d orbitals, due to a large inter-cell hopping. Because of the hybridization, the Ni local moment is screened by itinerant electrons and the critical $U_{\rm Ni}$ for long-range magnetic ordering is increased. Our work shows that the electronic structure of RNiO2 is distinct from that of CaCuO2, implying that the observed superconductivity in infinite-layer nickelates does not emerge from a doped Mott insulator.

The discovery of superconductivity in doped NdNiO2 has generated excitement due to similarities with cuprates. Here, the authors use first-principles calculations to show that, different from cuprates, a hybridization between Ni d-orbitals and itinerant electrons in NdNiO2 disfavours magnetism by screening the Ni moment, as in Kondo systems.

Since the discovery of high-temperature superconductivity in cuprates1, people have been attempting to search for superconductivity in other materials whose crystal and electronic structures are similar to those of cuprates2,3. One of the obvious candidates is La2NiO4, which is iso-structural to La2CuO4; Ni is the nearest neighbor of Cu in the periodic table. However, superconductivity has not been observed in doped La2NiO4.4 This is in part due to the fact that in La2NiO4 two Ni-$e_g$ orbitals are active at the Fermi level, while in La2CuO4 only Cu-$d_{x^2-y^2}$ appears at the Fermi level. Based on this argument, a series of nickelates and nickelate heterostructures have been proposed with the aim of realizing a single-orbital Fermi surface in nickelates. Those attempts started from infinite-layer nickelates2,5,6, to LaNiO3/LaAlO3 superlattices7-10, to tri-component nickelate heterostructures11,12 and to reduced Ruddlesden-Popper series13,14. Eventually, superconductivity with a transition temperature of about 15 K was recently discovered in the hole-doped infinite-layer nickelate NdNiO2 15, injecting new vitality into the field of high-$T_c$ superconductivity16-33.
However, there is an important difference between infinite-layer nickelate RNiO2 (R = Nd, La) and infinite-layer cuprate CaCuO2 in their electronic structures: in infinite-layer cuprates, only a single Cu-$d_{x^2-y^2}$ band crosses the Fermi level, while in infinite-layer nickelates, in addition to the Ni-$d_{x^2-y^2}$ band, another conduction band also crosses the Fermi level6,21-23. First-principles calculations show that this non-Ni conduction band originates from the rare-earth spacer layers6,21-23. Hepting et al.20 propose that itinerant electrons on rare-earth-d orbitals may hybridize with the Ni-$d_{x^2-y^2}$ orbital, rendering RNiO2 an "oxide-intermetallic" compound. But previous studies find that the hybridization between Ni-$d_{x^2-y^2}$ and rare-earth-d orbitals is very weak21-23,29. Therefore, other than the self-doping effect27, infinite-layer nickelates can still be considered a promising analog of infinite-layer cuprates16,21.

In this work, we combine density functional theory (DFT)34,35 and dynamical mean-field theory (DMFT)36,37 to show that the hybridization between the Ni-$d_{x^2-y^2}$ orbital and itinerant electrons in the rare-earth spacer layers is substantially stronger than previously thought. However, the largest source of hybridization comes from an interstitial-s orbital, due to a large inter-cell hopping. The hybridization with rare-earth-d orbitals is weak, about one order of magnitude smaller. We also find that weak-to-moderate correlation effects on Ni lead to a charge transfer from the Ni-$d_{x^2-y^2}$ orbital to the hybridization states, which provides more itinerant electrons to couple to the Ni-$d_{x^2-y^2}$ orbital. In the experimentally observed paramagnetic metallic state of RNiO2, we explicitly demonstrate that the coupling between the Ni-$d_{x^2-y^2}$ orbital and itinerant electrons screens the Ni local moment, as in Kondo systems38-40. Finally, we find that the hybridization increases the critical $U_{\rm Ni}$ that is needed to induce long-range magnetic ordering. Our work provides the microscopic origin of a substantial hybridization between the Ni-$d_{x^2-y^2}$ orbital and itinerant electrons in RNiO2, which leads to an electronic structure that is distinct from that of CaCuO2. As a consequence of the hybridization, spins on the Ni-$d_{x^2-y^2}$ orbital are affected by itinerant electrons and the physical properties of RNiO2 are changed. This implies that the observed superconductivity in infinite-layer nickelates does not emerge from a doped Mott insulator as in cuprates.

The computational details of our DFT and DMFT calculations can be found in the Methods section. For clarity, we study NdNiO2 as a representative of infinite-layer nickelates. The results for LaNiO2 are very similar (see Supplementary Notes 1 and 2 in the Supplementary Information).

Results

Electronic structure and interstitial-s orbital. In Fig. 1a, b, we show the DFT-calculated band structure and Wannier-function fitting of NdNiO2 and CaCuO2, respectively, in the non-spin-polarized state. We use altogether 17 Wannier projectors to fit the DFT band structure: 5 Ni/Cu-d orbitals, 5 Nd/Ca-d orbitals, 3 O-p orbitals (for each O atom), and an interstitial-s orbital. The interstitial-s orbital is located at the position of the missing apical oxygen. The importance of interstitial-s orbitals has been noticed in the study of electrides and infinite-layer nickelates22,41,42.
Our Wannier fitting exactly reproduces not only the band structure of the entire transition-metal and oxygen pd manifold, but also the band structure of unoccupied states up to about 5 eV above the Fermi level. In particular, the Ni/Cu-$d_{x^2-y^2}$ Wannier projector is highlighted by red dots in Fig. 1a, b. The details of the Wannier fitting can be found in Supplementary Note 3 in the Supplementary Information. For both compounds, the Ni/Cu-$d_{x^2-y^2}$ band crosses the Fermi level. However, as mentioned in the Introduction, in addition to the Ni-$d_{x^2-y^2}$ band, another conduction band also crosses the Fermi level in NdNiO2. Using Wannier analysis, we find that this non-Ni conduction band is mainly composed of three orbitals: Nd-$d_{3z^2-r^2}$, Nd-$d_{xy}$, and interstitial-s orbitals. The corresponding Wannier projectors are highlighted by dots in the panels of Fig. 1c-e. An iso-value surface of the three Wannier functions (Nd-$d_{3z^2-r^2}$, Nd-$d_{xy}$, and interstitial-s orbitals) is explicitly shown in Fig. 1f-h. We note that the interstitial-s orbital is more delocalized than the Nd-$d_{3z^2-r^2}$ and Nd-$d_{xy}$ orbitals. Because all three of these orbitals are located in the Nd spacer layer between adjacent NiO2 planes, if they can hybridize with the Ni-$d_{x^2-y^2}$ orbital, then they will create a three-dimensional electronic structure, distinct from that of CaCuO2 20.

FIGURE 1 | Non-spin-polarized band structures calculated by density functional theory (DFT) and Wannier fitting. (a, b) DFT-calculated band structures and 17-Wannier-function fits of NdNiO2 (a) and CaCuO2 (b). The thick blue lines are DFT-calculated bands and the thin red lines are bands reproduced by the Wannier functions. The red dots show the Wannier projection onto the Ni-$d_{x^2-y^2}$ and Cu-$d_{x^2-y^2}$ orbitals, respectively. (c-e) Band structures reproduced by the Wannier functions in an energy window close to the Fermi level. The dots show the weights of the Wannier projections onto the Nd-$d_{3z^2-r^2}$ orbital (c), Nd-$d_{xy}$ orbital (d), and interstitial-s orbital (e). The coordinates of the high-symmetry points on the k-path are Γ(0,0,0)-X(0.5,0,0)-M(0.5,0.5,0)-Γ(0,0,0)-Z(0,0,0.5)-R(0.5,0,0.5)-A(0.5,0.5,0.5)-Z(0,0,0.5). The Fermi level E_F (black dashed line) is shifted to zero energy. (f-h) An iso-value surface of the Wannier functions of the Nd-$d_{3z^2-r^2}$ orbital (f), Nd-$d_{xy}$ orbital (g), and interstitial-s orbital (h). The large orange atom is Nd, the gray atom is Ni, and the small red atom is O.

Analysis of hybridization. However, from symmetry considerations, within the same cell the hopping between the Ni-$d_{x^2-y^2}$ orbital and any of those three orbitals (Nd-$d_{3z^2-r^2}$, Nd-$d_{xy}$, and interstitial-s) is exactly equal to zero22, which leads to the conclusion that the hybridization between Ni-$d_{x^2-y^2}$ and rare-earth-d orbitals is weak20,22,29. While this conclusion is correct by itself, the hybridization between Ni-$d_{x^2-y^2}$ and the interstitial-s orbital has been omitted in previous studies20-23,27,29. We find that, due to a large inter-cell hopping, the Ni-$d_{x^2-y^2}$ orbital hybridizes with the interstitial-s orbital much more substantially than with rare-earth-d orbitals, by about one order of magnitude. The direct inter-cell hopping between Ni-$d_{x^2-y^2}$ and any of the three orbitals (Nd-$d_{3z^2-r^2}$, Nd-$d_{xy}$ and interstitial-s) is negligibly small; the effective hopping is via O-p orbitals. Figure 2 shows the inter-cell hopping between the Ni-$d_{x^2-y^2}$ orbital and the other three orbitals via one O-p orbital.
Among the Nd-$d_{3z^2-r^2}$, Nd-$d_{xy}$ and interstitial-s orbitals, we find that the largest effective hopping (via one O-p orbital) is the one with the interstitial-s orbital (see Table 1). The effective hopping between Ni-$d_{x^2-y^2}$ and Nd-$d_{xy}$/$d_{3z^2-r^2}$ orbitals is one order of magnitude smaller, because the Nd atom is located at the corner of the cell, which is further from the O atom than the interstitial site is. Furthermore, the energy difference between interstitial-s and O-p orbitals is about 1 eV smaller than that between Nd-$d_{xy}$/$d_{3z^2-r^2}$ and O-p orbitals (see Table 1). These two factors combined lead to the fact that Ni-$d_{x^2-y^2}$ has a significant coupling with the interstitial-s orbital, substantially stronger than that with the Nd-d orbitals. This challenges the previous picture that the hybridization between the Ni-$d_{x^2-y^2}$ orbital and itinerant electrons in the Nd spacer layer is weak20-23,27,29.

FIGURE 2 | Inter-cell hopping from Ni-$d_{x^2-y^2}$ to the interstitial-s orbital (a), to the Nd-$d_{xy}$ orbital (b), and to the Nd-$d_{3z^2-r^2}$ orbital (c), via one O-p orbital.

TABLE 1 | The hopping t and energy difference Δ between the five relevant orbitals of NdNiO2 shown in Fig. 2. $d_{x^2-y^2}$ is the Ni-$d_{x^2-y^2}$ orbital; p is the O-p orbital; $d_{3z^2-r^2}$ is the Nd-$d_{3z^2-r^2}$ orbital; $d_{xy}$ is the Nd-$d_{xy}$ orbital; s is the interstitial-s orbital. The hopping and energy difference are obtained from the 17-Wannier-function fit. The unit is eV.

To further confirm that the hybridization is substantial, we downfold the full band structure to a noninteracting model that is based on the above four orbitals (Ni-$d_{x^2-y^2}$, Nd-$d_{3z^2-r^2}$, Nd-$d_{xy}$, and interstitial-s orbitals). Equation (1) shows the Wannier-based hopping matrix $H_0(R)$ of this four-orbital model (see the Supplementary Information). The important information is in the first row. The largest hopping is the one between neighboring Ni-$d_{x^2-y^2}$ orbitals (this is due to the σ bond between Ni-$d_{x^2-y^2}$ and O-$p_x$/$p_y$ orbitals). However, the hopping between Ni-$d_{x^2-y^2}$ and interstitial-s orbitals is comparable even to this largest hopping. By contrast, the hopping between Ni-$d_{x^2-y^2}$ and Nd-$d_{xy}$/$d_{3z^2-r^2}$ orbitals is about one order of magnitude smaller than the hopping between Ni-$d_{x^2-y^2}$ and interstitial-s orbitals, which is consistent with the preceding analysis.

Charge transfer and screening of Ni local moment. Since infinite-layer nickelates are correlated materials, we next study correlation effects arising from the Ni-$d_{x^2-y^2}$ orbital. We focus on whether the hybridization between the Ni-$d_{x^2-y^2}$ orbital and itinerant electrons in the rare-earth spacer layer may affect the correlated properties of NdNiO2, such as magnetism. We use the above four orbitals (see Eq. (1)) to build an interacting model:

$H = \sum_{\mathbf{k}mm'\sigma} H_0^{mm'}(\mathbf{k})\, c^{\dagger}_{\mathbf{k}m\sigma} c_{\mathbf{k}m'\sigma} + U_{\mathrm{Ni}} \sum_i \hat{n}_{i\uparrow}\hat{n}_{i\downarrow} - \hat{V}_{dc}$,   (2)

where m, m' label different orbitals, i labels Ni sites and σ labels spins. $\hat{n}_{i\sigma}$ is the occupancy operator of the Ni-$d_{x^2-y^2}$ orbital at site i with spin σ, and the onsite Coulomb repulsion is applied only on the Ni-$d_{x^2-y^2}$ orbital. $H_0(\mathbf{k})$ is the Fourier transform of the Wannier-based Hamiltonian $H_0(R)$ (ref. 9) and $\hat{V}_{dc}$ is the double-counting potential. That we do not explicitly include O-p states in the model is justified by noting that in NdNiO2 the O-p states have much lower energy than the Ni-d states, which is different from perovskite rare-earth nickelates and charge-transfer-type cuprates19,20. In the model Eq. (2), the Ni-$d_{x^2-y^2}$ orbital is the correlated state, while the other three orbitals (interstitial-s and Nd-$d_{3z^2-r^2}$/$d_{xy}$) are noninteracting, referred to as hybridization states.
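To make the downfolding concrete, the Bloch Hamiltonian entering Eq. (2) is the lattice Fourier transform of the real-space hopping matrices. A minimal sketch, assuming the $H_0(R)$ matrices from the Wannier fit are available as a dictionary keyed by lattice vectors (the data layout, orbital ordering and names are our conventions):

```python
import numpy as np

def h_of_k(h0, kpt):
    """H(k) = sum_R exp(i 2*pi k.R) H0(R) for the four-orbital model
    (Ni-dx2-y2, Nd-d3z2-r2, Nd-dxy, interstitial-s).

    h0  : dict mapping lattice vectors R (integer tuples) to 4x4 hopping
          matrices H0(R) in eV, from the Wannier downfolding.
    kpt : k-point in reduced coordinates (units of reciprocal vectors).
    """
    H = np.zeros((4, 4), dtype=complex)
    for R, t in h0.items():
        H += np.exp(2j * np.pi * np.dot(kpt, R)) * np.asarray(t, dtype=complex)
    return 0.5 * (H + H.conj().T)    # symmetrize against truncation noise

# Example usage with a toy H0 (values invented for illustration):
# h0 = {(0, 0, 0): np.diag([0.0, 1.5, 1.5, 1.0]), ...}
# bands = [np.linalg.eigvalsh(h_of_k(h0, (kx, 0, 0)))
#          for kx in np.linspace(0.0, 0.5, 51)]      # along Gamma -> X
```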
We perform dynamical mean-field theory calculations on Eq. (2). We first study the paramagnetic state (paramagnetism is imposed in the calculations). Figure 3a-c shows the spectral function with increasing $U_{\rm Ni}$ on the Ni-$d_{x^2-y^2}$ orbital. At $U_{\rm Ni}$ = 0 eV, the system is metallic with all four orbitals crossing the Fermi level (the main contribution comes from Ni-$d_{x^2-y^2}$). As $U_{\rm Ni}$ increases to 3 eV, a quasi-particle peak is evident, with the other three orbitals still crossing the Fermi level. We find a critical $U_{\rm Ni}$ of about 7 eV, where the quasi-particle peak becomes completely suppressed and a Mott gap emerges. As $U_{\rm Ni}$ further increases to 9 eV (not shown in Fig. 3), a clear Mott gap of about 1 eV is opened.

The presence of hybridization states means that there can be charge transfer between the correlated Ni-$d_{x^2-y^2}$ orbital and the interstitial-s/Nd-d orbitals. We calculate the occupancy of each Wannier function $N_\alpha$ and study correlation-driven charge transfer in NdNiO2. Figure 3d shows $N_\alpha$ of each hybridization state and of the Ni-$d_{x^2-y^2}$ orbital, as well as the total occupancy of the hybridization states, as a function of $U_{\rm Ni}$. We first note that at $U_{\rm Ni}$ = 0, the total occupancy of the hybridization states is 0.14, which is significant. As $U_{\rm Ni}$ becomes larger, the total occupancy of the hybridization states first increases and then decreases. This is because when $U_{\rm Ni}$ is small, the system is still metallic with all the hybridization states crossing the Fermi level, while the upper Hubbard band of the Ni-$d_{x^2-y^2}$ orbital is just formed and pushed to higher energy. This leads to charge transfer from the Ni-$d_{x^2-y^2}$ orbital to the hybridization states, providing more itinerant electrons to couple to the Ni-$d_{x^2-y^2}$ orbital. However, when $U_{\rm Ni}$ is large, the hybridization states are also pushed above the Fermi level, which causes electrons to transfer back to the Ni-$d_{x^2-y^2}$ orbital (in the lower Hubbard band). In the strong-$U_{\rm Ni}$ limit where the Mott gap opens, the itinerant electrons in the Nd spacer layer disappear. Figure 3d also shows that for all $U_{\rm Ni}$ considered, the occupancy of the interstitial-s orbital is always the largest among the three hybridization states, confirming the importance of the interstitial-s orbital in infinite-layer nickelates. We note that because we calculate the occupancy at finite temperatures, even when the gap is opened the occupancy of the hybridization states does not become exactly zero.

Because of the hybridization, we study the possible screening of the Ni local magnetic moment by itinerant electrons. We calculate the local spin susceptibility of the Ni-$d_{x^2-y^2}$ orbital:

$\chi^{\omega=0}_{\mathrm{loc}}(T) = (g\mu_B)^2 \int_0^{\beta} d\tau \,\langle S_z(\tau) S_z(0)\rangle$,

where $S_z(\tau)$ is the local spin operator for the Ni-$d_{x^2-y^2}$ orbital at imaginary time τ, g denotes the electron spin gyromagnetic factor, and β = 1/(k_B T) is the inverse temperature. Figure 3e shows $\chi^{\omega=0}_{\rm loc}(T)$ for two representative values of $U_{\rm Ni}$. The blue symbols are $\chi^{\omega=0}_{\rm loc}(T)$ for $U_{\rm Ni}$ = 7 eV, where the system becomes insulating. The local spin susceptibility fits nicely to a Curie-Weiss behavior, as shown by the black dashed line in Fig. 3e: $\chi^{\omega=0}_{\rm loc}(T)$ has a strong enhancement at low temperatures. However, for $U_{\rm Ni}$ = 2 eV, where the system is metallic, we find a completely different $\chi^{\omega=0}_{\rm loc}(T)$. The local spin susceptibility has a very weak dependence on temperature (see Fig. 3f for the zoom-in). In particular, at low temperatures (T < 250 K), $\chi^{\omega=0}_{\rm loc}(T)$ reaches a plateau.
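The static susceptibility above is just an integral of the measured imaginary-time correlator. A minimal sketch, assuming the DMFT impurity solver returns ⟨S_z(τ)S_z(0)⟩ on a grid spanning [0, β] (units, grid layout and names are our assumptions):

```python
import numpy as np

K_B = 8.617333262e-5            # Boltzmann constant in eV/K

def chi_loc(tau, szsz, temperature):
    """chi(T) = (g*muB)^2 * integral_0^beta dtau <S_z(tau) S_z(0)>,
    returned in units of (g*muB)^2 per eV; tau is assumed in 1/eV."""
    beta = 1.0 / (K_B * temperature)
    assert np.isclose(tau[-1], beta), "correlator grid must span [0, beta]"
    return np.trapz(szsz, tau)

# Fitting chi(T) = C/(T - theta) over a set of temperatures would then expose
# the contrast between the Curie-Weiss (insulating) and plateau (metallic) regimes.
```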
We note that the weak temperature dependence of $\chi^{\omega=0}_{\rm loc}(T)$ is consistent with the experimentally measured paramagnetic susceptibility of LaNiO2 5; in particular, our simple model calculations qualitatively reproduce the low-temperature plateau feature observed in experiment5. To explicitly understand how the hybridization between itinerant electrons and the Ni-$d_{x^2-y^2}$ orbital affects the local spin susceptibility, we perform a thought experiment: we manually "turn off" the hybridization, i.e., for each R, we set $\langle s|H_0(R)|d_{x^2-y^2}\rangle = \langle d_{xy}|H_0(R)|d_{x^2-y^2}\rangle = \langle d_{3z^2-r^2}|H_0(R)|d_{x^2-y^2}\rangle = 0$. Then we recalculate $\chi^{\omega=0}_{\rm loc}(T)$ using the modified Hamiltonian with $U_{\rm Ni}$ = 2 eV. The chemical potential is adjusted so that the total occupancy remains unchanged in the modified Hamiltonian. The two local spin susceptibilities are compared in Fig. 3f. With hybridization, $\chi^{\omega=0}_{\rm loc}(T)$ saturates at low temperatures, implying that $\mu_{\rm eff}$ decreases or even vanishes with lowering temperature. Without hybridization, however, $\chi^{\omega=0}_{\rm loc}(T)$ shows an evident enhancement at low temperatures and a Curie-Weiss behavior is restored (black dashed line). This shows that in paramagnetic metallic NdNiO2 the hybridization between itinerant electrons and the Ni-$d_{x^2-y^2}$ orbital is substantial and, as a consequence, it screens the Ni local magnetic moment, as in Kondo systems38-40. Such a screening mechanism may be used to explain the low-temperature upturn in the resistivity of NdNiO2 observed in experiment15,27. We note that while we only fix the total occupancy by adjusting the chemical potential, the occupancy of the Ni-$d_{x^2-y^2}$ orbital is almost the same in the original and modified models: in Fig. 3f it is 0.84 "with hybridization" and 0.83 "without hybridization". This indicates that the screening of the Ni moment is mainly due to the hybridization effects, while the change of the Ni-$d_{x^2-y^2}$ occupancy (0.01e per Ni) plays a secondary role.

Correlation strength and phase diagram. We estimate the correlation strength of NdNiO2 by calculating its phase diagram. We allow spin polarization in the DMFT calculations and study both ferromagnetic and checkerboard antiferromagnetic states. We find that ferromagnetic ordering cannot be stabilized up to $U_{\rm Ni}$ = 9 eV. The checkerboard antiferromagnetic state can emerge when $U_{\rm Ni}$ exceeds 2.5 eV. The phase diagram is shown in Fig. 4a, in which $M_d$ is the local magnetic moment on each Ni atom. $M_d$ is zero until $U_{\rm Ni}$ ≃ 2.5 eV, then increases with $U_{\rm Ni}$, and finally saturates at 1 $\mu_B$/Ni, which corresponds to an S = 1/2 state. We note that the critical value of $U_{\rm Ni}$ is model-dependent: if we include O-p states and semi-core states, the critical value of $U_{\rm Ni}$ will be substantially larger43. The robust result here is that, with increasing $U_{\rm Ni}$, antiferromagnetic ordering occurs before the metal-insulator transition. In the antiferromagnetic state, the critical $U_{\rm Ni}$ for the metal-insulator transition is about 6 eV, slightly smaller than in the paramagnetic phase. The spectral functions of the antiferromagnetic metallic and insulating states are shown in Fig. 4b and c, respectively. Experimentally, long-range magnetic orderings are not observed in NdNiO2 44. The calculated phase diagram therefore means that NdNiO2 can only be in a paramagnetic metallic state (instead of a paramagnetic insulating state), in which the hybridization between Ni-$d_{x^2-y^2}$ and itinerant electrons screens the Ni local magnetic moment.
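The "turn off" step of the thought experiment amounts to zeroing one row and column of every $H_0(R)$. A sketch under the same assumed dictionary layout as above (the orbital ordering is our convention):

```python
import numpy as np

D = 0   # index of Ni-dx2-y2 in our assumed ordering (d, dz2, dxy, s)

def remove_hybridization(h0):
    """Return a copy of {R: H0(R)} with every matrix element coupling
    Ni-dx2-y2 to the three hybridization states set to zero."""
    h0_mod = {}
    for R, t in h0.items():
        t = np.array(t, dtype=float)
        t[D, D + 1:] = 0.0      # <d|H0(R)|dz2>, <d|H0(R)|dxy>, <d|H0(R)|s>
        t[D + 1:, D] = 0.0      # Hermitian partners
        h0_mod[R] = t
    return h0_mod
```

The chemical potential of the modified model is then re-tuned so that the total occupancy matches the original, as described in the text.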
[Figure 4 caption (fragment), panels b-d: b Spectral function A(ω) of Eq. (2) in the antiferromagnetic state with U_Ni = 3 eV. A(ω) is the frequency-dependent spectral function and ω represents the frequency. The states above (below) zero correspond to spin up (down). The Fermi level (vertical dashed line) is set at zero energy. The red, blue, magenta, yellow, and green curves represent the Ni-d_{x²-y²} projected, Nd-d_{3z²-r²} projected, Nd-d_{xy} projected, interstitial-s projected, and total spectral functions, respectively. The inset shows the spectral function of a single Ni atom projected onto its d_{x²-y²} orbital. c Same as (b) with U_Ni = 9 eV. d The solid symbols are the same as in (a); the open symbols are the local moment on each Ni atom recalculated with the hybridization "turned off".]

We note that, using our model Eq. (2), the calculated phase boundary indicates that the Ni correlation strength in NdNiO₂ is moderate, with U_Ni/t_dd < 7 (t_dd is the effective hopping between nearest-neighbor Ni-d_{x²-y²} orbitals due to the σ_pd bond). This contrasts with the parent compounds of superconducting cuprates, which are antiferromagnetic insulators and are described by an effective single-orbital Hubbard model with a larger correlation strength (U/t_dd = 8-20) [45][46][47][48]. Finally, we perform a self-consistent check on the hybridization. When the system is metallic, the hybridization between itinerant electrons and the Ni-d_{x²-y²} orbital screens the spin on the Ni site and reduces the local spin susceptibility χ_loc^{ω=0}(T) in the paramagnetic phase. This implies that once we allow antiferromagnetic ordering, a smaller critical U_Ni may be needed to induce magnetism. To test this, we recalculate the phase diagram using the modified Hamiltonian with the hybridization manually "turned off". The chemical potential is adjusted in the modified model so that the total occupancy remains unchanged. Figure 4d shows that without the hybridization the Ni magnetic moment increases and the antiferromagnetic phase is expanded, with the critical U_Ni reduced to 1.8 eV (U_Ni/t_dd ≃ 5). This shows that the coupling to the conduction electrons affects the Ni spins and changes the magnetic properties of NdNiO₂ 40.

Discussion

Our minimal model Eq. (2) is different from the standard Hubbard model (single-orbital, two-dimensional square lattice, half filling) due to the presence of hybridization. It is also different from a standard periodic Anderson model in that (1) the correlated orbital is a 3d orbital with a strong dispersion, instead of a 4f or 5f orbital whose dispersion is usually neglected 20,49,50; and (2) the hybridization of Ni-d_{x²-y²} with the three noninteracting orbitals is entirely inter-cell rather than onsite, and is anisotropic with different types of symmetry, which may influence the symmetry of the superconducting order parameter in the ground state 51. Figure 5 explicitly shows the symmetry of the hybridization. The dominant hybridization of the Ni-d_{x²-y²} orbital, the one with the interstitial-s orbital, has d_{x²-y²} symmetry. The hybridizations of Ni-d_{x²-y²} with the Nd-d_{xy} and Nd-d_{3z²-r²} orbitals have g_{xy(x²-y²)} and d_{x²-y²} symmetries, respectively 52. d-wave superconducting states can be stabilized in the doped single-orbital Hubbard model, according to sophisticated many-body calculations [53][54][55][56].
However, the hybridization between the correlated Ni-d_{x²-y²} orbital and itinerant electrons fundamentally changes the electronic structure relative to a single-orbital Hubbard model, in particular when the system is metallic. This probably creates a condition unfavorable for superconductivity 51, implying that new mechanisms such as interface charge transfer, strain engineering, etc. are needed to fully explain the phenomena observed in infinite-layer nickelates 15.

Before we conclude, we briefly discuss other models for RNiO₂ (R = La, Nd). In the literature, some models focus on low-energy physics and include only states that are close to the Fermi level; others include more states, reproducing the electronic band structure within a large energy window around the Fermi level. Kitatani et al. 57 propose that RNiO₂ can be described by the one-band Hubbard model (Ni-d_{x²-y²} orbital) with an additional electron reservoir, which is used to directly estimate the superconducting transition temperature. Hepting et al. 20 construct a two-orbital model using the Ni-d_{x²-y²} orbital and an R-d_{3z²-r²}-like orbital. Such a model is used to study hybridization effects between the Ni-d_{x²-y²} orbital and rare-earth R-d orbitals. Zhang et al. 28, Werner et al. 32, and Hu et al. 33 study a different type of two-orbital model consisting of two Ni-d orbitals: Hu et al. 33 include the Ni-d_{x²-y²} and Ni-d_{xy} orbitals, while Zhang et al. 28 and Werner et al. 32 include the Ni-d_{x²-y²} and Ni-d_{3z²-r²} orbitals. This type of two-orbital model aims to study the possibility of a high-spin S = 1 doublon when the system is hole-doped. Wu et al. 21 and Nomura et al. 22 study three-orbital models. Wu et al. 21 include the Ni-d_{x²-y²}, R-d_{xy}, and R-d_{3z²-r²} orbitals; this model is further used to calculate the spin susceptibility and to estimate the superconducting transition temperature. Nomura et al. 22 compare two choices of orbitals: one is the Ni-d_{x²-y²} orbital, the R-d_{3z²-r²} orbital, and interstitial-s; the other is the Ni-d_{x²-y²} orbital, the R-d_{3z²-r²} orbital, and R-d_{xy}. That model is used to study screening effects on the Hubbard U of the Ni-d_{x²-y²} orbital. Gao et al. 23 construct a general four-orbital model, B_{1g}@1a ⊕ A_{1g}@1b, which consists of two Ni-d orbitals and two R-d orbitals; the model is used to study the topological properties of the Fermi surface. Jiang et al. 29 use a tight-binding model that consists of five Ni-d orbitals and five R-d orbitals to comprehensively study the hybridization effects between Ni-d and R-d orbitals; Jiang et al. also highlight the importance of the Nd-f orbitals in the electronic structure of NdNiO₂. Botana et al. 16, Lechermann 26, and Karp et al. 58 consider more orbitals (including Nd-d, Ni-d, and O-p states) in the modeling of NdNiO₂, with the interaction applied to the Ni-d orbitals, and make a comparison to infinite-layer cuprates. Botana et al. 16 extract longer-range hopping parameters and the e_g energy splitting. Lechermann 26 studies hybridization and doping effects. Karp et al. 58 calculate the phase diagram and estimate the magnetic transition temperature.

Conclusion. In summary, we use first-principles calculations to study the electronic structure of the parent superconducting material RNiO₂ (R = Nd, La). We find that the hybridization between the Ni-d_{x²-y²} orbital and itinerant electrons is substantially stronger than previously thought.
The dominant hybridization comes from an interstitial-s orbital due to a large inter-cell hopping, while the hybridization with the rare-earth-d orbitals is one order of magnitude weaker. Weak-to-moderate correlation effects on Ni cause electrons to transfer from the Ni-d_{x²-y²} orbital to the hybridization states, which provides more itinerant electrons in the rare-earth spacer layer to couple to the correlated Ni-d orbital. Further increasing the correlation strength leads to a reverse charge transfer, antiferromagnetism on the Ni sites, and eventually a metal-insulator transition. In the experimentally observed paramagnetic metallic state of RNiO₂, we find that the strong coupling between Ni-d_{x²-y²} and itinerant electrons screens the Ni local moment, as in Kondo systems. We also find that the hybridization increases the critical U_Ni that is needed to induce long-range magnetic ordering. Our work shows that the electronic structure of RNiO₂ is fundamentally different from that of CaCuO₂, which implies that the observed superconductivity in infinite-layer nickelates does not emerge from a doped Mott insulator as in cuprates.

[Figure 5 caption: Inter-cell hopping from the Ni-d_{x²-y²} orbital to the interstitial-s orbital (a), to the Nd-d_{xy} orbital (b), and to the Nd-d_{3z²-r²} orbital (c). All the hoppings shown are between second nearest neighbors. Brown and green arrows represent positive and negative hoppings, respectively. Note this is a top view; the Nd spacer layer and the NiO₂ layer are not in the same plane.]

Methods

We perform first-principles calculations using density functional theory (DFT) 34,35, maximally localized Wannier functions (MLWF) to construct the noninteracting tight-binding models 59, and dynamical mean-field theory (DMFT) 36,37 to solve the interacting models.

DFT calculations. The DFT method is implemented in the Vienna ab initio simulation package (VASP) code 60 with the projector augmented wave (PAW) method 61. The Perdew-Burke-Ernzerhof (PBE) 62 functional is used as the exchange-correlation functional in the DFT calculations. The Nd-4f orbitals are treated as core states in the pseudopotential. We use an energy cutoff of 600 eV and sample the Brillouin zone using a Γ-centered k-mesh of 16 × 16 × 16. The crystal structure is fully relaxed with an energy convergence criterion of 10⁻⁶ eV, a force convergence criterion of 0.01 eV/Å, and a stress convergence criterion of 0.1 kbar. The DFT-optimized crystal structures are in excellent agreement with the experimental structures, as shown in our Supplementary Note 1. To describe the checkerboard antiferromagnetic ordering, we expand the cell to a √2 × √2 × 1 supercell; the corresponding Brillouin zone is sampled using a Γ-centered k-mesh of 12 × 12 × 16.

MLWF calculations. We use maximally localized Wannier functions 59, as implemented in the Wannier90 code 63, to fit the DFT-calculated band structure and build an ab initio tight-binding model that includes onsite energies and hopping parameters for each Wannier function. We use two sets of Wannier functions for the fitting. One set uses 17 Wannier functions to exactly reproduce the band structure of the entire transition-metal and oxygen pd manifold, as well as the unoccupied states a few eV above the Fermi level. The other set uses 4 Wannier functions to reproduce the band structure close to the Fermi level. The second tight-binding Hamiltonian is used to study correlation effects when onsite interactions are included on the Ni-d_{x²-y²} orbital.
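To illustrate how such a Wannier-based tight-binding model is used downstream, the sketch below Fourier-transforms a real-space Hamiltonian H₀(R) into H₀(k) and diagonalizes it at a single k-point. It assumes H₀(R) is held as a dict from integer lattice vectors to complex matrices (for example, parsed from a Wannier90 seedname_hr.dat file); the toy two-orbital Hamiltonian is our own illustration, not the 17- or 4-orbital model of this paper.

```python
import numpy as np

def h_of_k(h0_r, kpt):
    """Fourier transform H0(k) = sum_R exp(i 2*pi k.R) H0(R).

    h0_r : dict mapping integer lattice vectors R (tuples) to complex matrices
    kpt  : k-point in reduced (fractional) coordinates, e.g. (0.5, 0.0, 0.0)
    """
    num_wann = next(iter(h0_r.values())).shape[0]
    hk = np.zeros((num_wann, num_wann), dtype=complex)
    for R, block in h0_r.items():
        hk += np.exp(2j * np.pi * np.dot(kpt, R)) * block
    return hk

# Toy 2-orbital example: onsite energies plus nearest-neighbor hopping along x.
h0_r = {
    (0, 0, 0): np.array([[0.0, 0.1], [0.1, 1.5]], dtype=complex),
    (1, 0, 0): np.array([[-0.4, 0.0], [0.0, -0.1]], dtype=complex),
    (-1, 0, 0): np.array([[-0.4, 0.0], [0.0, -0.1]], dtype=complex),
}
bands = np.linalg.eigvalsh(h_of_k(h0_r, (0.25, 0.0, 0.0)))
print(bands)  # band energies at this k-point
```

Because H₀(R) contains both R and -R blocks related by Hermitian conjugation, H₀(k) is Hermitian and its eigenvalues give the band structure that the Wannier fit reproduces.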
DMFT calculations. We use the DMFT method to solve the 4-orbital interacting model, which includes a correlated Ni-d_{x²-y²} orbital and three noninteracting orbitals (interstitial-s, Nd-d_{xy}, and Nd-d_{3z²-r²}). We also cross-check the results using a 17-orbital interacting model that includes five Ni-d, five Nd-d, six O-p, and one interstitial-s orbital (the results of the 17-orbital model are shown in Supplementary Note 4 of the Supplementary Information). DMFT maps the interacting lattice Hamiltonian onto an auxiliary impurity problem, which is solved using the continuous-time quantum Monte Carlo algorithm based on hybridization expansion 64,65. The impurity solver is developed by K. Haule 66. For each DMFT iteration, a total of 1 billion Monte Carlo samples are collected to converge the impurity Green function and self-energy. We set the temperature to 116 K. We check all the key results at a lower temperature of 58 K, and no significant difference is found. The interaction strength U_Ni is treated as a parameter. We calculate both paramagnetic and magnetically ordered states. For magnetically ordered states, we consider ferromagnetic ordering and checkerboard antiferromagnetic ordering. For the checkerboard antiferromagnetic calculation, we double the cell, and the noninteracting Hamiltonian is 8 × 8. We formally introduce two effective impurity models and use the symmetry that electrons on one impurity site are equivalent to the electrons on the other with opposite spins; the DMFT self-consistency condition involves the self-energies of both spins. To obtain the spectral functions, the imaginary-axis self-energy is continued to the real axis using the maximum entropy method 67. Then the real-axis local Green function is calculated using the Dyson equation, and the spectral function is obtained from the following equation:

A_m(ω) = -(1/π) Im G_m^loc(ω) = -(1/π) Im [ Σ_k ((ω + μ)1 - H₀(k) - Σ(ω) + V_dc)⁻¹ ]_mm,

where m is the label of a Wannier function, 1 is an identity matrix, H₀(k) is the Fourier transform of the Wannier-based Hamiltonian H₀(R), Σ(ω) is the self-energy, understood as a diagonal matrix with nonzero entries only on the correlated orbitals, and μ is the chemical potential. V_dc is the fully localized limit (FLL) double counting potential, defined as 68:

V_dc = U(N_d - 1/2) - (J/2)(N_d - 1),

where N_d is the d occupancy of a correlated site. Here the Hund's J term vanishes because we have a single correlated orbital, Ni-d_{x²-y²}, in the model. A 40 × 40 × 40 k-point mesh is used to converge the spectral function. We note that the double counting correction affects the energy separation between the Ni-d_{x²-y²} and Nd-d/interstitial-s orbitals. However, because the charge transfer is small (around 0.1 e per Ni), the effects of the double counting correction are weak in the 4-orbital model, compared with those in the p-d model, in which the double counting correction becomes much more important 69. That is because O-p states are included in the p-d model: the double counting correction affects the p-d energy separation and thus the charge transfer between metal-d and oxygen-p orbitals, which can be as large as 1 e per metal atom for late transition-metal oxides such as rare-earth nickelates 69.
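The two formulas above translate directly into a short numerical routine. The sketch below evaluates A_m(ω) for a Wannier Hamiltonian with a diagonal self-energy on the correlated orbital; the broadening η, the mesh size, and the self-energy callable are illustrative placeholders, not the production settings quoted in the Methods.

```python
import numpy as np

def v_dc_fll(u, n_d, j=0.0):
    """Fully localized limit double counting: U*(N_d - 1/2) - (J/2)*(N_d - 1)."""
    return u * (n_d - 0.5) - 0.5 * j * (n_d - 1.0)

def spectral_function(h0_r, sigma, mu, v_dc, omegas, corr=0, nk=8, eta=0.05):
    """A_m(w) = -(1/pi) Im { sum_k [((w + i*eta + mu)*1 - H0(k)
                                     - Sigma(w) + Vdc)^(-1)] }_mm,
    with the k-sum normalized over the mesh.

    h0_r  : dict of integer lattice vectors -> complex matrices
    sigma : callable returning the complex self-energy on the correlated orbital
    """
    num_wann = next(iter(h0_r.values())).shape[0]
    grid = 2.0 * np.pi * np.arange(nk) / nk
    spectra = np.zeros((len(omegas), num_wann))
    for iw, w in enumerate(omegas):
        # Diagonal self-energy matrix: nonzero only on the correlated orbital.
        sig = np.zeros((num_wann, num_wann), dtype=complex)
        sig[corr, corr] = sigma(w) - v_dc  # so that -Sigma + Vdc enters the inverse
        g_loc = np.zeros((num_wann, num_wann), dtype=complex)
        for kx in grid:
            for ky in grid:
                for kz in grid:
                    hk = sum(np.exp(1j * np.dot((kx, ky, kz), R)) * blk
                             for R, blk in h0_r.items())
                    g_loc += np.linalg.inv(
                        (w + 1j * eta + mu) * np.eye(num_wann) - hk - sig)
        spectra[iw] = -np.imag(np.diag(g_loc)) / (np.pi * nk**3)
    return spectra
```

In a real calculation the self-energy would come from the analytically continued impurity solver output, and the k-mesh would be the 40 × 40 × 40 grid mentioned above.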
Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request.

Code availability. The electronic structure calculations were performed using the proprietary code VASP 60
Analysis of String Matching Compression Algorithms: The rate of improvement in microprocessor speed far exceeds the rate of improvement in DRAM memory. This widening processor-memory performance gap is the primary obstacle to improved computer system performance. As a result, the size of main memory has increased gradually. To utilize main memory's resources effectively, data compression can be applied: through compression, data can be reduced in size by eliminating redundant elements. This study therefore incorporates compression into main memory in order to fully utilize its resources and improve the performance of data access. The project evaluates the performance of compression algorithms with the aim of increasing the performance of memory access. The compression algorithms considered are string-matching based, namely LZW and LZSS. Through simulation, the effectiveness and compressibility of the algorithms were compared. The performance of the algorithms was evaluated in terms of compression time, decompression time and compressed size. The simulation results show that LZSS is a more efficient compression algorithm than LZW.

INTRODUCTION Data compression is a technique of encoding information using fewer bits than an unencoded representation would use, through a specific encoding or compression algorithm. All forms of data, including text, numerical data and images, contain redundant elements. Through compression, the data can be reduced in size by eliminating these redundant elements [21]. Data compression techniques operate on a simple model: an input stream, generated from a data source, is fed into a compressor, which codes and compresses the data. To regenerate the original data from the compressed data, a decoder is used. The decoder applies the reverse of the algorithm used by the compressor and has some prior knowledge of how the data was compressed [4]. Data compression is divided into two major categories: lossless compression and lossy compression. In lossless compression, no information is lost and the decompressed data are identical to the original uncompressed data, whereas in lossy compression the decompressed data may only be an acceptable approximation of the original uncompressed data [9]. Lossless compression techniques are grouped into two families: statistical-analysis-based compression and string-matching compression algorithms. This project focuses on analyzing string-matching compression algorithms. The increase in processor clock speed has caused the gap between processor, main memory and disk to widen. As a result, the sizes of cache and main memory have increased. This performance gap affects the reliability and performance of memory resource access overall. Thus, this study incorporates compression into main memory in order to fully utilize its resources.

RELATED WORKS All forms of data contain redundant elements, and through compression these redundant elements can be eliminated. The first algorithm for compressing data was introduced by Claude Shannon [3]. At present there are many techniques available for compressing data, each tailored to the needs of a particular application. Compression techniques can be classified into two categories, lossless and lossy; the classification is based on the relationship between inputs and outputs after a compression-expansion cycle is complete. In lossless compression, the output exactly matches the input after a compression-expansion cycle.
Lossless techniques are mainly applicable to data files where the loss of a single bit can render the file useless. In contrast, a lossy compression technique does not yield an exact copy of the input after compression. Navarro et al. (1999) presented compressed pattern matching algorithms for Lempel-Ziv-Welch (LZW) compression which run faster than decompression followed by a search [1]. However, in terms of CPU time the algorithms are slow in comparison with pattern matching in uncompressed text; this means that LZW compression did not speed up pattern matching. Matias et al. (1999) resolved the issue of online optimal parsing by showing that, for all dictionary construction schemes with the prefix property, greedy parsing with a single-step look-ahead is optimal on all input strings; this scheme is called flexible parsing (FP) [2]. Kniesser et al. (2003) proposed a method for compressing scan test patterns using LZW that does not require the scan chain to have a particular architecture or layout [3]. This method leverages the large number of don't-care bits in test vectors to improve the compression ratio significantly. An efficient hardware decompression architecture is also presented, using existing on-chip embedded memories.

LZSS algorithm: The LZSS compression algorithm makes use of two buffers, a dictionary buffer and a look-ahead buffer. The dictionary buffer contains the last N symbols of the source that have been processed, while the look-ahead buffer contains the next symbols to be processed. The algorithm attempts to match two or more symbols from the beginning of the look-ahead buffer to a string in the dictionary buffer; if no match is found, the first symbol in the look-ahead buffer is output as a 9-bit symbol and is also shifted into the dictionary [12]. If a match is found, the algorithm continues to scan for the longest match. The intention is that the dictionary reference should be shorter than the string it replaces. The LZSS algorithm compresses series of strings by converting each string into a dictionary offset and a string length. For example, if the string mnop appeared in the dictionary at position 1234, it might be encoded as {offset = 1234, length = 4}. The LZSS dictionary is not an external dictionary that lists all known symbol strings. The larger N is, the longer it takes to search the whole dictionary for a match and the more bits are required to store the offset into the dictionary. Typically, dictionaries contain a number of symbols that can be represented by a whole power of 2. A 432-symbol dictionary would already require 9 bits to represent all possible offsets; since 9 bits must be used in any case, the dictionary might as well be extended to 512 entries. Since dictionaries are sliding windows, once the (N + 1)th symbol is processed and added to the dictionary, the first symbol is removed. Additional new symbols cause an equal number of the oldest symbols to slide out. In the example above, after encoding mnop as {offset = 1234, length = 4}, the sliding window would shift over 4 characters and the first 4 symbols (offsets 0 ... 3) would slide off the front of the sliding window; m, n, o and p would then enter the dictionary at positions (N - 4), (N - 3), (N - 2) and (N - 1) [21].
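To make the sliding-window scheme just described concrete, here is a minimal Python sketch of an LZSS-style encoder and decoder. It emits literals and (offset, length) pairs as Python tuples rather than packed 9-bit codes, and the window and match-length limits are illustrative choices, not the dictionary sizes evaluated in the simulations below.

```python
def lzss_encode(data: bytes, window: int = 4096, min_len: int = 3, max_len: int = 18):
    """Encode data as a list of tokens: ('lit', byte) or ('ref', offset, length).

    A reference points back 'offset' bytes from the current position and
    copies 'length' bytes -- the textbook sliding-window scheme.
    """
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        # Scan the dictionary (the last 'window' bytes) for the longest match.
        for j in range(start, i):
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_len:
            tokens.append(('ref', best_off, best_len))
            i += best_len
        else:
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

def lzss_decode(tokens):
    """Reverse the encoding by replaying literals and back-references."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):
                out.append(out[-off])  # overlapping copies work byte by byte
    return bytes(out)

assert lzss_decode(lzss_encode(b"abcabcabcabd")) == b"abcabcabcabd"
```

A production implementation would pack the flag bit, offset and length into a bit stream and index the window (e.g. with a hash chain) instead of scanning it linearly, which is exactly why larger dictionaries cost more compression time, as the results below show.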
LZW algorithm: The LZW algorithm maintains a dictionary of strings and their codes for both the compression and decompression processes. When any of the strings in the dictionary appears in the input to the compressor, the code for that string is substituted; the decompressor, when it reads such a code, replaces it with the corresponding string from the dictionary. As compression proceeds, new strings are added to the dictionary. The dictionary is represented as a set of trees, with each tree having a root corresponding to a character in the alphabet. In the default case, there are 256 trees, one for each possible 8-bit character [21]. At any time, the dictionary contains all one-character strings plus some multiple-character strings. Because of the mechanism by which strings are added, for any multiple-character string in the dictionary all of its leading substrings are also in the dictionary. For example, if the string PRADA is in the dictionary with a unique code word, then the strings PRA and PRAD are also in the dictionary, each with its own unique code word. The algorithm always matches the input to the longest matching string in the dictionary. The transmitter partitions the input into strings that are in the dictionary and converts each string into its corresponding code word; since all one-character strings are always in the dictionary, all of the input can be partitioned into dictionary strings. The receiver accepts a stream of code words and converts each code word into its corresponding character string [4].
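A minimal Python sketch of the LZW scheme just described follows. It emits integer codes rather than a packed bit stream, and the dictionary is a plain dict instead of the tree structure mentioned above; these simplifications shorten the illustration without changing the algorithm.

```python
def lzw_encode(data: bytes):
    """LZW: grow a dictionary of strings while emitting the code of the
    longest dictionary entry that matches the input at each step."""
    dictionary = {bytes([i]): i for i in range(256)}  # all 1-byte strings
    next_code = 256
    current = b""
    codes = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            codes.append(dictionary[current])
            dictionary[candidate] = next_code   # new string = old string + new letter
            next_code += 1
            current = bytes([byte])
    if current:
        codes.append(dictionary[current])
    return codes

def lzw_decode(codes):
    """Rebuild the dictionary on the fly, handling the classic corner case
    where a code refers to the entry currently being defined."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = dictionary[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        entry = dictionary.get(code, prev + prev[:1])  # cScSc corner case
        out.extend(entry)
        dictionary[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return bytes(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decode(lzw_encode(data)) == data
```

Note how every emitted code is an existing dictionary entry, so the decoder never needs to search: it simply replays the dictionary construction, which is the asymmetry behind the decompression-time results discussed below.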
MATERIALS AND METHODS An important criterion in the performance evaluation of a compression algorithm is the selection of test data. To test the performance of the algorithms, a standard test suite, the Calgary Corpus, is used. The Calgary Corpus is the most referenced corpus in the data compression field and is the de facto standard for lossless compression evaluation. It consists of text files, images and other archive files; details of the files in the Calgary Corpus are given in Table 1.

RESULT AND DISCUSSION This section analyzes and discusses the results obtained for the performance metrics from the simulation using the standard test files of the Calgary Corpus. The efficiency and compressibility of the LZSS and LZW algorithms are evaluated in terms of compression time, compressed size (%) and decompression time. The performance of the algorithms was analyzed under different dictionary sizes; for this project, the evaluated dictionary sizes are 2K, 4K and 6K. Figure 1 shows the compression times of the test files when the dictionary size is set to 2K. LZSS has a lower compression time than LZW, except in the case of the pic file, for which LZW gives better results. The reason for this exception is the uncommon content of this file, which contains a lot of nulls. When the length component of LZSS is just 5 bits, LZSS can emit a pointer covering no more than 32 symbols. LZW, however, can build a pointer to an arbitrarily long string and will look for the longest match in the dictionary: each LZW entry consists of an old string plus a new letter, so it can accumulate much longer strings. Consider LZW with a 9-bit pointer on a file that contains the 256 possible ASCII characters in sequence: this puts the pairs (0, 1), (1, 2), ..., (254, 255) into the LZW dictionary, according to the LZW algorithm. If a 0 is added after the last 255, the pair (255, 0) is also added, and these entries use up the entire dictionary. The dictionary then has 512 entries: the first 256 entries are the single characters, while the other 256 entries are the pairs. LZW can handle these pairs better because LZSS spends one bit on the character/pointer flag, several bits on the pointer and several bits on the length component. Usually LZSS does not replace a pair of characters by a pointer because of the high price; even if LZSS replaces the pair, the gain is small, while LZW saves more bits when pointing to just a pair of characters. Thus, the LZW algorithm compresses the image file faster than the LZSS algorithm [22]. Figures 2 and 3 show the results when the dictionary size is increased to 4K and 6K: LZSS still compresses faster than LZW except for the pic file. Figure 4 shows the compression time under the three dictionary sizes for one file from the Calgary Corpus, Book 1. It can be seen that as the dictionary size increases from 2K to 6K, the time both algorithms take to compress a file also increases: during compression, searching for a match in a large dictionary requires a large amount of time, which increases the compression time. The graph also shows that the LZSS compression algorithm requires less time to compress a file than the LZW algorithm. Figure 5 shows the compressed size achieved with a dictionary size of 2K: the LZSS algorithm produces a higher percentage compressibility than LZW, except in the case of the pic file. Figures 6 and 7 show the results when the dictionary size is increased to 4K and 6K; LZSS again achieves a higher compressed-size percentage than LZW except for the pic file. Figure 8 shows the compressed size under the three dictionary sizes for Book 1. As the dictionary size is increased from 2K to 6K, the percentage compressibility achieved also increases: a larger dictionary can store more characters or strings, so during compression the compressor can find more, and longer, matches. The compressibility achieved through LZSS is higher than that of LZW because, in LZSS, a file can be built containing single characters or (offset, length) pairs as described previously; such a file will always be compressed better by LZSS than by LZW. The compressibility depends on the pointer size of LZSS: LZW always emits a pointer, while LZSS uses pointers only in the appropriate cases. If LZSS creates fewer pointers, it indicates that LZSS has chosen not to emit a pointer where doing so is less adequate; in contrast, LZW emits a pointer because this is its usual behavior, whether or not that pointer is more adequate [22]. Figure 9 shows the decompression times for both the LZSS and LZW algorithms. The graph shows that the LZSS algorithm requires slightly less time to decompress a file than the LZW algorithm, except for the pic file, where LZW decompresses faster. The time needed to decompress a file is the same for all dictionary sizes. This is because decompression is the reverse of the compression process in a string matching algorithm: during decompression, the decompressor does not have to search through the dictionary; as each code is encountered, it is translated into its corresponding character string to produce the output.

CONCLUSION The objectives of this project have been achieved through the simulation study. The simulation results show that, overall, the LZSS compression algorithm is more efficient than the LZW compression algorithm.
LZSS achieved a higher percentage of compressibility and higher compression and decompression speeds: the LZSS algorithm is able to compress and decompress a file faster than the LZW algorithm. However, when compressing the pic file, the LZW algorithm gives better results. The reason for this is the uncommon content of this file, which contains a lot of nulls. The LZW algorithm can build a pointer to an arbitrarily long string, while LZSS only has pointers to strings of bounded length. If there is a long sequence of the same character, LZW can compress it into a constant few bytes, assuming the length component is long enough to hold the number of characters; LZSS, however, has to construct the pointers step by step, producing pointers to only two or three bytes at a time. The compression time increases as the dictionary size increases, because searching for a match in a large dictionary requires a large amount of time. The compressed size (%) also increases as the dictionary size increases: the percentage of compression achieved is higher with a larger dictionary. Data compression provides increased network throughput without an increase in transmission channel bandwidth, and compressing data allows a user to keep more information in system memory. The importance of compression becomes much more prominent when downloading files, as available network bandwidth has not kept pace with the size of applications, and when physically transporting files, as the storage capacity of magnetic storage devices, such as floppy disks, has likewise not kept pace with the size of applications. One future direction for this project is to enhance the LZSS compression algorithm in order to achieve high compressibility for image compression. The results obtained from this study show that the LZSS algorithm compresses better except for images; the LZSS algorithm could therefore be studied in terms of length-component size and pointer variation for better image compression.
Laccase-catalysed coloration of wool and nylon The potential for laccase (EC 1.10.3.2) to be used within the area of textile coloration, specifically for the generation of decorative surface pattern design, remains relatively unexplored. The current study presents a novel process for the coloration of wool and nylon 6,6 fibres via laccase oxidation of aromatic compounds as an alternative to conventional dyeing methods. Emphasis was placed on producing a diverse colour palette, which was achieved through the investigation of three different aromatic compounds as laccase substrates: 1,4-dihydroxybenzene, 2,7-dihydroxynaphthalene and 2,5-diaminobenzenesulphonic acid. Reaction processing parameters such as buffer systems and pH values, laccase and aromatic compound concentrations, and reaction times were investigated, all in the absence of additional chemical auxiliaries. Enzymatically dyed fabrics were tested against commercial standards, resulting in reasonably good colour fastness to washing. To demonstrate the coloration and design potential of laccase catalysation of aromatic compounds, specially constructed fabrics using a combination of undyed wool, nylon and polyester yarns were dyed using the one-step laccase-catalysed coloration process. The use of different fibre types and weave structures enabled simple colour variations to be produced; shadow, reserve and contrasting effects were achieved with the laccase-catalysed dyeing process developed. Important advantages over conventional processing methods include the use of simpler and milder processing conditions that eliminate additional chemical use and reduce energy consumption.

Introduction Coloration is an important process in textile finishing, commonly used to enhance the appearance and attractiveness of a cloth. Conventional dyeing generally involves the use of several different chemicals and dyeing auxiliaries, in addition to elevated temperatures to assist the dyeing process. The coloration of wool and nylon can be achieved with several dye classes, the most important of which are acid, mordant and premetallised dyes, all of which are applied under acidic conditions at high temperatures, generally at the boil. The adoption of an alternative coloration approach using oxidative enzymes such as laccase could potentially offer processes with improved environmental sustainability by eliminating the inherent drawbacks associated with chemical processes [1,2]. Laccases (EC 1.10.3.2), belonging to the class of enzymes called oxidoreductases, can oxidise an extensive range of simple aromatic compounds such as diamines, aminophenols, aminonaphthols and phenols, with or without a mediator, transforming them into coloured polymeric products via oxidative coupling reactions [3]. The reaction mechanism of laccase catalysation is a one-electron oxidation of aromatic compounds to form free radicals while reducing molecular oxygen to water (Figure 1). These free radicals are very reactive and may go on to react further with the initial aromatic compound itself and polymerise via a non-enzymatic pathway to form coloured polymeric products [3,4]. These coloured oxidation products are capable of being adsorbed onto, or reacting with, numerous textile fibres for fibre coloration [5]. The use of laccase to synthesise new dyes, or new enzyme-based synthesis procedures for known textile dyes, has also been reported [6][7][8].
The versatility and capability of laccase in catalysing the oxidation of a broad spectrum of substrates has led to a number of studies exploring the concept of laccase-assisted textile coloration of wool fibres [9][10][11][12][13][14][15][16][17]. The potential to dye other protein-based fibres such as hair has also been investigated [18,19]. Although an extensive list of laccase substrates is disclosed in patents and published papers, only a few substrates have been studied comprehensively. Current knowledge of the hues achievable for wool coloration using laccase catalysis is limited to a few mostly earthy tones. Previous studies have reported that variations of browns [9,10,13,[15][16][17], greys [11], oranges [17], yellows [12] and purples [16] are achievable. Only a few studies have fully evaluated the colour fastness properties of fabric dyed by laccase-catalysed coloration. Although studies have demonstrated that laccase-catalysed polymerisation of aromatic compounds can give rise to coloured products useful for dyeing wool, the colour diversity reported is thus far too limited for the approach to be considered a serious alternative to conventional acid and reactive dyes, which offer a vast selection of hues ranging from bright to deep shades and, in some cases, excellent colour fastness. A broader survey of studies exploring laccase catalysis suggests greater colour diversity may be achievable with the use of numerous other known laccase substrates. Suparno et al. [20] reported that laccase oxidation of different dihydroxynaphthalenes resulted in purple, brown and green products. Polak and Jarosz-Wilkolazka [21], studying the transformation of benzene and naphthalene derivatives into dyes using fungal biomass (whole-cell biocatalysts), described an array of coloured products, including oranges, reds, yellows, green/blues and purples. A similar range of coloured products was obtained by Sousa et al. [22], who explored the use of p-substituted primary aromatic amines for the synthesis of bio-colorants with laccase oxidation. It is well known that enzymes have a characteristic optimal pH at which their activity is at a maximum; this can be acidic, neutral or alkaline, and the optimal range often spans more than one pH unit. The pH-activity relationship of any enzyme depends on the acid-base behaviour of both enzyme and substrate [23]. Shifts in pH can lead to changes in the shape of the enzyme and affect electrostatic interactions within it, leading to a change in charge of the amino acids at the active site, with a possible subsequent impact on the enzyme's activity towards the substrate. Studies concerned with the characteristics of laccase have reported that the surface charge of laccase can affect its catalytic activity towards its substrates, and that the optimal pH of laccase changes depending on the nature of both the enzyme and the substrate [24][25][26]. Past studies exploring the coloration potential of laccase have predominantly favoured the use of an acetate buffer [10,11,13,17]; the use of other buffer systems has generally been overlooked. The aim of this research was to develop a laccase-catalysed in-situ dyeing process for wool and nylon fibres as an alternative to conventional methods.
Emphasis was placed on producing a diverse colour palette through the exploration of different aromatic compounds as laccase substrates, in addition to reaction processing parameters (buffer systems and pH values, laccase and aromatic compound concentrations, and reaction times), all in the absence of additional chemical auxiliaries. Furthermore, enzymatically dyed fabrics were tested against commercial standards to determine colour fastness properties. To investigate the use of enzymes for creative textile design, the results obtained from this investigation were further developed and applied to specially constructed jacquard weaves containing different fibre types and woven structures to generate decorative textile surface patterns. To date, no creative applications for laccase have been realised [27].

Undyed plain-woven 100% wool fabric with a dry weight of 189 g/m², 50 ends per inch, 45 picks per inch, and a mean fibre diameter of 23 μm was supplied by Drummond Parkland (Huddersfield, UK). Undyed knitted single-jersey 100% nylon 6,6 fabric with a dry weight of 159 g/m² was purchased from Ray Musson Knitting (Leicester, UK). Woven jacquard fabric samples were produced by Camira Fabrics (Mirfield, UK), constructed with 1200 warp ends using 100% undyed 2/20 Nm wool with 45 ends per inch and undyed weft yarns of nylon 6,6. Camira Fabrics also produced a range of weaves using wool of the same structure in the warp and undyed weft yarns of cotton, nylon 6,6, polyester (PET) or polyethylene.

Fabric preparation Prior to any treatment, the wool fabric was scoured in a solution containing 1.6 g/l of sodium carbonate and 2 g/l UPL at 60°C for 30 min, and the nylon fabric was scoured in a solution containing 2 g/l UPL at 70°C for 30 min. Scouring for both fabric types took place in a Datacolor Ahiba Nuance IR dye machine at an agitation of 40 rpm using a liquor-to-goods ratio of 20:1. Scouring was followed by a hot and cold tap water rinse before air-drying at room temperature. To study the effect of the amino end groups on dyeability, amino end groups were removed from nylon samples using the Van Slyke method, as illustrated in Scheme 1, according to a procedure used by Smith [28]. An aqueous solution of nitrous acid was made by dissolving 0.5 g of sodium nitrite in 100 ml of deionised water. The solution was acidified with 0.3 ml of acetic acid to form nitrous acid and adjusted to pH 4.0 using sodium acetate trihydrate. Pre-scoured nylon fabric (10 g) was then added to the solution and treated at 100°C for 60 min. Treatment took place in a Datacolor Ahiba Nuance IR dye machine at an agitation of 40 rpm using a liquor-to-goods ratio of 10:1. After the treatment, the nylon fabric was rinsed in tap water and then left to air-dry at room temperature.

[Figure 1 caption: Laccase-catalysed oxidation of substrate through a mediator [4]]

Enzymatic dyeing (one-step in-situ dyeing process) Fabric samples (1 g) were thoroughly wetted out in deionised water before being placed in a bath containing 1% owf laccase and 1% omf or more of a chosen aromatic compound. A liquor-to-goods ratio of 30:1 was used. To cover a range of pH values from pH 2.0 to 11.0, a range of different buffer systems was selected for study (Table 2). All buffer solutions were prepared at room temperature at a concentration of 0.1 M. Enzymatic dyeing was performed in a Datacolor Ahiba Nuance IR dye machine; the temperature was raised at 2.5°C/min and maintained at 50°C for 1, 2, 4 or 8 h with an agitation speed of 40 rpm.
Control samples for each compound, containing either no laccase or no aromatic compound, were also processed for comparison. Once dyeing was complete, all samples were washed using a hot and cold water rinse and then left to air-dry at room temperature. A post-soap wash was introduced after the dyeing step to remove any unfixed polymer residues and deactivate any laccase present on the surface of the fabric. Selected enzymatically dyed samples were washed in a solution containing 2 g/l UPL at 40°C for 15 min with a liquor-to-goods ratio of 20:1 in a Datacolor Ahiba Nuance IR dye machine. Samples were then removed, washed using a hot and cold tap water rinse, and left to air-dry at room temperature. Fabric samples were then tested for initial colour permanence.

Colour measurement of dyed fabrics A Datacolor SF600 Plus CT reflectance spectrophotometer with an aperture diameter of 6.6 mm was used to determine the colour values, colour strength and differences between enzymatically dyed fabric samples, represented by the CIE L*a*b* colour space system. Each sample was folded into four and measured four times. All values were measured and calculated under controlled conditions using Color-Tools QC software with illuminant and observer conditions of D65 and 10°, respectively. An average colour measurement was calculated from the data collected for each sample. Colour differences between the control sample and enzymatically dyed fabric samples are represented as ΔE and were calculated using Eqn 1:

ΔE = √((ΔL*)² + (Δa*)² + (Δb*)²)   (1)

where ΔL*, Δa* and Δb* represent the differences between the corresponding units of each fabric sample.

Colour analysis of enzyme treatment liquor To gain insights into the coloured products generated through laccase catalysis, coloration reactions using laccase and aromatic compounds were performed. Solutions were prepared by adding 0.01 g of laccase and 0.01 g of aromatic compound to 30 ml of 0.1 M buffer solution.

Results and Discussion To investigate the use of laccase to catalyse the in-situ dyeing of wool and nylon 6,6 fabrics, a range of nine commercially available aromatic compounds was chosen as laccase substrates, principally because of their varying chemical structures. Laccase catalysis of each aromatic compound resulted in the synthesis of coloured compounds producing distinct colour shades both in the liquor solution and on the fabric [29]. Coloration of nylon fabric was notably lighter and brighter than on wool fabric. Because of the coloration potential observed with varying chemical structures, three of the aromatic compounds were selected for their results to be presented and investigated further (Table 1), namely 1,4-dihydroxybenzene (1,4DHB), 2,7-dihydroxynaphthalene (2,7DHN) and 2,5-diaminobenzenesulphonic acid (2,5DABS). Wool and nylon fabrics treated with laccase only, in the absence of an aromatic compound, showed no visible colour change after enzymatic treatment for either fibre type. Treatments containing an aromatic compound in the absence of laccase resulted in no, or only subtle, colour changes across both fibre types, with the greatest difference observed on wool treated with compound 2,7DHN (Table 3). Residual liquor solutions in all cases remained clear and colourless after processing, with no visible evidence of colour formation. Both sets of control experiments confirmed that the exclusion of either the aromatic compound or the laccase from the treatment solutions resulted in neither colour formation in the liquor baths nor significant coloration of either fibre type.
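As a small numerical illustration of Eqn 1 above, the snippet below computes ΔE from CIE L*a*b* readings; the sample values are hypothetical and are not measurements from this study.

```python
import math

def delta_e(lab_control, lab_dyed):
    """CIELAB colour difference, Eqn 1: sqrt(dL*^2 + da*^2 + db*^2)."""
    dL, da, db = (c - d for c, d in zip(lab_control, lab_dyed))
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical (L*, a*, b*) readings for a control and a dyed sample.
control = (85.0, 1.2, 4.5)
dyed = (54.2, 10.8, 15.1)
print(f"Delta E = {delta_e(control, dyed):.1f}")
```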
Effect of pH and buffer systems on the activity of laccase towards aromatic compounds and the coloration of fibres The pH value is of critical importance in optimising enzymatic catalysis. Past studies have predominantly favoured the use of an acetate buffer at pH 5.0 [10][11][12][13][14][15][16][17], while the use of alternative buffer systems and pH values has been overlooked. The influence of different buffer systems and pH values on the in-situ coloration characteristics of the three aromatic compounds when used with laccase was therefore investigated. To cover a range of pH values from pH 2.0 to 11.0, a range of different buffer systems was selected for study, as shown in Table 2. Laccase and aromatic compound treatments resulted in a diverse range of characteristic hues and depths of shade on both fibre types. Each aromatic compound gave rise to a distinctive colour range, and the use of different buffer systems resulted in subtle differences in coloration characteristics across both wool and nylon fibres. Compound 1,4DHB in the presence of laccase resulted in two contrasting hues, one on each fibre: a variety of browns on wool and an assortment of pinks on nylon, as shown in Table 4. The use of a citrate buffer produced a more diverse colour range in comparison with the other buffer systems. In all cases the wool fabric samples were darker than the nylon ones, with deeper shades of brown produced on wool at pH values of 6.0 and 7.0, and brighter pinks on nylon at pH values of 5.0 and 6.0. Three distinctive hues (yellow, green and blue) were observed after enzymatic treatment with compound 2,7DHN across both fibre types (Table 5): pH values from 3.0 to 5.0 resulted in a variety of characteristic yellows, whereas the use of pH ≥ 6.0 resulted in profound shifts in hue for both fibre types after enzymatic treatment. A variety of greens was produced at pH 6.0 and pH 8.0-11.0 for wool, and at pH 6.0 for nylon. An assortment of blues was obtained at pH 7.0 for wool, and at pH 7.0-11.0 for nylon. The strongest blues were produced at pH 7.0, with the citrate and acetate buffers producing the brightest blues on both fibre types. The shifts in hue correlated with the colour measurements recorded: on the whole, a* and b* values shifted from positive to negative. The results suggest that treatment conditions between pH 6.0 and 11.0 caused either the laccase, the aromatic compound or the radicals formed through laccase catalysis to react in very different ways, producing highly coloured products capable of generating hues previously unreported on wool and nylon fibres. Interestingly, coloration treatments carried out at pH values from 3.0 to 7.0 resulted in similar hues across both fibre types, whereas pH 8.0-11.0 resulted in contrasting hues on each fibre type, with greens on wool and blues on nylon. The greener hue observed on wool but not on nylon at pH ≥ 8.0 may be caused by residual 2,7DHN left uncatalysed by the enzyme: when 2,7DHN was applied to wool and nylon without laccase (Table 3), a greenish tinge was observed on wool but not on nylon. Treatments with compound 2,5DABS in the presence of laccase resulted in an array of brown, orange and yellow hues, ranging from deep to light shades across both fibre types (Table 6). Subtle variations in hue were observed across the range of buffer systems. The most effective pH ranges for dyeing with compound 2,5DABS were pH 3.0-8.0 for wool and pH 3.0-6.0 for nylon.
Coloration performed under acidic conditions was found to be the most suitable for producing the deepest and/or brightest shades. Wool fabric samples were darker, while nylon samples were lighter and/or brighter. At pH 3.0 using a citrate buffer, a purple colour was observed on both wool and nylon, but not with either the acetate or the hydrochloric acid buffer, which gave orange-brown on wool and yellow-orange on nylon. In general, at pH 4.0-6.0 for all buffer systems, wool was coloured orange-brown, while nylon was coloured yellow-orange at pH 4.0-5.0 and a much paler shade of yellow-orange at pH 6.0. As treatment conditions moved towards neutral and alkaline, the coloration of both fibre types gradually became lighter and/or ineffective, with no uptake of the coloured products synthesised in situ. Therefore, pH > 6.0 was not suitable for the coloration of nylon. It is believed that the coloration created with all compounds on both fibre types was a result of oxidative coupling facilitated by the action of laccase, which is able to abstract hydrogen protons from substituent groups (-NH₂, -OH, -COOH) present in aromatic compounds, as well as from aromatic amino acid residues of wool such as tyrosine. One-electron oxidation of hydroxylated aromatic substrates is accompanied by the reduction of molecular oxygen to water by the transfer of electrons, producing highly reactive free radicals capable of reacting non-enzymatically to create coloured products of different polymeric molecular weights. It is possible that wool is directly involved in the enzymatic polymerisation of colorants through its aromatic amino acid residues (tyrosine). Non-enzymatic reactions may also be involved in covalent bonding between laccase-catalysed radicals and amino groups found in wool and nylon fibres during laccase-catalysed coloration [3]. On the basis of the results obtained from the investigations into buffer systems and pH values, the citrate buffer was chosen for further investigation, primarily because of its wide pH range in comparison with the other buffers. In addition, the citrate buffer offered the possibility of producing a more diverse colour palette, as observed from laccase catalysis of the compounds studied: treatments with 1,4DHB enabled greater coloration diversity on nylon, and treatments with compound 2,7DHN offered the possibility of producing a brighter range of blues to strong greens simply through pH control. Furthermore, purple tones as well as strong shades of brown on wool, and bright yellow-oranges on nylon, were observed with compound 2,5DABS.

Analysis of laccase-catalysed colour in enzyme treatment liquors without fabrics To gain insights into the coloured products generated through laccase catalysis, reactions were performed in the absence of fabric. Laccase catalysis of compound 2,5DABS produced coloured products of very high intensity (Table 7). Ultraviolet-visible (UV-vis) spectral analysis of the coloured products displayed absorption peaks mainly at two wavelengths: a peak around 465 nm was observed which gradually disappeared with decreasing pH, while another peak appeared at around 545 nm with increasing pH (Figure 2). Treatments undertaken at pH 3.0 and 4.0 both produced pink coloured products, the latter more intense than the former, with an absorption value of 0.95.
Although pH 4.0 was more effective at yielding higher quantities of coloured product, stronger coloration on wool resulted from the use of pH 3.0 (Table 6). These results suggest dyeing at pH 3.0 could be considered, as the coloured products formed there may have greater affinity towards both fibre types. The use of pH 5.0 and 6.0 exhibited double peaks around 455-465 nm and 520-540 nm, respectively, strongly suggesting that the coloured products generated may consist of mixtures similar to those generated at pH 7.0-8.0 and pH 3.0-4.0, respectively. This indicated that the use of different pH conditions could effectively produce compounds of different structures, imparting different hues in solution and on fibre. In addition, the coloured solutions resulting from the reactions were observed to be pH-sensitive: altering the solution pH after processing caused the solutions to shift hue. This observation suggests that coloured solutions may be altered post-processing. Zhang et al. [16] found that coloured solutions formed through laccase catalysis of 2,5DABS, with absorption characteristics similar to those observed in this study, could be converted by simply altering the pH conditions, leading to reversible coloured solutions and dyed wool fabrics. Although distinctive coloured solutions, primarily reds, oranges and yellows, resulted from the reactions, this diversity did not carry over to the coloration of wool and nylon fibres; instead, a limited colour range was observed on both fibre types, mainly variations of red-browns to yellows, as shown in Table 6.

Effect of aromatic compound concentration The effect of the ratio of laccase to aromatic compound was investigated to understand how coloration was affected. The laccase concentration was kept constant, while three different concentrations of the aromatic compound were used. Increasing the concentration of compounds 1,4DHB and 2,7DHN increased the depth of shade, similar to conventional dyeing methods, where an increase in dye concentration generates a higher depth of shade. With compound 2,5DABS, however, increasing the concentration produced shifts in hue rather than simply deeper shades (Table 8); these observations with 2,5DABS have not been previously reported. In particular, the use of pH 3.0 displayed two distinctive hues: an increase in compound concentration enabled a shift in hue, resulting in brown and purple coloration on wool. Similarly, the use of a 1:4 ratio of laccase to compound enabled multiple colours to be produced simply by altering the pH conditions: in the case of wool, a purple was produced at pH 3.0, a dark brown at pH 4.0, and a variety of brown-yellows at pH 5.0-7.0. A similar trend was observed with nylon, although the colours produced were lighter and brighter. The change in colour characteristics was reflected in b* values shifting from negative (blue) to positive (yellow), representative of the hue shift observed on both fibres. These results suggest that higher concentrations of 2,5DABS may enhance the polymerisation possibilities, enabling the radicals formed through catalysis to react in different ways. An increase in pH beyond 4.0 resulted in lighter coloration on both fibre types. The hues achieved on wool at a 1:4 laccase-to-2,5DABS ratio were more distinctive and more closely resembled the liquor colours, as illustrated in Table 7.

Effects of treatment time The effect of the duration of laccase-catalysed in-situ dyeing on coloration was investigated.
The results for 1,4DHB (Table 9) show that both wool and nylon displayed an increase in depth of shade with increasing reaction time; samples treated for 8 h with higher concentrations of 1,4DHB therefore gave the deepest shades, with L* values of 21.4 for wool and 54.2 for nylon. The longer treatment times suggested that a deeper colour could be achieved simply by prolonging the contact time between laccase, aromatic compound and fibre. This is in contrast to conventional dyeing methods, in which the depth of shade is proportional to the amount of dye present in the dyebath. However, it is worth noting that increasing the contact time from 4 to 8 h only marginally increased the depth of shade achieved, and therefore cannot be seen as economically beneficial. Increasing the treatment time from 1 to 2 h during the coloration of wool with 2,7DHN resulted in a deeper shade for all ratios; however, further increasing the treatment time from 2 to 8 h resulted in no significant change (Table 10). In comparison, prolonging treatment times from 1 to 8 h with nylon resulted in a gradual increase in the depth of shade for all ratios. In general, a* and b* values remained constant, while L* values continued to decrease over time, with the greatest differences observed at ratio 1:4. As observed in earlier experiments, the coloration of wool and nylon remained consistently greener and bluer, respectively, with the use of pH 8.0. Two pH values were chosen for investigation with compound 2,5DABS; the results for pH 3.0 are shown in Table 11. Increasing the treatment time from 1 to 8 h during the coloration of wool resulted in no significant change at any of the ratios considered. In the case of nylon, a gradual deepening of colour was observed with prolonged treatment times. At pH 4.0, increasing the treatment time from 1 to 4 h during the coloration of wool resulted in subtle shifts towards deeper shades for all ratios, more obvious at ratio 1:1 than at 1:4; however, increasing the treatment time from 4 to 8 h resulted in no significant change (Table 12). Nylon coloration under the same conditions resulted in slight increases in depth of shade over time, with subtle colour shifts from pale yellow-oranges developing into medium shades of orange. Previous studies have reported that higher molecular weight compounds may be generated after longer reaction times [3,30]; this may explain why deeper shades resulted from longer reaction times in some cases.

Colour fastness Dyed samples were tested to ISO standards for colour fastness and staining due to washing. Different wash fastness values were observed for the two fibre types (Table 13). In general, post-washing both fibre types prior to testing had little effect on performance, and in all cases nylon performed better than wool. Visible changes in colour characteristics after testing were observed across all dyed fabrics, with the exception of nylon samples dyed with 1,4DHB. Both fibre types dyed with 1,4DHB resulted in no staining of the adjacent multifibre strips, with all tests obtaining a grade 5. In contrast, profound colour changes were observed in tests of samples dyed with compound 2,5DABS, with wool samples achieving grade 1/2 and nylon achieving grade 3/4. Although a lot of colour was lost to the residual liquor, only subtle staining was observed on the cotton component of the multifibre strip in the wool tests; all other fibre components remained unaffected.
Tests performed with samples dyed with 2,7DHN resulted in greater cross-staining across the multifibre strip; only the polyester and acrylic components remained unaffected. Only wool samples dyed with 1,4DHB matched the change-in-colour requirements and exceeded the standards for minimum staining requirements as stated in specification AW-1: 2016 [31]. To evaluate the transfer of surface dye from the dyed test samples, rub fastness tests were conducted. Laccase dyeing with compounds 1,4DHB and 2,5DABS resulted in excellent rub fastness results, with no staining being observed on either wool or nylon fibres under wet and dry test conditions (Table 13). However, tests conducted on samples dyed by laccase with compound 2,7DHN resulted in colour being transferred onto the cotton rubbing cloth under wet conditions, which was more prevalent on unwashed wool samples. On nylon, staining only occurred on the sample given no post-soaping, suggesting a post-soap wash at 40°C may be necessary to remove residual dye from the surface of the fabric. No staining was observed under dry conditions for this compound. Grades obtained on dyed wool samples were checked against Woolmark Quality Standard specifications, and all compounds met the minimum standards required for colour fastness to rubbing (dry: AW-1: 2016 [31]; and wet: IF-1: 2016 [32]).
Table 11. Colour range achieved on wool and nylon when treated in the presence of laccase at pH 3 using varying concentrations of 2,5DABS and duration of treatment.
In this study, dyed samples were tested to commercial standards for colour fastness to light. Different light fastness levels were recorded for both fibres; however, none of the tested samples met the equivalent of blue wool reference 4. A change in colour characteristics was observed across all dyed wool samples. Compounds 1,4DHB and 2,5DABS resulted in a loss of depth of colour. With compound 2,7DHN, a loss in depth of colour and a change in hue were observed. In contrast, profound fading, similar to a bleaching effect, was observed on all dyed nylon samples. Poor light fastness results suggest that the coloured compounds responsible for coloration may consist of chemical structures which lack stable electron arrangements and are therefore susceptible to photo-oxidation. Furthermore, different levels of fading were observed across both fibre sets, indicating that the coloured polymers may be bound to different functional groups present in each fibre type. This suggests that the chemical structure of the fibres plays an important role in light fastness properties. Grades obtained from colour fastness to light did not meet the minimum standards stated in specification AW-1: 2016 [31]. Most of the previous studies exploring the application of laccase-catalysed coloration have overlooked testing for light fastness. Studies by Sun et al. [12] and Zhang et al. [16] reported that enzymatically dyed fabrics showed a higher level of wash fastness for staining and rub fastness but only moderate light fastness. Enaud et al. [7] applied an azo anthraquinone dye synthesised with laccase on nylon 6, and reported moderate to poor fastness to light and washing. Results obtained from the colour fastness tests suggest the dyeing properties of all three investigated aromatic compounds vary considerably across both fibre types. In general, the dyed samples have very good resistance to rubbing, mixed levels of resistance to washing and staining, and poor light fastness properties.
Post-washing at 40°C had little effect on the colour fastness results obtained. An understanding of the aforementioned variables, especially the molecular structure of the synthesised coloured products, may give rise to further developments in application processes or finishing treatments to improve fastness properties. Currently, the fixation mechanism is not clear, and further investigations are required to understand how the coloured products react with both fibre types.

Effect of the amino groups of fibres on laccase-catalysed coloration
Wool and nylon fibres contain primary amino groups which act as dye sites for acid dyes. Acid dyes are therefore applied under acidic conditions, which facilitate the protonation of the amino end groups, enabling the fibre to acquire a positive charge which attracts the acid dye anions by ionic forces, forming salt linkages. If the amino end groups found on wool and nylon fibres are removed, this may provide evidence of how the coloured products generated through laccase catalysis of aromatic compounds react with each fibre type. Nylon was selected for investigation because of its simpler structure. Like wool, nylon contains amino end groups; however, nylon contains fewer amino groups than wool and has no side chains, making it a better model for study. Amino end groups were removed from nylon using the Van Slyke method (Scheme 1), forming a deaminated nylon. Control nylon samples were also processed for comparison. The results are presented in Table 14. Enzymatic dyeing with compounds 1,4DHB and 2,5DABS resulted in only a visibly much lighter staining on the deaminated nylon in comparison with the control samples. In both cases, the control samples dyed to a similar depth of shade as observed previously (Tables 4 and 6). Although coloured compounds were formed in the liquor by laccase catalysis of both aromatic compounds, neither coloured product was able to react with the fibre in the absence of amino end groups, suggesting that primary amine groups were directly involved in the fixation of the polymerised colorants generated through laccase catalysis. In contrast, enzymatic dyeing with compound 2,7DHN effectively coloured the deaminated nylon a visibly brighter blue in comparison with the control sample (Table 14). This suggests that the coloured compounds generated through laccase catalysis of compound 2,7DHN do not require the presence of amino end groups for coloration to take place. However, 2,7DHN has a conjugated structure of two benzene rings, which should give rise to stronger intermolecular forces (Van der Waals attraction) between the nylon fibres and the coloured compounds catalysed by laccase, especially in the polymerised structure of the colorants.

Design potential
Wool and nylon fibres can be combined and assembled into fabrics for both aesthetic effects and enhancement of functional properties; the inclusion of nylon fibres in blends with wool is particularly popular as it helps improve performance properties such as tensile strength and abrasion resistance. Wool and nylon blended fabrics are usually dyed with a single class of dye, either levelling or supermilling acid dyes, or metal-complex dyes. Solid effects (where each fibre component is dyed the same hue and depth) and shadow effects (where each fibre is dyed to a different depth of the same hue) are generally more easily obtained with careful selection of a suitable dye and the use of blocking or levelling agents in a one-step dyeing method.
However, because wool and nylon have similar chemical properties and dyeing characteristics, reserve effects (where one fibre component is reserved, remaining undyed) and colour contrast effects (where each fibre component is dyed to a different hue) are difficult to achieve using a single dye and/or a one-step dyeing method, as it is difficult to suppress the dyeability of either fibre in order to reserve it to white or dye it another colour. An alternative approach would be to dye each fibre in a separate dyebath using optimal application conditions in each case to achieve reserve or contrast effects. However, separate processing is often costly and has technical limitations, and therefore wherever possible blends are dyed using a one-step process [33]. The laccase-catalysed coloration process developed could offer a simple and convenient alternative to conventional dyeing processes for the production of either shadow or colour contrast effects on wool- and nylon-blended fabrics. This could be further explored with the use of specially constructed fabrics to produce decorative surface patterning. To demonstrate the coloration and design potential of laccase catalysis of the selected aromatic compounds, fabrics were specially constructed using a combination of undyed wool and nylon yarns. Undyed cotton, polyester, and polyethylene yarns were also combined with undyed wool and nylon fibres to illustrate reserve effects. Basic plain, twill, satin, and sateen structures were produced within simple jacquard weaves to generate a selection of woven fabric designs. Fabrics were then dyed by laccase using the one-step coloration process. Design trials confirmed a wide range of contrast and shadow coloured effects could be achieved on wool/nylon constructed jacquards, and these were further enhanced with the exploration of different processing parameters. The exploration of pH values enabled various two-colourway designs to be generated with compounds 1,4DHB (1% omf; Figure 3) and 2,5DABS (4% omf; Figure 4). The latter produced a more diverse colour range. The exploration of different % omf with compound 2,7DHN resulted in a similar array of possibilities: lower concentrations of compound enabled contrasting effects to be achieved, with treatment by laccase dyeing the wool a green shade and the nylon blue. The use of higher concentrations of compound, 2% and 4% omf, resulted in shadow coloured effects, as both fibre types dyed blue after treatment with laccase, nylon dyeing lighter than wool in both cases (Figure 5). Figure 6 demonstrates the possibilities of generating reserve coloured effects through the incorporation of polyester or polyethylene into the blended fabrics; after laccase treatment the synthetic component remained undyed while the wool component was dyed. Variations in jacquard weaves also offer the possibility of new colour tones, with an exploration of different weave structures revealing greater proportions of undyed yarns on show to create the impression of a lighter colour.

Conclusions
The objective of this research was to develop a laccase-catalysed in-situ dyeing process for wool and nylon 6,6 fibres as an alternative to conventional dyeing methods. Emphasis was placed on producing a diverse gamut of colours, which was achieved through the exploration of three different aromatic compounds, a phenol, a benzene and a naphthol derivative, as well as a methodological survey of reaction processing parameters.
The use of varied buffer systems, pH values and aromatic compound concentrations proved the most beneficial for increasing the range of possible colours. Previously unreported colours such as pinks, greens and blues were achieved. Colour fastness of the enzymatically dyed wool and nylon fabrics was evaluated, resulting in reasonably good colour fastness to washing but poor fastness to light. Although the fixation mechanism is not fully understood, the presence of amino end groups was required for coloration of nylon by enzymatic dyeing with 2,5DABS and 1,4DHB, but was not required for coloration of nylon by enzymatic dyeing with 2,7DHN. The key advantages over conventional dyeing methods include the elimination of premanufactured dyes and chemical auxiliaries, and dyeing at ambient temperatures, thereby reducing the complexity of the dyeing process and downstream processing and leading to possible economic and environmental advantages. In addition, the enzymatic dyeing process offers opportunities for multiple colours and shadings to be achieved through simple alterations in processing conditions, which is currently not possible with conventional dyes and methods. The results also demonstrate the ability of laccase to serve as a novel and creative tool, permitting effective surface patterning through controlled application of shadow and contrast coloured effects. The opportunities discussed could provide the textile industry with realistic and viable options for enzyme-based surface patterning, with the potential of moving towards sustainable development.
Design of Portable Self-Oscillating VCSEL-Pumped Cesium Atomic Magnetometer: With the demand for fast response in magnetic field measurement and the development of laser diode technology, self-oscillating laser-pumped atomic magnetometers have become a new development trend. In this work, we designed a portable self-oscillating VCSEL-pumped Cs atomic magnetometer, including the probe (optical path) and circuits. The signal amplification and feedback loop of the magnetometer, the VCSEL laser control unit, and the atomic cell temperature control unit are realized.

Introduction
In recent years, research on laser detection for alkali metal atomic magnetometers has become a frontier topic in quantum precision measurement and quantum sensing technology. An atomic magnetometer is a high-precision and high-sensitivity magnetic field measurement instrument based on optical pumping and electron paramagnetic resonance technology [1,2]. In the research to date on the mechanism of magnetometers, a two-beam structure of pumping light and probe light has usually been used, which prevents the volume of the magnetometer from being very small. In order to miniaturize the magnetometer, the pump-probe structure is the best choice for a portable magnetometer. According to the radio frequency excitation method and the detection component that generates the magnetic resonance, optically pumped atomic magnetometers (OPMs) can be divided into the tracking type and the self-oscillating type [3]. Self-oscillating magnetometers can quickly respond to magnetic field changes, and have broad application prospects in areas such as marine anti-submarine activities and aviation magnetic surveys [4-7]. Compared with spectral lamps, semiconductor lasers have the advantages of narrower linewidth, easier miniaturization, and lower power consumption [8,9]. Vertical-Cavity Surface-Emitting Lasers (VCSELs) have been applied in atomic gyroscopes and atomic clocks [10,11]. Due to the high sensitivity of atomic magnetometers, the fast response rate of self-oscillating magnetometers, and the advantages of semiconductor lasers, a portable self-oscillating VCSEL-pumped atomic magnetometer is suitable for applications requiring high sensitivity, fast response, small volume, and low power consumption. However, research on applying miniaturized lasers to magnetometers remains at the exploratory stage [12].
Frequency stability is an important parameter of a laser. Because the linewidth of the laser is narrow, its frequency stability directly affects the sensitivity of the OPM. Because frequency fluctuations and intensity noise can affect the sensor output, the laser needs to be stabilized [13]. This is usually achieved by controlling the temperature and current of the laser. Frequency stabilization can be divided into passive and active frequency stabilization [14]. Passive frequency stabilization can only narrow the linewidth to a limited extent, and active frequency stabilization is needed to achieve high precision. After choosing a stable reference, it must be possible to adjust the frequency automatically through the control system if the laser frequency deviates from the reference frequency. The reference frequency should have high stability and repeatability, and its linewidth should be narrow [15,16]. There are many approaches to active frequency stabilization. The usual method is to take the centre frequency of the transition line of an atom or molecule as the reference standard, including the Lamb dip, Zeeman-effect atomic absorption, and the absorption or saturated absorption of atoms or molecules. Among these, the advantages of the atomic absorption method are high frequency stability and high repeatability [17].

The density of atoms in a cell increases with temperature [18]. When the vapor temperature changes, the self-oscillating frequency of the magnetometer shifts [19]. Therefore, it is necessary to increase the atomic density by heating the cell while introducing as little magnetic field noise as possible. There are four kinds of heating methods: using an AC signal [20], using intermittent DC [21], using hot gas flow [22], or using a laser [23]. The AC heating method passes an alternating current through a heating wire to generate heat; its drawback is the introduction of magnetic noise. The intermittent DC heating method heats the atomic cell and measures the temperature intermittently. While this makes control easy and avoids magnetic noise, it readily produces a temperature gradient. The hot air flow method heats the atomic cell with heated air. Although it causes no magnetic field interference, its temperature stability is not high, its structure is complex, and it can introduce vibration noise or light refraction. Non-magnetic heating can also be realized with a laser. However, due to the different energy levels of atoms, certain energy levels may be complex and difficult to describe, and the heating light may therefore cause unwanted energy level transitions.

In this work, a portable self-oscillating VCSEL-pumped Cesium (Cs) atomic magnetometer is designed, including the probe (optical path) and circuits. The signal amplification and feedback loop of the magnetometer, VCSEL laser control unit, and atomic cell temperature control unit are realized. Finally, we test the performance of the magnetometer at a metering station.
The System Structure
Figure 1 presents the structure of the portable self-oscillating VCSEL-pumped Cs atomic magnetometer. The part shown by the dotted lines is the probe, including the Collimating Lens (CL), Polarizing Beam Splitter (PBS), half-wave (λ/2) plate, quarter-wave (λ/4) plate, Thermistor (THE), Photodiode (PD), and RF coil. The circuit can be divided into three parts. Part A is the amplification and feedback loop, built around the operational amplifiers U1-U6 with feedback resistor R_f and capacitor C_f; R_RF1 and R_RF2 are in series across the RF coil, activated through the analog switch chip U7. Part B is the VCSEL controller. Part C is the temperature controller of the atomic cell, including the Voltage Amplifier (VA), Power Amplifier (PA), Analog-to-Digital Converter (ADC), Digital-to-Analog Converter (DAC), Proportional-Integral-Derivative (PID) controller, Direct Digital Synthesis (DDS) signal generator, and Microcontroller Unit (MCU).

In the probe, the laser emitted from the VCSEL is linearly polarized light. It is collimated by a lens into parallel light, then passes through a λ/2 plate and a circular polarizer composed of a PBS and a λ/4 plate. Adjusting the λ/2 plate changes the light intensity; since the pump-probe structure is used to implement the magnetometer, the light can be neither too weak nor too strong. Then, by adjusting the λ/4 plate, the polarization of the parallel light becomes circular. This light passes through the atomic cell and is picked up by a photodiode, and the signal is fed back to the RF coil through the circuit to form a closed loop. Coated cells or buffer gas cells can increase the spin polarization lifetime of the atoms, which is an effective way of improving the signal-to-noise ratio of the optical signal. However, a coated cell may be damaged at high temperatures, which is not suitable for long-term use of the instrument. In addition, as the buffer gas cell needs to work at a high temperature, the cell is wrapped with a twisted pair of heating wires to maintain a warm temperature. Finally, the RF coil is wound around the outside of the probe. The atomic cell is located at the geometric centre of the probe structure.

3. The Self-Oscillating Signal Circuit
3.1. The Preamplifier (90-Degree Phase Shifter)
The amplification and feedback loop is a crucial part of the circuit, as shown in Figure 1, Part A.
In 1946, Bloch showed that there is a 90-degree phase shift between the RF signal (H_X) and the optical signal (M_X) of a self-oscillating OPM. Although there are many schemes for implementing a phase shifter [24], the passive resistance-capacitance (RC) phase shifter remains the most suitable scheme for integration in instruments. The preamplifier is designed to amplify and phase-shift the M_X signal. Figure 2 (left) is the equivalent noise model of the preamplifier; I_Noise is the sum of various noise contributions, comprising the photocurrent shot noise, the dark current shot noise, the thermal noise, and the equivalent current noise of the operational amplifier; E_V_OP is the equivalent voltage noise of the operational amplifier; R_i is the equivalent resistance of the photodiode's parallel resistance and the amplifier's input resistance in parallel; and C_i is the equivalent capacitance of the photodiode junction capacitance, amplifier input capacitance, and wire distribution capacitance in parallel. A small non-magnetic Silicon (Si) PIN photodiode is used. Its rise time is 25 ns and its junction capacitance is 10 pF. A preamplifier circuit composed of an ultra-low-noise high-speed operational amplifier chip (U1) converts the current signal into a voltage signal, with the photodiode working in a reverse-biased state. The feedback resistor R_f in the preamplifier circuit is selected according to the actual photocurrent and the required gain. Without a feedback capacitor, the circuit is prone to gain peaking, step-output ringing, and noise-gain spikes. To eliminate these problems, a small capacitor C_f is added in parallel with R_f. The gain-frequency characteristics of the noise gain and signal gain are shown in Figure 2 (right), where log_e f is denoted by log f. C_f and R_f constitute a pole in the frequency response of the amplifier, as shown in Equation (1):

f_p = 1/(2π R_f C_f). (1)

If the open-loop gain (A_OL) curve crosses the noise gain (1/β) curve while the latter is still rising, the circuit may oscillate uncontrollably. Another name for the noise gain is the closed-loop gain, which is always defined as the reciprocal of the feedback factor (β). The signal gain of an operational amplifier circuit is not always the same as its noise gain. To reach a steady state, the A_OL curve needs to intersect the 1/β curve where the latter has flattened, which gives the condition in Equation (2):

f_p ≤ f_x = √(f_GBW/(2π R_f (C_i + C_f))). (2)

The unity-gain bandwidth (f_GBW in Equation (2)) of the preamplifier is 75 MHz. The junction capacitance of the photodiode is 10 pF, the input capacitance of the amplifier is 5 pF, and the distributed capacitance of the wiring is generally much less than 0.5 pF; thus, the value of C_i in Equation (2) is about 15.5 pF. From Equations (1) and (2), we obtain Equation (3):

C_f ≥ √((C_i + C_f)/(2π R_f f_GBW)) ≈ √(C_i/(2π R_f f_GBW)). (3)

When R_f is equal to 200 kΩ, it can be seen from Equation (3) that the value of C_f only needs to be greater than 0.42 pF to satisfy the condition of circuit stability. However, a problem that cannot be ignored is that the phase shift is then no more than about 8 degrees within the operating frequency range (from 70 kHz to 350 kHz, for a magnetic field range from 2 × 10^4 nT to 10 × 10^4 nT) when R_f is 200 kΩ and C_f is only slightly above 0.42 pF. This causes an issue for the design of the phase-shift circuit: if an additional phase-shift circuit is connected, the phase shift within the operating frequency range needs to be compensated to 90 degrees.
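The stability bound can be checked numerically from the values given above. A minimal sketch, assuming the simplified form of the reconstructed Equation (3) for C_f << C_i:

```python
import math

C_i = 15.5e-12   # total input capacitance, F (photodiode + op-amp + wiring)
R_f = 200e3      # feedback resistor, ohm
f_gbw = 75e6     # unity-gain bandwidth of the op-amp, Hz

# Simplified stability bound for C_f << C_i (assumed reconstruction of Eq. (3)):
C_f_min = math.sqrt(C_i / (2 * math.pi * R_f * f_gbw))
print(f"minimum C_f ~ {C_f_min * 1e12:.2f} pF")  # ~0.41 pF, consistent with the 0.42 pF quoted
```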
When the selected value of C_f is large enough (hundreds of picofarads), the phase shift of the preamplifier circuit is close to 90 degrees. Ideally, the larger the value of the capacitor, the closer the phase shift is to 90 degrees. The influence of R_f, C_f, and the frequency f on the phase shift φ of the preamplifier circuit is shown in Equation (4):

φ = arctan(2π f R_f C_f). (4)

In this work, the value of C_f is selected to be 180 pF and the value of R_f is selected to be 200 kΩ. Under this set of parameters, the maximum difference of the phase shift in the frequency range of 70 kHz to 350 kHz is not more than 3 degrees, as shown in Figure 3. Because the circuit is in the form of an inverting amplifier, the output is 90 degrees ahead of the signal observed on the photodetector.

The Automatic Gain Control Circuit and Others
The disadvantage of the RC network is that the output signal amplitude varies greatly with frequency; we therefore designed an automatic gain control (AGC) circuit. Chip U2 is a voltage-controlled gain amplifier. A passive RC high-pass circuit is cascaded in the following stage; the resistance of the high-pass circuit is 10 kΩ, and the capacitance is selected in the range of 1 nF to 100 nF. Although the effect is small, the phase of the preamplifier circuit can thereby be compensated accordingly; if the self-oscillating signal does not appear cleanly in the middle of the operating frequency range, the capacitor value should be adjusted in response. Due to the limitation of U2's f_GBW, U3 has to further compensate and amplify the output of U2. The low-noise performance of the circuit depends primarily on the selection of low-noise operational amplifiers and the fully differential form (differential input and output). The RF coil is connected through a multi-channel 2-to-1 analog switch, U7; as such, the sign (lead or lag) of the 90-degree phase shift of the signal is not a concern in practice. When a polarity reversal of the magnetic field is detected, the position of the analog switch is changed by the controller. U4 is used to convert the differential signal into a single-ended signal, while U6 rectifies and filters the single-ended signal and obtains its average value as feedback, which is used to control the AGC amplifier (U2) after being adjusted by a resistor divider. At the same time, the single-ended signal is used as the output of the magnetometer after simple band-pass filtering (U5) and supplied to the frequency meter.
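The phase behaviour can be checked the same way. A sketch assuming the reconstructed Equation (4), φ = arctan(2π f R_f C_f):

```python
import math

R_f, C_f = 200e3, 180e-12  # feedback network values chosen in the text

def phase_deg(f_hz):
    # Phase contribution of the R_f/C_f feedback network; it tends to
    # 90 degrees when 2*pi*f*R_f*C_f >> 1.
    return math.degrees(math.atan(2 * math.pi * f_hz * R_f * C_f))

lo, hi = phase_deg(70e3), phase_deg(350e3)
print(f"phase at 70 kHz: {lo:.1f} deg; at 350 kHz: {hi:.1f} deg")
print(f"spread over the operating band: {hi - lo:.1f} deg")  # < 3 deg
```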
VCSEL Controller
In the laboratory, a saturated absorption optical path is usually used to stabilize the frequency of a commercial laser [25]. In portable instruments, VCSELs are used as a substitute for commercial lasers. The heated buffer gas cell has a broadened linewidth, which is unsuitable for frequency stabilization. Therefore, an additional vacuum atomic cell is added to the magnetometer probe without increasing its volume; this is a small cylindrical cell with a diameter of 20 mm and a thickness of 10 mm. At the same time, a photodiode is placed close to it, as shown in Figure 4. Although the saturated absorption method exhibits better performance than the absorption method, its optical path would need to be integrated into the probe, and the more complex the optical path, the more noise is caused by vibration, which is unfavourable for instrumentation. In cases where the requirements on laser frequency stability are met, the optical path of the absorption method is simple and easy to build. The remainder consists of the frequency stabilization circuit. The VCSEL manual shows that the wavelength is positively correlated with the current and the temperature of the laser (0.5 nm/mA, 0.06 nm/°C). Therefore, the control of the laser wavelength can be transformed into accurate control of the current and temperature. The VCSEL frequency stabilization circuit in this work is shown in Figure 1, Part B.

Temperature Control Circuits of the VCSEL
A Thermo-Electric Cooler (TEC) temperature control scheme is often implemented with the chip MAX1978 [26]. We designed the circuit with reference to the typical application circuit, as shown in Figure 5. When laying out the related traces on the PCB, they should be short and thick to reduce the transmission voltage drop. At the same time, the contact voltage drop of the connector and the wiring voltage drop of the circuit board should be minimized, and the analog ground, digital ground, and power routed separately to reduce crosstalk. For the PID circuit, the circuit output and heater are disconnected, a unit step signal is input to the heater, and the response of the thermistor (the input of the PID circuit) is recorded in order to determine the circuit parameters. To evaluate the temperature stability of the VCSEL, data were recorded for five minutes (once per second) with an eight-and-a-half-digit Keysight 3458A multimeter. These data included R_1k8 (the value of a 1.8 kΩ low-temperature-drift resistor with an accuracy of 0.01%), V_ref (the 1.5 V voltage reference obtained from the external voltage reference chip REF5025 (noise: 3 µVpp/V, temperature drift: 3 ppm/°C) through a precision low-temperature-drift resistive divider), and V_the (the voltage applied to the thermistor of the VCSEL after the 1.8 kΩ resistor divider). The data are shown in Figure 6. First, we used a random-walk-plus-white-noise model with a trend term to fit the above three data items and obtain their respective eigenvalues. Then, we obtained the relationships between R_the and R_1k8 and between V_ref and V_the according to Ohm's law. Monte Carlo simulations of the thermistor resistance (R_the) were performed, plotting the trend of the thermistor resistance over 10^5 runs, as shown in Figure 7; the statistical chart helps to evaluate the effect of the temperature control. The average temperature fluctuation range of the laser does not exceed 0.002 °C, corresponding to a VCSEL frequency fluctuation range of less than 60 MHz. This indicates that the best TEC temperature control index of the MAX1978 (±0.001 °C, as given in the manual) is reached.
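Given the tuning coefficient quoted from the VCSEL manual (0.06 nm/°C), the 0.002 °C fluctuation converts into an optical-frequency fluctuation as follows; the sketch assumes operation near the Cs D1 wavelength of 894.6 nm:

```python
c = 2.998e8        # speed of light, m/s
lam = 894.6e-9     # Cs D1 wavelength, m
dlam_dT = 0.06e-9  # wavelength tuning vs temperature, m/K (from the VCSEL manual)
dT = 0.002         # measured temperature fluctuation, K

df = c * (dlam_dT * dT) / lam ** 2  # |df| = c * dlambda / lambda^2
print(f"frequency fluctuation ~ {df / 1e6:.0f} MHz")  # ~45 MHz, within the <60 MHz bound
```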
Current Control Circuits of the VCSEL
Because controlling the temperature of the VCSEL to 0.002 °C is already at the limit, it is difficult to achieve higher accuracy by that route, whereas a current accuracy below 1 µA is easier to achieve. Therefore, after the range of the laser frequency is roughly set with a fixed temperature, a scanning current is used to scan the frequency. The absorption peak is obtained on the photodetector behind the atomic cell by scanning the laser frequency, and the peak position is then locked. Because both a rated current and a scanning current are required, the current control part is composed of a constant current source and a micro-current source, as shown in Figure 8. The constant current source is a variant of the Howland current source. It is capable of both sourcing and sinking a current proportional to an input voltage. Because positive feedback and negative feedback are introduced at the same time, it can ensure that the current through the load remains unchanged when the load changes [27]. The range of the current scan can be adjusted via the scanning voltage of the micro-current source or the ratio of the emitter resistances of the two transistors. Thus, after the output of the constant current source is connected in parallel with the output of the micro-current source, scanning around the rated current is achieved.

To evaluate the accuracy of the constant current source, the current output was first connected in series with a 1.8 kΩ resistance as the load; the data were then recorded for five minutes (100 records per second) with an eight-and-a-half-digit Keysight 3458A multimeter and compared with the constant current output (about 3 mA) of a Keysight B2902A. The noise power spectrum of the measurement results is shown in Figure 9. The results show that when the constant current source outputs current, the accuracy is 0.72 µA (the standard deviation) and the noise is less than 0.2 µA/Hz^1/2 from 0.1 Hz to 20 Hz. The noise magnitude is of the same order as that of the Keysight B2902A, and was roughly half of it in this test. This corresponds to a frequency fluctuation range of less than 40 MHz for the VCSEL.
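The same conversion applies to the current channel (0.5 nm/mA). A sketch using the reported noise level of about 0.2 µA as the assumed current fluctuation:

```python
c = 2.998e8              # speed of light, m/s
lam = 894.6e-9           # Cs D1 wavelength, m
dlam_dI = 0.5e-9 / 1e-3  # wavelength tuning vs current, m/A (0.5 nm/mA)
dI = 0.2e-6              # assumed current fluctuation, A (noise level reported above)

df = c * (dlam_dI * dI) / lam ** 2
print(f"frequency fluctuation ~ {df / 1e6:.0f} MHz")  # ~37 MHz, consistent with <40 MHz
```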
Circuits of the Lock-in Amplifier
To stabilize the frequency of the VCSEL, it is necessary to modulate the current at a low frequency and then detect the laser frequency error through the output of the lock-in amplifier. The system uses an absorption peak to stabilize the laser frequency. A modulated current signal with weak amplitude and a frequency far below the magnetic resonance frequency (<1 µApp, 1 kHz) is added to the laser current. An AD9833 DDS chip is programmed to generate a 1 kHz sine wave, which is AC-coupled to the output of the current source after passing through a resistive attenuation network. In this way, the laser is modulated. During photoelectric conversion, the conversion from photodiode current to voltage (I-V) is realized with an LT1028 chip (low-noise amplifier), and the voltage signal is then AC-amplified and sent to an AD630 chip (lock-in amplifier), as shown in Figure 10. At the same time, the 1 kHz signal generated by the DDS is connected to the reference input of the AD630 chip. These two signals are represented as Equations (5) and (6), where x(t) is the component of the I-V conversion output at the same frequency as the reference signal (the noise introduced by the current source, the frequency signal of the optical magnetic resonance, and other noise in the optical path having been removed) and r(t) is the 1 kHz reference signal:

x(t) = V_s cos(ω_0 t + θ), (5)
r(t) = V_r cos(ω_0 t). (6)

When x(t) and r(t) are multiplied, the result is u_p(t), as shown in Equation (7). In the spectrum, the frequency ω_0 is moved to ω = 0 and ω = 2ω_0:

u_p(t) = x(t) r(t) = (V_s V_r/2)[cos θ + cos(2ω_0 t + θ)]. (7)

Because the frequency of the reference signal is low (1 kHz), the phase shift θ in the x(t) signal is close to 0, and the amplitude V_r of the reference signal r(t) is fixed, the DC component of u_p(t) reflects the amplitude V_s of x(t). After passing through the next stage of the phase-sensitive detection, i.e., a low-pass filter, the AC component is filtered out, leaving only the DC component, which reflects the slope of the absorption peak and is used as the error input of the PID controller so as to drive the lock-in amplifier's output to zero.

Fast Frequency Stabilization Algorithm
The waiting time from startup to normal operation of a laser-pumped magnetometer depends mainly on the wavelength scanning and locking process of the laser, whereas a discharge lamp can provide stable light within a few seconds. Therefore, the time taken by the laser wavelength scanning and locking process should be shortened. A fast laser frequency stabilization algorithm for atomic magnetometers is proposed, which can be roughly divided into the two sub-processes of scanning and locking. The proposed algorithm was tested to ensure that, by controlling the process of scanning the temperature and current, the laser wavelength can be correctly set and locked within 30 s after the first start and locked again within 30 s after the system is reset.
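The phase-sensitive detection of Equations (5)-(7) can be prototyped in a few lines: multiply the photodiode signal by the 1 kHz reference and low-pass the product, leaving a DC term proportional to V_s cos θ. A minimal simulation; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 100_000, 1_000              # sample rate and modulation frequency, Hz
t = np.arange(0, 0.5, 1 / fs)

V_s, theta = 0.3, 0.05               # illustrative amplitude and phase of x(t)
x = V_s * np.cos(2 * np.pi * f0 * t + theta) + 0.1 * rng.standard_normal(t.size)
r = np.cos(2 * np.pi * f0 * t)       # DDS reference r(t), with V_r = 1

u_p = x * r                          # mixing moves f0 to DC and to 2*f0
dc = u_p.mean()                      # crude low-pass: average over many cycles
print(f"recovered V_s*V_r*cos(theta)/2 ~ {dc:.3f} "
      f"(expected {V_s * np.cos(theta) / 2:.3f})")
```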
The sampled values under the scanning current are obtained by ADC sampling in the MCU. In analyzing these data, the extreme-value point is the position of the absorption peak; accordingly, the current is set to the value at the absorption peak position. A sliding-window judgment method is used. In traversing this group of sampled values, each point i from 0 to the array length N is regarded as the centre point of the window, and whether i is an extreme point is judged from the data in the current window [i − w, i + w]. The window width w is set according to the linewidth of the absorption peak and is approximately equal to the DAC quantization value of the half-width of the peak. By setting the height h of the window to ensure that there is a sufficient difference between the extreme point and the sampled points at the window boundary, the interference of noise in the sampled data can be eliminated and all qualifying extreme points can be obtained accurately. The window height h can be set according to the amplitude of the sampling noise, roughly several times the quantized value of the noise peak. The schematic diagram of the extreme-point search with the scanning window, together with the actual process, is shown in Figure 11. The sampled value of the absorption peak is used to judge whether the wavelength is locked to a position where magnetic resonance can occur; if the position of the absorption peak stays within the tolerance value over three scans, it is considered that the required wavelength position has been found. The wavelength scanning process can be completed in about 30 s.

Evaluation of Frequency Fluctuation and Long-Term Frequency Drift
In this section, we evaluate the frequency fluctuation according to the fluctuation amplitude ∆U of the lock-in amplifier output after the scanning process. The method is shown in Figure 12. In the scanning process, the time interval between the zero-crossing positions of the lock-in output corresponding to the two absorption peaks (Cs D1 line: 4→3 and 4→4) is recorded as T, and the time interval corresponding to the fluctuation amplitude ∆U of the lock-in output when passing the target absorption peak (Cs D1 line: 4→3) is recorded as t. The distance between the two absorption peaks of the Cs atom is a constant of about 1167 MHz. The output of the stabilized lock-in amplifier is sampled for one hour to obtain the fluctuation amplitude ∆U. For Figure 12, ∆U = 2.7 V, t = 0.16 s, and T = 4.38 s. According to Equation (8), ∆λ = (t/T) × 1167 MHz, the corresponding fluctuation range ∆λ is 43 MHz. The wavelength was measured at PD1 with a 671B wavelength meter. Because of the limited accuracy of the wavelength meter, it was not used to evaluate the short-term frequency stability; instead, the wavelength was recorded over a long time. We monitored the stabilized VCSEL wavelength for 24 h and plotted the data, as shown in Figure 13. Compared with the uncontrolled (free drift) VCSEL frequency, the stabilization has a clear effect. During the whole measurement time, the measured data remain within the minimum resolution (0.0008 nm, corresponding to a 300 MHz linewidth) of the wavelength meter. It can be inferred that the frequency drift rate is less than 12.5 MHz/hour.

5. Temperature Controller of the Atomic Cell
5.1. The Heater Structure and the Heating Scheme
An AC heating scheme was adopted after comparing the various schemes. A structure that keeps the heating wire away from the atomic cell is proposed to eliminate the interfering magnetic field, as shown in Figure 14. The atomic cell is placed in the groove of a non-magnetic alloy heat-conducting interlayer.
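The sliding-window extreme-point search described in the scanning procedure above is easy to prototype. A sketch assuming the scan samples are held in a list, with w set from the half-width of the absorption line and h from the noise amplitude; the function name and usage are hypothetical:

```python
def find_extreme_points(samples, w, h):
    """Return indices i that are maxima of the window [i - w, i + w] and
    stand at least h above both window boundaries (suppresses noise).
    Invert the signal to locate absorption dips instead of peaks."""
    peaks = []
    for i in range(w, len(samples) - w):
        window = samples[i - w:i + w + 1]
        if (samples[i] == max(window)
                and samples[i] - samples[i - w] >= h
                and samples[i] - samples[i + w] >= h):
            peaks.append(i)
    return peaks

# Hypothetical usage on an ADC scan of the absorption profile:
# idx = find_extreme_points(adc_counts, w=half_linewidth_steps, h=3 * noise_peak)
```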
The interlayer is placed in an insulation shell made of Teflon. The shell is fitted with anti-reflective lenses to keep the vapor cell in a closed space, slow the heat loss, and keep the interior warm. The heat-conducting interlayer increases the distance between the heating wire and the atomic cell while conducting the heat, ensuring that there is almost no magnetic field at the atomic cell; the remanence decreases with the third power of distance. The symmetrical heating wires counteract each other's magnetic fields of equal amplitude and opposite direction. As the AC frequency is far from the Larmor frequency, the noise can be filtered out. The temperature control system is designed as shown in Figure 1, Part C. A DDS chip is used to generate an AC signal with adjustable frequency. A multiplier multiplies the AC signal with the DAC output of the MCU to adjust the amplitude of the signal. The signal is amplified by a voltage amplifier and a power amplifier. The temperature measurement is realized by a high-precision ADC.

Evaluation of Temperature Stability
In order to explore the performance of the temperature control system, the following experiments were conducted at 25 °C. A curve is drawn from the measurement of the thermistor; as shown in Figure 15 (left), the output was stable to within 0.1 °C over 10 min, which can be regarded as the real temperature of the non-magnetic alloy interlayer. In these experiments, the temperature of the atomic vapor cell is conducted through the non-magnetic alloy interlayer, and the vapor temperature is further evaluated with the output of the photodetector. A commercial laser is used to emit a beam with constant power and a stable wavelength of 894.6 nm. After attenuation to a certain intensity (60 µW), the beam is injected into a Cs atomic cell placed in a magnetic shielding bucket. The photoelectric detector observes the received transmitted light intensity, and its output is measured by a multimeter (RIGOL DM3058E, 5-1/2 digits). The laser intensity incident on the atomic vapor cell is kept constant. Because the atomic density in the vapor cell increases with temperature, as the temperature increases the atoms absorb more light and the light intensity received by the photodetector is lower. Figure 15 shows the following: (1) when the temperature is controlled at 25 °C (t1), the output voltage of the photodetector remains stable at about V1 and fluctuates within ∆V1; (2) the heating system starts to work, and the measured temperature of the vapor cell rises to the set value (t2), after which, in about 10 minutes, the temperature measurement output of the system becomes stable with a fluctuation range of less than 0.1 °C; (3) after about 40 min, the output voltage of the photodetector is stable at V2 and fluctuates within ∆V2, and the heat transfer between the non-magnetic alloy and the atomic vapor is almost in equilibrium. The temperature fluctuation range of the vapor, ∆t, can be evaluated by Equation (9) (see Table 1). The testing results indicate that the fluctuation ranges of the vapor cell temperature are less than 0.018 °C, 0.015 °C, and 0.006 °C at 40 °C, 50 °C, and 60 °C, respectively.

Test and Evaluation
The integrated portable self-oscillating VCSEL-pumped cesium atomic magnetometer is shown in Figure 16. The optical path is integrated into a cylindrical probe with a diameter of 70 mm and a length of 180 mm. The circuit is integrated into a cylindrical metal barrel with a diameter of 62 mm and a length of 350 mm.
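For the temperature evaluation above, Equation (9) is not reproduced in the text; one plausible reading, consistent with the two equilibrium points described, is a linearisation in which the photodetector voltage fluctuation ∆V2 is scaled by the slope between (V1, t1) and (V2, t2). A sketch under that assumption, with invented voltages:

```python
# Assumed linearisation between the equilibrium points (V1, t1) and (V2, t2);
# the exact form of Equation (9) is not reproduced in the text.
def vapor_temp_fluctuation(V1, t1, V2, t2, dV2):
    return abs(dV2 * (t2 - t1) / (V1 - V2))

# Illustrative numbers only (not measurements from the paper):
dt = vapor_temp_fluctuation(V1=3.2, t1=25.0, V2=2.1, t2=40.0, dV2=0.0013)
print(f"estimated vapor temperature fluctuation ~ {dt:.3f} C")
```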
We tested the magnetometer at the First-Class Weak Magnetic Metering Station of NDM (Magnetism Testing and Calibration Laboratory Station of Yichang Testing R&D Institute). The report shows that it can work within a magnetic field range from 2 × 10^4 nT to 10 × 10^4 nT, as shown in Table 2. The fitting relationship between the standard magnetic field and the instrument indication is B_Normal = B × 1.0155 − 716.25, as shown in Figure 17; the response in the measurement range is therefore considered to be linear. The noise power was tested in the laboratory, as shown in Figure 18 (left). We generated a measured magnetic field using a 3D-printed Helmholtz coil at an angle of 45 degrees to the magnetometer probe, with a Keysight B2902A as the current source. The output of the magnetometer was acquired with a Keysight 53230A frequency meter. We found that worse results are obtained if a current source with lower accuracy than the Keysight B2902A is used to generate the measured magnetic field in the laboratory; it is therefore necessary to evaluate the performance of the magnetometer in an accurate magnetic field [28]. The noise power spectrum was measured at the metering station as well, as shown in Figure 18 (right). The root mean square (rms) noise of the magnetometer is 3 pT/Hz^1/2 as tested at the metering station. The performance of the VCSEL-pumped magnetometer designed in this work was compared with that of the lamp-pumped CS-3 self-oscillating atomic magnetometer [29]. They have the same operating range, and their sensitivities are of the same order of magnitude (CS-3: 0.6 pT/Hz^1/2 rms). The design of this work refers to the appearance of the CS-3, and the two are basically the same in size; however, this design does not require such a large volume and could be reduced even further. Due to the low power consumption of the VCSEL, the power consumption of the designed magnetometer is only about 9 W, which is less than the 12 W of the CS-3 (24 V, 0.5 A at 20 °C).

Conclusions
A portable self-oscillating VCSEL-pumped Cs atomic magnetometer is designed in this work. Its circuits mainly include the amplification and feedback loop, the VCSEL control circuit, and the temperature controller of the atomic cell. In this work, the maximum phase shift difference of the circuit within the frequency range of 70 kHz to 350 kHz is not more than 3 degrees, the frequency of the VCSEL is stabilized to within 43 MHz, and the temperature of the atomic cell is stabilized to within 0.02 °C. The magnetometer can work in the magnetic field range from 2 × 10^4 nT to 10 × 10^4 nT, and the noise is 3 pT/Hz^1/2 rms. This work is

Figure 1. The structure of the portable self-oscillating VCSEL-pumped Cs atomic magnetometer.
Figure 2. The equivalent noise model of the preamplifier (left) and the gain-frequency characteristics of noise gain and signal gain (right).
Figure 3. The relationship between the phase shift and R_f, C_f, and frequency.
Figure 4. A vacuum atomic cell and a photodiode are added to the original structure.
Figure 6. Drift of the value of the low-temperature-drift resistance, drift of the voltage of the external reference REF5025, and drift of the voltage applied to the thermistor of the VCSEL.
Figure 7. The trend and statistical chart of R_the over 10^5 runs of the Monte Carlo simulation.
Figure 8. The current control circuits of the VCSEL.
Figure 9. The noise power spectrum of the current source.
Figure 10. The circuits of the lock-in amplifier.
Figure 11. The schematic diagram of the fast frequency stabilization algorithm (left) and the actual process of frequency stabilization after using the algorithm (right).
Figure 12. Evaluation of the frequency fluctuation according to the fluctuation amplitude of the lock-in amplifier output: ∆λ = (t/T) × 1167 MHz (Equation (8)).
Figure 13. The monitored wavelength data of the VCSEL over 24 h in stabilized and free-drift modes.
Figure 14. The structure of the atomic cell heater.
Figure 15. Measurement of the thermistor (left) and the voltage output by the photodetector (right).
Figure 17. The fitting relationship between the standard magnetic field and the instrument indication.
Figure 18. The noise power spectrum of the magnetometer as tested in the laboratory (left) and at the metering station (right).
Table 1. Data recorded when evaluating the temperature stability of the vapor with the output of the photoelectric detector.
Table 2. The working range of the magnetometer.
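The linear calibration obtained at the metering station can be applied directly to an instrument reading. A small sketch of that correction; only the fit B_Normal = B × 1.0155 − 716.25, with both fields in nT, comes from the text:

```python
def calibrate(B_indicated_nT):
    """Map the magnetometer indication onto the standard field using the
    linear fit obtained at the metering station (values in nT)."""
    return B_indicated_nT * 1.0155 - 716.25

for B in (20_000, 50_000, 100_000):
    print(f"indicated {B} nT -> standard {calibrate(B):,.1f} nT")
```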
Mathematical Modeling of Structure and Dynamics of Concentrated Tornado-like Vortices: A Review: Mathematical modeling is the most important tool for constructing the theory of concentrated tornado-like vortices. A review and analysis of computational and theoretical works devoted to the study of the generation and dynamics of air tornado-like vortices has been conducted. Models with various levels of complexity are considered: a simple analytical model based on the Bernoulli equation, an analytical model based on the vorticity equation, a new class of analytical solutions of the Navier–Stokes equations for a wide class of vortex flows, and thermodynamic models. The approaches developed to date for the numerical simulation of tornado-like vortices are described and analyzed. Considerable attention is paid to approaches that take into account the two-phase nature of tornadoes. The final part is devoted to the analysis of modern ideas about the tornado, concerning its structure and dynamics (up to the breakup) and the conditions for its occurrence (tornadogenesis). Mathematical modeling data are necessary for interpreting the available field measurements while also serving as the basis for planning the physical modeling of tornado-like vortices in the laboratory.

Introduction
One of the most common states of a moving continuous medium is vortex motion. Among the colossal variety of different vortex structures, concentrated vortices stand out [1]. Concentrated vortices are compact spatial regions characterized by high vorticity values, which are surrounded by a flow with significantly lower vorticity (or with zero vorticity in the case of an ideal fluid). Concentrated vortices are widespread in Earth's atmosphere [2-4] and on the Sun [5-7].

The main problems in studying destructive atmospheric vortices (including tornadoes) are as follows: (1) assessment of the probability of the occurrence of a vortex structure at a given point in space at a given time, (2) prediction of the development of an already existing vortex, and (3) investigation of the possibilities of weakening and destroying (decaying) the vortex structure, as well as of changing the path of its propagation [8,9].

The subject of this review is the methods of mathematical modeling of the structure and dynamics of vertically oriented concentrated tornado-like vortices, which are analogs of vortex formations observed in nature ("dust devils", water tornadoes, fire tornadoes, etc.). The main purpose of mathematical modeling is to ascertain the main characteristics of vortices that determine their destructive potential (azimuthal velocity and the corresponding pressure drop, geometric dimensions of the vortex, etc.) at all stages of their life cycle (from generation to decay). In this review, significant emphasis is given to works devoted to the development of analytical and simplified methods of mathematical modeling that allow accurate solutions to be obtained. Despite the rapid development of computational fluid dynamics (CFD) methods, such simplified approaches remain quite effective, since they allow us to isolate the main physical mechanisms and focus on their detailed consideration.
The review is constructed as follows. Section 2 contains a description of analytical models of tornado-like vortices with various levels of complexity. The results of numerical studies of tornado-like vortices are described and analyzed in Section 3. This section highlights the importance of taking into account the multiphase nature (in this particular case, the two-phase nature) of the tornado. Lastly, Section 4 is devoted to a short description of modern ideas about the tornado. In conclusion, some directions for further improvement of the theory of tornado-like vortices are formulated.

Mathematical Modeling: Analytical Simulation
Among the simplest classes of tornado-like vortex models are those based on the Bernoulli equation. Note that even such simple models can accurately describe the basic properties of real tornadoes, namely, the thickening of the funnel with distance from the ground and an increase in azimuthal (tangential) velocity as the funnel approaches the ground.

Simple Analytical Models
The simplest model of a tornado is a vortex flow with a vertical axis of symmetry and a fixed core (or funnel) [26]. Figure 1 shows a diagram of the vortex under consideration. The axis z is the axis of symmetry of the tornado; z = 0 corresponds to the surface of the Earth. The horizontal axis r shows the distance from the axis of symmetry of the tornado. At r → ∞, the boundary between the stationary funnel and the rotating air is z = h.
It is assumed that the density of air in the core is lower than the density of the moving air. The speed of rotation of the air increases toward the surface of the funnel and reaches a maximum at the surface of the Earth.

The Bernoulli equations for the rotating air and for the stationary air (U_φ = 0) of the core (funnel) are written as

p + ρU²_φ/2 + ρgz = A, (1)
p_c + ρ_c gz = A, (2)

where A = p for r = ∞ and z = h, and ρ_c < ρ. The simplest solution is obtained for the case of incompressible air, i.e., ρ = const and ρ_c = const. This assumption is not restrictive because it holds for most real tornadoes. Next, we assume the constancy of the circulation of the moving air along any circle located around the core (funnel), i.e., Γ = 2πrU_φ = const.

We subtract (1) from (2), taking into account the equality of the pressures exerted on the surface of the funnel by the rotating and stationary air. Then, for the velocity of the air on the surface of the funnel, U_φf, we obtain

U²_φf = 2((ρ − ρ_c)/ρ)g(h − z), (3)

where the subscript f denotes the surface of the funnel. In a more concise form, expression (3) reads U²_φf = 2(∆ρ/ρ)g(h − z) = 2g̃(h − z), where g̃ = (∆ρ/ρ)g. From this relation and the condition of constant circulation, it is easy to obtain

r²_f = Γ²/[8π²g̃(h − z)]. (4)

Equation (4) gives the shape of the funnel. For g̃ = const and h = const, a larger value of Γ² corresponds to a wider funnel with a larger area of destruction.

On the surface of the Earth (z = 0), we have U²_φf0 = 2g̃h and r²_0 h = α. Here, U_φf0 and r_0 are the values of the azimuthal air velocity and the radius of the funnel at the Earth's surface. Equation (4) gives the widening of the funnel with increasing distance from the ground, which is in qualitative agreement with most real tornadoes.

In the work [27], a new class of analytical solutions of the Navier–Stokes equations is obtained, which allows for the prediction of the characteristics of complex vortex flows. One of the simplest solutions of the Euler and Navier–Stokes equations is the plane vortex sink (vortex source). In [27], this solution is generalized to the case when an axial flow is superimposed on axisymmetric vortex sinks. A new solution (more precisely, a family of solutions) for a viscous incompressible fluid, taking into account the superimposed shear longitudinal flow, significantly expands the scope of its use, allowing one to build pictures of various vortex flows, including tornadoes.

In [27], a stationary vortex flow in which the velocity does not depend on the axial coordinate z is considered. In this case, the Navier–Stokes equations for the radial velocity projection U_r and the azimuthal velocity projection U_φ become independent of the equation for the axial velocity projection U_z. For axisymmetric distributions of U_r and U_φ (U_z may depend on φ), the continuity equation is reduced to the form d(rU_r)/dr = 0.
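To make the funnel relations concrete, Equations (3) and (4) can be evaluated numerically. A minimal sketch, assuming illustrative values for the relative density deficit ∆ρ/ρ, the height h, and the ground radius r_0 (none of these values are taken from [26]):

```python
import math

g = 9.81
drho_over_rho = 0.05        # illustrative relative density deficit of the core
g_t = drho_over_rho * g     # reduced gravity, denoted g-tilde above
h = 1000.0                  # funnel height, m (illustrative)
r0 = 50.0                   # funnel radius at the ground, m (illustrative)

# Maximum azimuthal speed occurs at the ground: U_phi_f0 = sqrt(2 * g_t * h)
print(f"U_phi at the ground: {math.sqrt(2 * g_t * h):.1f} m/s")

# The funnel widens with height, r_f(z) = r0 * sqrt(h / (h - z)), from Eq. (4)
for z in (0.0, 500.0, 900.0, 990.0):
    print(f"z = {z:6.0f} m  ->  r_f = {r0 * math.sqrt(h / (h - z)):.0f} m")
```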
Hence, we obtain that rU_r = const. For the sink (source), we have U_r = G/(2πr) = Re_r ν/r, where G = 2πν Re_r is the gas flow rate through a cylindrical surface of unit length, r = const.

Transformations of the equation of motion for the azimuthal velocity projection lead it to the following form:

d²U_φ/dξ² − Re_r dU_φ/dξ − (1 + Re_r)U_φ = 0, (5)

where ξ = ln(r/r_0) and r_0 is the length scale. The general solution of (5) has the form U_φ = C_1(r/r_0)^(Re_r+1) + C_2/r, where the first term of the right-hand side represents solid-state rotation (Re_r = 0) and the second term denotes a potential vortex. In [27], consideration is limited to a particular solution (C_1 = 0). In this case, U_φ = K/(2πr), where K = 2πC_2 is the circulation along the circle r = const, z = const. In the further analysis, a dimensionless parameter is used: the vortex Reynolds number Re_φ ≡ rU_φ/ν = K/(2πν). The flow in the plane (r, φ), determined by the corresponding projections of the velocity vector (U_r, U_φ), is the well-known vortex sink (vortex source) at Re_r < 0 (Re_r > 0). The purpose of the analysis conducted in [27] is to obtain a generalized solution for the vortex flow by superimposing on it an axial (longitudinal) flow. Note that an axial flow independent of the coordinate z (∂U_z/∂z = 0) does not affect the continuity equation or the equations of motion for the radial U_r and azimuthal U_φ velocity projections.

The equation of motion for the axial velocity projection U_z is written in [27] as

d²W/dξ² − Re_r dW/dξ = 0, (6)

where W = U_z r_0/ν. The axisymmetric solution of (6) has the form W = W_c + W_r(r/r_0)^Re_r, where W_c and W_r are constants.

To account for a nonzero longitudinal pressure gradient (∂p/∂z = const ≠ 0), a term P exp(2ξ) is added, where P is a dimensionless parameter characterizing the longitudinal pressure gradient, P = (r³_0/ρν²)(∂p/∂z). Thus, the solution for W takes the form W = W_c + W_r(r/r_0)^Re_r + W_p(r/r_0)², where W_p = P/(4 − 2Re_r). The third term on the right side is the contribution of the longitudinal pressure gradient.

For a flow in an unbounded space (0 ≤ r < ∞), W_c is the velocity on the axis at Re_r > 0 or at infinity at Re_r < 0, and W_p = 0. Thus, W_c is a free parameter characterizing the homogeneous part of the axial velocity profile. The other parameters, W_p and W_r, characterize the inhomogeneous shear of the axial velocity due to the longitudinal pressure gradient and radial advection, respectively.

As a result, the velocity field is determined by the following relations:

U_r = Re_r ν/r, U_φ = Re_φ ν/r, U_z = (ν/r_0)[W_c + W_r(r/r_0)^Re_r + W_p(r/r_0)²]. (7)

Expressions (7) are a generalized solution for a vortex sink [27]. These relations satisfy the Navier–Stokes equations and contain five dimensionless parameters: Re_r, Re_φ, W_c, W_p, and W_r. The first two expressions are the well-known solutions for the classical vortex sink. The third expression is the solution for the case of the superposition of an axial flow on a vortex sink.

Further, in [27], by integrating the equation for the streamline using (7), the following expressions are obtained:

z/r_0 = a(r/r_0)² + b(r/r_0)^(Re_r+2) + c(r/r_0)⁴ + z_0/r_0, (8)
φ − φ_0 = S ln(r/r_0), (9)

where a = W_c/(2Re_r), b = W_r/[Re_r(Re_r + 2)], c = W_p/(4Re_r), and the vortex parameter (twist parameter) is S = U_φ/U_r = Re_φ/Re_r. The velocity field corresponds to identical axisymmetric stream surfaces (8), differing only in the shift z_0 along z. The projections of the curved streamlines on the plane z = const are logarithmic spirals, as follows from (9). The plane vortex sink is a special case obtained from (8) and (9) at a = b = c = 0.
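The generalized vortex-sink field (7) is straightforward to evaluate. A minimal sketch with illustrative dimensionless parameters (Re_r < 0 corresponds to a sink; the values below are not from [27]):

```python
nu = 1.5e-5                    # kinematic viscosity of air, m^2/s
r0 = 1.0                       # length scale, m
Re_r, Re_phi = -10.0, 100.0    # radial (sink) and azimuthal Reynolds numbers
W_c, W_r, W_p = 1.0, 0.5, 0.0  # axial-profile parameters of solution (7)

def velocity(r):
    U_r = Re_r * nu / r
    U_phi = Re_phi * nu / r
    U_z = (nu / r0) * (W_c + W_r * (r / r0) ** Re_r + W_p * (r / r0) ** 2)
    return U_r, U_phi, U_z

for r in (1.0, 2.0, 4.0):
    U_r, U_phi, U_z = velocity(r)
    print(f"r = {r}: U_r = {U_r:.2e}, U_phi = {U_phi:.2e}, U_z = {U_z:.2e}")

# Streamline projections on z = const are logarithmic spirals (Eq. (9)):
# phi - phi0 = S * ln(r / r0), with twist parameter S = Re_phi / Re_r.
```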
To visualize the flow pattern, which is necessary for comparison with experimental results, the stream function Ψ is used in [27]. The radial and axial projections of the gas velocity are related to the stream function in the usual way. Using (7) as well as (8) and (9), an expression is obtained in [27] for the dimensionless stream function Ψ̄ = Ψ/(νr_0 Re_r) in the form Ψ̄ = a(r/r_0)^2 + b(r/r_0)^(Re_r+2) + c(r/r_0)^4 − z/r_0.
The pressure distribution for the resulting family of solutions has the form [27] p = p_∞ − 0.5ρ(ν/r)^2(Re_r^2 + Re_φ^2) + ρ(ν/r_0)^2 P z/r_0. It follows that the pressure reaches a minimum at r → 0. For most practical applications, Re_φ^2 >> Re_r^2; therefore, the minimum pressure is mainly associated with the presence of vortex motion. The decrease in pressure reflects the so-called cyclostrophic balance, i.e., the mutual counteraction of the centrifugal force and the radial pressure gradient. The pressure may also decrease or increase in the axial direction, depending on the sign of P.
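The r-dependence of this pressure law is easy to probe numerically; the sketch below uses the same illustrative parameters as above and shows the pressure deficit deepening as r → 0, dominated by the Re_φ^2 term.

```python
rho, nu, r0 = 1.2, 1.5e-5, 1.0   # air density, viscosity, length scale; assumed
Re_r, Re_phi, P = -2.0, 100.0, 0.0
p_inf = 101325.0                 # ambient pressure, Pa

def pressure(r, z):
    """Pressure field of the family of solutions of [27] (the z-term vanishes for P = 0)."""
    return (p_inf
            - 0.5 * rho * (nu / r) ** 2 * (Re_r ** 2 + Re_phi ** 2)
            + rho * (nu / r0) ** 2 * P * z / r0)

for r in (0.01, 0.1, 1.0):
    print(f"r = {r:5.2f} m: p - p_inf = {pressure(r, 0.0) - p_inf:+.3e} Pa")
```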
Solutions of [27] describe a wide range of different vortex flows, including tornadoes. Solution (7) makes it possible to analyze the formation of a tornado near the surface, as well as to interpret the effect of a sharp expansion of the funnel at a certain height above the ground. Figure 2a shows the meridional streamlines in the area where the tornado funnel is expanding [27]. Figure 2b illustrates the formation of a tornado near the Earth's surface [27].
In [28], a mathematical model describing the formation of intense atmospheric vortices of the tornado type due to instability is constructed. The instability condition is an increase in the vertical component of the air velocity toward the Earth's surface or an increase in the concentration of suspended particles. Such conditions can be realized in some cases in thunderclouds and over the dusty or snow-covered surface of the Earth. The initial stage of tornado development is studied in [28].
When writing the equations of geophysical hydrodynamics, taking into account the presence of solid (or liquid) suspended particles of density ρ_p in air of density ρ_a, a linear relationship between turbulent friction and the velocity of motion was assumed (γ is the proportionality coefficient). The density of the mixture of air and particles was thus ρ = ρ_a(1 − Φ) + ρ_p Φ, where Φ is the concentration of suspended particles. It should be noted that, in order to conduct an analytical study, the vertical velocity component U_z(x, y, z, τ) was considered known from observations in [28]. Then, an equation for the vertical component of vorticity, Equation (11), was constructed in [28]. To do this, the equation for U_x is differentiated with respect to y and the equation for U_y with respect to x, after which the first equation is subtracted from the second.
Further, in [28], an analysis of the equation obtained above for the vertical component of vorticity was conducted. For this, Equation (11) was presented in the form of Equation (12). The first term on the right-hand side of Equation (12) is called, in atmospheric physics, the convergence of flows. Available observational data [29,30] indicate that it is the convergence of flows that dominates the initial stage of vortex development. With the use of the continuity equation, the convergence of flows can be expressed in terms of the vertical velocity gradient, representing (12) in the form of Equation (14). Integrating (14) in time from 0 to τ, we obtain Equation (15) [28], where ω_z0 is the initial (at τ = 0) value of the vertical vorticity and Ũ_z is the vertical velocity taking into account the gravitational settling of the suspended particles, i.e., Ũ_z = U_z + aσΦ. Expression (15) clearly shows that the growth of vorticity in tornadoes can occur exponentially, which is characteristic of explosive instability [28]. In the absence of convergence, Equation (11) takes the simple form Dω_z/Dτ = −γω_z + M, where M is the right-hand side of Equation (11); i.e., there is no explosive instability. The presence in the equations of motion of a friction coefficient γ with the dimension of frequency, which has the physical meaning of momentum loss in collisions of particles moving at different velocities, leads to exponential attenuation of the vorticity ω_z over time.
From the analysis of Equation (15), it follows that explosive instability for vortices with a vertical axis of rotation is realized under condition (16) [28].
In [28], it is noted that under normal conditions there is no vertical velocity component in the troposphere. Vertical velocity appears when the air flow moves around mountains and hills and when air flows collide (for example, cold and dry air from Canada and warm and humid air from the Gulf of Mexico over the territory of the United States). In addition, as a result of convection, secondary motions occur over a strongly overheated surface, and thunderclouds may form, under which rapid vertical flows are observed. Precipitation does not occur in the front part of a thundercloud due to large wind speed gradients, and there are updrafts that lead to the formation of an anvil-shaped cloud. Precipitation forms in the central and rear parts of the thundercloud, contributing to the emergence of vertical air flows directed downward.
Under a mature thundercloud, the convergence of the wind speed is of the order of 10^-3 s^-1. The vertical velocity profile U_z(x, y, z, τ) can be represented as a parabola with a maximum U_zmax(x, y, τ) in the middle of a layer of thickness H, i.e., U_z = 4U_zmax(z/H)(1 − z/H). The concentration of suspended particles usually increases as the underlying surface is approached, i.e., ∂Φ/∂z > 0. It follows from (16) and the parabolic vertical velocity profile that, in this case, the condition of vortex instability takes the form 4(U_zmax/H)(1 − 2z/H) + aσ(∂Φ/∂z) > γ.
Transforming (15) with account of (14) and the parabolic vertical velocity profile, under the assumptions U_zmax = const and I/ω_z << γ, we obtain Formula (17) [28], suitable for calculating the vertical distribution of the vorticity ω_z.
In [28], some calculations are performed using the dependence of the vorticity, obtained above, on the vertical coordinate and time. The dependence of the vorticity ω_z on the vertical coordinate z in the region 0 < z < H/2 under a thundercloud was calculated at ω_z0 = 10^-8 s^-1, U_zmax = 10 m/s (downward motion), H = 1 km, ∂Φ/∂z = 0, and γ = 10^-4 s^-1. The results showed that a very strong vortex forms in the lower part of the storm cloud, whose rotating trunk descends below the cloud but does not reach the surface of the Earth. A similar pattern is characteristic of a tornado, which is defined as a rapidly rotating air funnel in contact with both the surface of the Earth and a cloud.
A similar vortex, but with the trunk pointing upwards, can also be calculated using Formula (17) in the region H/2 < z < H by setting U_zmax < 0 (upward motion). Calculations of the ascending vortex by Formula (17), i.e., of the dependence of the vorticity ω_z on the vertical coordinate z above the underlying surface, were performed at ω_z0 = 10^-8 s^-1, U_zmax = −10 m/s (upward motion), H = 1 km, ∂Φ/∂z = 0, and γ = 0.01 s^-1. The results showed that the ascending vortex grows with time, rising upwards. The value of the vorticity above the underlying surface is lower than under a thundercloud because of the stronger friction near the Earth's surface (all other parameters being equal).
Similar tornadoes, each in the form of a funnel with its base on the ground and a trunk rising to the clouds, have been repeatedly observed. A fully developed tornado is obtained by the closure of two vortices [28], one coming from a cloud and one from the surface, in the region of the constriction ∂U_z/∂z = 0. For the accepted parabolic velocity profile, this matching takes place at z = H/2. A significant role in the formation of the vortex according to (17) should be played by the concentration gradient of suspended particles, ∂Φ/∂z, which must therefore be taken into account.
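Since Formula (17) itself is not reproduced above, the following sketch assumes the exponential form suggested by (15) and condition (16), ω_z(z, τ) ≈ ω_z0 exp{[4(U_zmax/H)(1 − 2z/H) + aσ(∂Φ/∂z) − γ]τ}; this reconstruction and the chosen evaluation time are assumptions, while the parameter values are those quoted from [28].

```python
import math

omega_z0 = 1.0e-8  # initial vorticity, 1/s (value used in [28])
U_zmax = 10.0      # m/s (downward motion under the cloud, as quoted)
H = 1.0e3          # layer thickness, m
gamma = 1.0e-4     # friction coefficient, 1/s
conc_term = 0.0    # a*sigma*(dPhi/dz), neglected as in the quoted calculation

def growth_rate(z):
    """Exponent of the explosive instability; cf. condition (16)."""
    return 4.0 * (U_zmax / H) * (1.0 - 2.0 * z / H) + conc_term - gamma

tau = 60.0  # evaluation time, s (assumed)
for z in (0.0, 0.25 * H, 0.45 * H):
    omega = omega_z0 * math.exp(growth_rate(z) * tau)
    print(f"z = {z:6.0f} m: rate = {growth_rate(z):+.4f} 1/s, omega_z = {omega:.3e} 1/s")
```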
Thermodynamic Models All of the hydrodynamic models of tornado-like flows described above did not directly take into account the most important "thermodynamic" factor: the convective instability of the atmosphere caused by large temperature gradients in the surface layer. Below are simple thermodynamic models of tornadoes that take into account this most important mechanism of atmospheric vortex generation.
One of the first mathematical models of tornado-like vortices (Gutman's model) was proposed in [31]. The strength of this model is the attempt to take into account the stratification of the atmosphere depending on a specific synoptic situation. Here are just two basic equations of the original system of equations for a tornado [31]: the equation of thermodynamics of moist air, Equation (18), and the equation for the vertical velocity, Equation (19), where T′(r, z) and p′(r, z) are the deviations of temperature and pressure from the corresponding values T(z) and p(z) at a great distance from the tornado, Γ_V is the wet-adiabatic gradient at temperature T and pressure p, R is the gas constant for air, and a_t and ν_t are the coefficients of turbulent thermal conductivity and turbulent viscosity, respectively. In the process of solving the above system, the principal role is played by the term αU_z on the right-hand side of (18), which, for α > 0, is proportional to the intensity of conversion of the energy of convective instability. As noted in [31], the left-hand side of (19) contains terms that are negligible and not taken into account in the analysis of most meteorological processes, namely, −β(U_r ∂(p′/p)/∂r + U_z ∂(p′/p)/∂z). These terms account for the heat spent on the expansion or compression of air when its pressure changes; they become commensurate with the other terms of the equation for the pressure drops typical of a tornado axis (p′ = 50-100 mbar).
Note also that the equation of cyclostrophic balance is used for the azimuthal velocity. As boundary conditions on the tornado axis (r = 0), the radial and azimuthal velocities as well as the radial gradients of the vertical velocity and of the temperature deviation T′ from the equilibrium value are assumed to be equal to zero: U_r = U_φ = 0, ∂U_z/∂r = ∂T′/∂r = 0. At a great distance from the vortex (r → ∞), all disturbances should attenuate, so the gas-dynamic (radial, azimuthal, and vertical velocity components) and thermodynamic (pressure and temperature deviations) parameters should tend to zero: U_r → 0, U_φ → 0, U_z → 0, p′ → 0, and T′ → 0.
The intensity of rotation in [31] was set by specifying the value of the circulation along a circle of sufficiently large radius centered on the axis of the tornado. In order to obtain an analytical solution to the problem, it was assumed that ν_t = a_t = const, T = const, λ = const, and α = const, these parameters taking values averaged over the entire stratified unstable (α > 0) layer of the atmosphere "penetrated" by the tornado. Further, in order to reduce the initial system to a system of ordinary differential equations, the variable r is replaced in [31] by the variable ζ = r^2 √(αλ)/(4aν_t), where a is an arbitrary constant.
In Ref. [31], as mentioned above, the rotation in the parent cloud (tornado-cyclone) was prescribed and not found in the course of solving the problem. From the solution of the system of equations, formulas for calculating the main characteristics of a tornado are obtained. To calculate the parameters of the tornado, the following circulation value was taken [32]: Γ = 7.5 × 10^3 m^2/s. For the remaining dimensional quantities, the most probable values were adopted.
With these values, U_φmax = 0.3((αλ)^(1/4)/√ν_t)Γ = 75 m/s was found for the maximum azimuthal velocity, attained at r ≈ 60 m. The pressure in a tornado is always lower than the pressure in the ambient air. For the tornado axis, where the pressure is lowest, p′ = −0.15(ρ√(αλ)/ν_t)Γ^2 ≈ −100 mbar was obtained. The angular velocity is zero at the periphery, reaching its maximum at the center of the tornado.
The resulting solution showed that the vertical velocity, generally increasing with height, has a negative component near the axis of rotation at all heights. This is due to the strong pressure drop in the tornado and the lack of air inflow from below (the inflow of air from the sides is slowed down by the centrifugal force). The presence of a negative component leads to the appearance of a "compensatory" descending jet in the lower part of the tornado. With height, the velocity and width of the descending flow decrease in absolute magnitude and vanish at the vertical coordinate z ≈ 0.1√(λ/α)(βΓ^2)/(RTν_t) ≈ 560 m. It can be seen that the velocity of the descending flow is proportional to the square of the circulation (∼Γ^2), while the maximum azimuthal velocity (see above) is proportional to the circulation only in the first degree (∼Γ).
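Because U_φmax ∝ Γ and the axial pressure drop ∝ Γ^2, the reference values quoted above (75 m/s and ≈100 mbar at Γ = 7.5 × 10^3 m^2/s) suffice for a scaling sketch; the function below is a hypothetical helper built on this proportionality, not the full model of [31].

```python
def gutman_scaling(Gamma, Gamma_ref=7.5e3, U_ref=75.0, dp_ref=100.0):
    """Scale the maximum azimuthal velocity (~Gamma) and the axial pressure drop
    (~Gamma^2) from the reference values quoted for Gutman's model [31]."""
    s = Gamma / Gamma_ref
    return U_ref * s, dp_ref * s ** 2

for Gamma in (5.0e3, 7.5e3, 1.0e4):
    u, dp = gutman_scaling(Gamma)
    print(f"Gamma = {Gamma:.1e} m^2/s: U_phi_max ~ {u:5.1f} m/s, |dp| ~ {dp:6.1f} mbar")
```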
The conclusions of [31] showed that, to simulate a tornado-like vortex, it is necessary first to calculate the rotating parent cloud (tornado-cyclone) and then solve the problem of the emergence of a tornado from the tornado-cyclone due to unstable stratification. Note that the solution of [31] admits two limiting cases: (1) in a stably stratified atmosphere, the resulting solution vanishes, since convective instability is a necessary condition for the occurrence of a tornado, and (2) in the absence of rotation, the solution found turns into the known expressions describing an upward convective flow over an overheated surface.
The model of [31] received further development in [33-35] and other works. In [33,34], a different solution of the nonlinear differential equation obtained in [31] was found. According to this solution, a descending current surrounded by an ascending flow takes place in the central part of the tornado.
In [35], it is shown that the downward flow on the vortex axis occurs when the vertical pressure gradient ∂p/∂z is taken into account in the equation for the vertical velocity U_z. In this work, two layers of the atmosphere were considered: (1) the lower one with unstable stratification (0 ≤ z ≤ h, where h is the height of this layer) and (2) the upper one with stable stratification (z > h). These layers were formed by setting a temperature gradient that decreases with height. The following boundary conditions, different from those of [31], were assumed on the tornado axis (r = 0): U_z = 0, ∂U_φ/∂z = ∂T′/∂z = 0. At a great distance from the vortex (r → ∞), conditions other than those of [31] were set in the form U_z = 0, p′ = 0, T′ = 0, and ∂(U_φ r)/∂r = 0.
Note that, unlike [31], a condition is also set for the attenuation of all disturbances at high altitudes (z → ∞): U_φ = U_z = 0, p′ = 0, and T′ = 0. As for the initial conditions, at τ = 0 the medium is at rest (U_φ = U_z = 0), and at the moment of time τ = τ_0, rotation (U_φ = U_φ0) is set at the periphery of the vortex.
As a result of the calculations performed in [35] at h = 3 km, the following characteristic parameters of a tornado-like vortex were found: (1) the diameter of the vortex was about 1 km (the boundary was determined using the condition U_z = 0.1U_zmax); (2) the maximum azimuthal velocity was U_φmax = 75 m/s (r = 45 m, τ = 56 min); (3) the maximum vertical velocity was U_zmax = 45 m/s (r = 0, z = 1700 m, τ = 36 min); (4) the maximum pressure drop reached Δp_max = 110 hPa (r = 0, z = 0, τ = 56 min); (5) the maximum heating was T′_max = 18 °C (r = 0, z = 1800 m, τ = 60 min); and (6) the maximum cooling in the lower part of the vortex was several degrees.
The main drawback of the model of [35] is an extremely rough description of turbulent friction. The coefficient of turbulent shear viscosity was assumed to be ν_t = 10 m^2/s, which is probably much less than the real values. As a result, the calculated vortex practically did not weaken (even 120 min after the start of the computation).
In [36], some boundary and initial conditions were changed. In this work, for the development of a non-rotating convective "pipe", an initial temperature perturbation T_0 = const was set at the initial moment of time (τ = 0). In order to impart an initial twist, the initial value of the azimuthal velocity (U_φ0 = 10 m/s), independent of height, was set at a sufficient distance from the axis of the vortex (r = 1000 m) and maintained unchanged for several time steps. Thus, in [36], a vortex developing from an ascending convective flow swirled at the periphery was calculated. The study showed that the stability of the numerical scheme depended strongly on the chosen values of the coefficient of turbulent viscosity ν_t, the convection parameter β, and the thickness of the surface boundary layer δ entering the boundary condition (on the Earth's surface) for the vertical velocity. As rightly noted in [37], the main idea of the studies described above, namely, obtaining a powerful tornado-like vortex from relatively weak convective currents arising over overheated surfaces, is questionable.
In [38], a simple hydrodynamic model of tornado-like vortices (Kurgansky's model) is proposed, developing Gutman's thermodynamic approach [31] and its subsequent modifications [35,36]. The analysis of [38] starts from the equation of thermodynamics of moist air, linearized with respect to the deviations of temperature T = T_e(z) + T′ and pressure p = p_e(z) + p′ from the values T_e and p_e in the atmosphere surrounding the vortex, which depend only on the altitude z. In [38], a vortex solution is obtained in the approximation of weak compressibility of atmospheric air in the dynamic sense. The main equation is solved together with the equations of motion and the continuity equation in the Boussinesq approximation. Transformation of the equation for the vertical velocity component, taken on the axis of symmetry (r = 0), together with the equation for the radial velocity component (cyclostrophic balance), makes it possible to obtain on the right-hand side of the final equation the convective available potential energy (see, for example, [39,40]), traditionally called CAPE. As a result, in [39], we come to the following special case of the equation (Equation (21)): U_φmax^2 = CAPE + U_z^2/2, where U_φmax is the maximum azimuthal velocity in the vortex, taken at the level of free convection (z = h), and U_z is the corresponding velocity of upward motion in the center of the vortex.
A formula similar to (21) was obtained from general considerations (without detailing the three-dimensional structure of the vortex) in the review [41] for the case of a compressible atmosphere. Consideration of the vortex structure is necessary to establish the connection between U_φmax^2 and U_z^2/2. For dry convective vortices, and without recourse to the cyclostrophic balance equation, an analog of Formula (21) was obtained in Refs. [42,43].
Let us make one important remark. When the term U_z^2/2 is neglected on the right-hand side of (21), a formula is obtained for the "thermodynamic velocity limit" [41,44-47], determined by the hydrostatic pressure deficit in the center of the vortex. Since the vertical velocity vanishes at z → ∞, the term U_z^2/2 describes the pressure deficit in the center of the vortex caused by the Bernoulli effect in the downward-tapering vortex core. The sum CAPE + U_z^2/2 on the right-hand side of (21) is equal to the pressure deficit U_φmax^2 maintained by the conditions of cyclostrophic balance.
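Read this way, Equation (21) is easy to evaluate; in the sketch below, the CAPE and U_z values are illustrative assumptions.

```python
import math

def u_phi_max(cape, u_z=0.0):
    """Maximum azimuthal velocity from Eq. (21): U_phi_max^2 = CAPE + U_z^2/2."""
    return math.sqrt(cape + 0.5 * u_z ** 2)

cape = 2000.0  # J/kg; an illustrative severe-storm value
print(f"thermodynamic velocity limit: {u_phi_max(cape):5.1f} m/s")
print(f"with U_z = 45 m/s:            {u_phi_max(cape, 45.0):5.1f} m/s")
```

Even with a substantial updraft, the predicted azimuthal velocities stay well below those of the strongest observed tornadoes, which is exactly the difficulty discussed next.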
One of the well-known problems in the study of tornadoes is that Formula (21) predicts maximum velocities significantly lower than those observed for known values of CAPE. In [38], an attempt is made to solve this problem by considering two (supercritical and subcritical) vortices and by a detailed analysis of the vertical helicity flux in the constructed "composite" vortex. For this purpose, the helicity balance condition is used. It is suggested that the helicity is generated by the buoyancy force in the main (subcritical) vortex updraft due to the correlation of buoyancy and vertical vorticity. The helicity is then transmitted downwards, where it dissipates due to turbulent viscosity, first in the vortex decay region and finally in the surface boundary layer (during the interaction of the supercritical vortex with the Earth's surface).
In conclusion, we note that the estimates and conclusions of [38] have a twofold meaning. Firstly, they indicate, in accordance with the observational data, that at a given value of the convective available potential energy (CAPE = const), the generation of atmospheric vortices of various intensities is possible (U_φmax = var). Secondly, and conversely, a vortex (tornado) of a given intensity (U_φmax = const) can form at different values of the convective potential energy (CAPE = var).
In [48], an approach (Renno's model) is proposed that develops the ideas of Shuleikin [49] and considers natural convection as a heat engine: a device that turns the heat accumulated in the lower layer of the atmosphere into mechanical work. The analysis is performed in the approximation of the Carnot cycle, which has the maximum thermodynamic efficiency and is bounded by hot and cold adiabats as well as hot and cold isotherms. The work produced by the heat engine in one cycle is equal to the total mechanical energy due to convective instability [48], namely, TCAPE, the total convective available potential energy received by the heat engine in one cycle in a reversible process and converted into mechanical energy. It includes the available energy converted into kinetic energy by both (ascending and descending) flows, i.e., TCAPE ≈ 2CAPE. As a result of the application of the first law of thermodynamics, the following relation is obtained for estimating the vertical velocity of the air flow: U_z = (µ^(-1) TCAPE)^(1/2).
In [50,51], simple thermodynamic models were developed to describe the intensity of "dust devils" and water tornadoes, respectively. The developed theoretical approach is based on the concept of the thermodynamic "efficiency" (by analogy with a heat engine) of atmospheric vortices, defined through the amounts of heat ∫_in T dS and ∫_out T dS at the entrance to and exit from the vortex (heat engine), respectively, where S is the entropy. The thermodynamic efficiency can be represented in the form η ≡ (T_h − T_s)/T_h, where T_h and T_s are the entropy-averaged temperatures of the heat source and sink, respectively. In [48], it was assumed that the temperature of the "dust devil" heat source is equal to the average temperature of the near-surface air. The use of the first law of thermodynamics [50] made it possible to obtain an equation for the pressure loss in the radial direction across the "dust devil". Further, in [50], assuming that the cyclostrophic balance condition and the equation of state of an ideal gas hold, expressions for the maximum tangential (azimuthal) velocity and the vertical velocity are obtained. In [51], relations close to those of [50] are derived for calculating the velocity components of water tornadoes, differing in the presence of terms responsible for the latent heat of vaporization. Based on the obtained relations, estimates of the pressure drops and of the tangential and vertical velocities were made, which correlate well with real measurements of the parameters of "dust devils" [52,53] and water tornadoes [54-57].
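A minimal numerical reading of these heat-engine estimates is sketched below; the temperatures, the value µ = 1, and the exponent 1/2 restored in U_z = (µ^(-1)TCAPE)^(1/2) are all assumptions for illustration.

```python
def carnot_efficiency(T_h, T_s):
    """Thermodynamic efficiency eta = (T_h - T_s)/T_h of the vortex heat engine."""
    return (T_h - T_s) / T_h

def vertical_velocity(tcape, mu=1.0):
    """U_z = (TCAPE/mu)^(1/2); the exponent 1/2 and mu = 1 are assumptions."""
    return (tcape / mu) ** 0.5

print(f"eta = {carnot_efficiency(T_h=310.0, T_s=280.0):.3f}")  # illustrative temperatures, K
print(f"U_z ~ {vertical_velocity(2.0 * 2000.0):.1f} m/s")      # TCAPE ~ 2*CAPE, CAPE assumed
```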
For anyone interested in thermodynamic models of tornadoes based on CAPE, we can recommend the relatively recently published paper [58]. It provides a brief overview of three main varieties of mathematical models of this kind: (1) models based on the balance of entropy [48] and their modifications, (2) models based on two height scales [59] arising from a mismatch between the latent heating and radiative cooling profiles, and (3) models based on zero buoyancy [60].
There are works [61-64] showing that intense atmospheric vortices (called macrovortices in these studies), including tornadoes, can occur due to the presence of mesovortices in the atmosphere. Mesovortices are vortices whose scales are much smaller than the external scales of the phenomenon under consideration (in this case, a tornado). It is shown in [61,62] that the energy of turbulent air motion can be transferred along the hierarchy of scales, from mesovortices to macrovortices, at a certain ratio E_mes/E_mac of the energies of these vortices. In these studies, the possibility of generating intense macrovortices from the initial energy of mesovortices was demonstrated for the first time in a numerical simulation of an axisymmetric vortex in a one-dimensional incompressible non-stratified atmosphere. It is revealed that when the energy of the mesovortices runs out, the reverse process of energy leaving into the mesoscale begins.
The mechanisms of the origin of the initial field of mesovortices can be different: the formation of coherent structures of 50-500 m scale in the surface layer of the atmosphere [65]; vortices in buoyant turbulent jets caused by both natural and anthropogenic factors [66]; anomalies of the average monthly temperature [67]; and, of course, the destruction of global vortices, for example, tropical cyclones [68].
In [63,64], a numerical model of tornado development in a three-dimensional compressible dry-adiabatic atmosphere from a cloud of mesovortices was implemented. At the initial moment of time, the vertical and radial velocity components were absent; a weak calm cyclonic wind with an amplitude of 1.5 m/s was set. As a result of the numerical modeling, the features of the formation of the vertical-radial circulation and of the spiral structure characteristic of tornadoes were studied. In general, the mushroom-shaped structure of the tornado formed in about one minute. The simulated vortex structure persisted for several minutes (the velocity varied in the range of 43-35 m/s) and then gradually decayed to 12 m/s over half an hour, which is typical for tornadoes of low and medium intensity.
It is noted in [63] that the hypothesis of a dry-adiabatic atmosphere does not allow for modeling the slow processes of accumulation of the energy needed by mesovortices for the generation of tornadoes. To do this, it is necessary to take into account the influx of energy due to the heating of the underlying surface by solar radiation, the heat of phase transformations, and other factors.
Mathematical Modeling: Numerical Simulation Despite the obvious difficulties in setting correct boundary and initial conditions, there is a colossal amount of numerical calculations of tornado-like flows.
Main Trends of Numerical Modeling All numerical studies of tornado-like vortices can be conditionally divided into three classes.
The first class of studies is devoted to solving the axisymmetric Navier-Stokes equations in a two-dimensional cylindrical coordinate system. However, in [69], where the structure and dynamics of axisymmetric tornado-like vortices were studied, the following was shown: in the case of the sudden expansion and the beginning of "wandering" (precession) of the vortex before its decay, its structure ceases to be axisymmetric. Therefore, the flow under study cannot be adequately described by an axisymmetric mathematical model.
The second class of studies is devoted to solving full-scale three-dimensional equations and comparing the obtained results with the characteristics of real natural vortices.
The third class is devoted to solving three-dimensional laboratory-scale equations and comparing the obtained results with the characteristics of model (laboratory-simulated) tornadoes.
Below, we consider and analyze some of the results of these studies.
As a result of the analysis of the data obtained, the following conclusions were made in [70]: (1) the size of the vortex core is mainly a function of the twist parameter; (2) the size of the core does not depend on the Reynolds number at large values of the latter; (3) the size of the core ceases to depend on viscosity when the latter reaches small (molecular) values; and (4) the regions of the practically stationary core and of the non-rotating external flow are separated by a thin layer of high vorticity.
In [71], it is noted that the most important parameter determining the structure of the vortex is the twist parameter. The results of calculations for different values of the twist parameter (S = 0-1.0) at a Reynolds number (based on the radial velocity) of the order of 10^3 made it possible to draw the following conclusions: (1) when S = 0, the flow separates from the lower surface due to a negative pressure gradient; (2) when S = 0.1, the separation of the flow, still taking place, deflects the vorticity vector around the corner region, thus preventing the formation of a concentrated vortex at short distances from the surface; (3) when S = 0.4, the flow no longer detaches from the lower surface, and convergence occurs, generating large vorticity and vertical velocity in the corner region; the breakdown of the vortex occurs above this region, with the propagation of inertial waves of large amplitude downstream; and (4) when S = 1.0, the downward flow reaches the lower surface, and the vortex moves very close to the surface.
In [72], a numerical simulation of the non-stationary three-dimensional flow in a vortex chamber [14] was performed. At the initial moment of time, the parameters of an axisymmetric non-rotating flow were calculated, in which air enters through the sides below and exits through the upper part of the chamber. The distributions of the three components (radial, azimuthal, and vertical) of the velocity vector and of the pressure in the meridional section, and their development over time, were obtained. It is shown that when rotation is superimposed on the upward current at the lower levels, the structure of the flow changes from "single-cell" (upward flow everywhere) to "double-cell" (an updraft surrounding a central downward flow).
In [73], a simple model of the flow in a vortex chamber is proposed. Assumptions are made about the weak dependence of the main flow characteristics on height, which is consistent with the experimental data. This allows an integration over the vertical coordinate, which makes it possible to reduce the three-dimensional equations to two-dimensional ones and, with the assumption of axisymmetry, to one-dimensional equations. The calculations performed showed that such features of the vortex dynamics as the development of the downward flow and the expansion of the core are the result of the pressure distribution at the top of the chamber. The dependence of the vertical velocity and pressure distributions on the twist parameter is analyzed. The presence of two modes was revealed: turbulent and laminar, at high and low twist parameters, respectively. An explanation is given of why the pressure in a turbulent vortex is much higher than in a non-turbulent one with the same twist parameter (S = const) and volume flow rate (Q = const).
In [74], numerical investigations of the vortices observed in experiments [14,16] were continued. As a result of the calculations, stationary fields of all components of the velocity vector, U_r = U_r(r, z), U_φ = U_φ(r, z), and U_z = U_z(r, z), of the vertical vorticity ω_z = ω_z(r, z), the stream function Ψ = Ψ(r, z), and the pressure p = p(r, z) were obtained for different values of the twist parameter (S = 0.1-1.0) and viscosity (ν = 0.93-10 × 10^-4 m^2/s). It was found that at a relatively low twist parameter, the vortex is concentrated and laminar. In this case, the radial incoming air flow reaches the axis of the chamber, and the vertical velocity takes positive values everywhere. An increase in the twist parameter leads to the breakdown of the vortex, with a free critical point on the axis. Below this point, the flow is strictly ascending and laminar, while immediately adjacent to it, a region of weak descending and highly turbulent flow is formed. It should be noted that at high altitudes, the flow becomes ascending again. A further increase in the twist parameter displaces the critical point towards the lower wall of the chamber; the flow becomes descending and turbulent along the entire axis and acquires a biconical structure. The highest values of the twist parameter correspond to a flow characterized by a wide inner region occupying a significant part of the entire vortex chamber. Secondary vortices develop near the boundary of the ascending and descending flows, where large gradients of the main parameters (all velocity components and the vertical vorticity) occur. It was found that an increase in viscosity from the laminar (molecular) value to the turbulent one intensifies mixing processes. This delays the formation of the inner region and reduces the gradients of the main parameters at the boundary of the ascending and descending flows.
Studies [75-79] are dedicated to the analysis of the dynamics of tornado-like flows. The analysis is conducted using the large eddy simulation (LES) method.
In [75], the possible role of turbulence in the interaction of tornadoes with the Earth's surface is studied. The influence of secondary helical vortices developing around the main vortex on its kinematics is analyzed. It is found that these intense secondary vortices lead to the appearance of velocity fluctuations whose magnitude is about 1/3 of the averaged velocity.
In [76], three-dimensional modeling of the non-stationary interaction of a tornado-like vortex with a surface is carried out. The influence of the main physical parameters (circulation, horizontal convergence, effective roughness, vortex speed, and structure of the inflow) on the parameters of the "corner flow" (the region where the central vortex reaches the Earth's surface) is considered. It is shown that the main parameter determining the dynamics of the corner flow is the radial inflow, in the wall layer, of fluid with low angular momentum relative to the momentum in the main vortex above it.
The influence of the compressibility (Mach number) of the tornado-like flow on its characteristics is studied in [77]. The conclusion is drawn that compressibility has an insignificant influence on the dynamics of the vortex. In [78,79], the possibilities of intensification of quasi-stationary and non-stationary turbulent vortices are investigated. The key role of the corner-flow twist parameter in the intensification (increase in azimuthal velocity and decrease in pressure) of the vortex in the surface region is revealed.
In [80-82], a turbulent LES model was developed and verified using laboratory experimental data. The Reynolds-averaged Navier-Stokes equations were used for the vertical and radial velocity components, with terms D_z and D_r responsible for diffusion.
Two-Phase Nature of Tornado The study of the features of the motion of the dispersed phase (cloud drops, raindrops, particles, and debris) in tornado-like vortices is of considerable interest for several reasons.
The first reason is that the presence of a dispersed phase in the form of drops, soil particles, and debris visualizes (makes visible) atmospheric vortices [3]. Tracer particles of low inertia are used for the physical modeling of vortex structures in laboratory conditions [83-85]. Figure 3 shows a typical frame capturing a laboratory vortex: the vortex funnel and the debris cloud, visualized by low-inertia and large particles, respectively, are clearly seen in this photograph. Even in numerical studies, a technique is used that consists of introducing low-inertia particles following the streamlines of the carrier air in order to visualize the calculated flow patterns [82].
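In the same spirit, the sketch below advects passive (zero-inertia) tracers in the planar projections of the analytic vortex sink of Equation (7); the Reynolds numbers and time step are assumed, and the resulting trajectory is the logarithmic spiral of Equation (9).

```python
import math

nu, r0 = 1.5e-5, 1.0
Re_r, Re_phi = -200.0, 2000.0  # sink with strong swirl; assumed values

def velocity_rphi(r):
    """Planar projections of the vortex sink: U_r and U_phi."""
    return Re_r * nu / r, Re_phi * nu / r

# Advect a passive tracer with explicit Euler steps in polar coordinates (r, phi).
r, phi, dt = 0.5, 0.0, 5.0
for step in range(5):
    u_r, u_phi = velocity_rphi(r)
    r += u_r * dt
    phi += (u_phi / r) * dt
    print(f"step {step + 1}: r = {r:.3f} m, phi = {math.degrees(phi):8.1f} deg")
```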
The second reason is that the possibility of measuring the velocity of the dispersed phase opens the way to studying the dynamics of the vortex structure. This applies both to laboratory conditions [86] and to natural vortices. It is known that even simple video recording of atmospheric vortices can provide useful information through measurements of the velocity of debris of various masses.
The third reason is that the latent heats of phase transformations (primarily condensation and evaporation) during the formation (disappearance) of droplets have a significant effect on the generation process, dynamics, and stability of tornado-like vortices [3].
The fourth reason is the reverse effect of the dispersed phase on the behavior of the vortex structure. Numerous works show that, at certain concentrations, the dispersed phase can have a significant effect on the characteristics of an atmospheric vortex and its behavior (up to its decay) [87-90].
The fifth reason is that the presence of debris and other dispersed inclusions can make a decisive contribution to the negative consequences (destruction and casualties) of a tornado [3].
Below are the results of works that consider some aspects of the two-phase nature of vortex structures, namely, phase transformations and the features of particle motion along with their reverse effect on the characteristics of the carrier air.
Processes of Evaporation and Condensation A pioneering study that considers the process of tornado formation due to rotation in a thundercloud (modeled "from above") is [91]. This study developed a three-dimensional model of a thundercloud based on compressible gas equations that take into account the Coriolis force. The model considers the content of water vapor, cloud droplets, and raindrops; the processes of condensation and evaporation; and the corresponding latent heats of phase transformations.
Additional transport equations for all unknown quantities φ, namely, the potential temperature Θ = T/Π (Π = (p/p_0)^((k−1)/k)) and the concentrations of water vapor q_v, cloud droplets q_c, and raindrops q_r, were written in the generalized form (29), where M_φ denotes terms defined by "microphysical" processes and D_φ denotes terms determined by the intensity of turbulent transfer.
The expressions for M_φ = M_Θ, M_qv, M_qc, and M_qr in [91] involve γ = Λ/(c_p Π), where Λ is the latent heat of vaporization, q_vs is the saturation mixing ratio, and dq_vs/dτ is the rate of condensation or evaporation of cloud droplets q_c. The terms A_r, C_r, and E_r are the rates of autoconversion, accretion, and evaporation of raindrops, respectively, and W is the fallout (sedimentation) rate of raindrops.
In [91], a number of assumptions were made under which the "turbulent" terms D_φ = D_Θ, D_qv, D_qc, and D_qr in (29) were expressed through the coefficient of turbulent viscosity ν_t; primes denote the fluctuating values of the parameters. As a result of numerical calculations, a number of interesting results were obtained: (1) flow bifurcation is initiated by the high concentration of water droplets realized in the central part of the developing updraft; (2) the updrafts are supported by an influx of moist surface air, while rain falls between them; and (3) all of the above leads to the realization of self-sustaining convection in the cloud under consideration.
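The raindrop source terms A_r and C_r described above are of the classical Kessler warm-rain type; the sketch below performs one explicit update of this kind, with rate constants that are textbook-style assumptions rather than the values of [91].

```python
def kessler_step(q_c, q_r, dt, k1=1.0e-3, k2=2.2, q_c0=5.0e-4):
    """One explicit step of Kessler-type warm-rain source terms (a sketch;
    the constants k1 [1/s], k2 [1/s], and q_c0 [kg/kg] are assumed, not from [91]).
    A_r: autoconversion of cloud water to rain above the threshold q_c0.
    C_r: accretion (collection) of cloud droplets by falling rain."""
    A_r = k1 * max(q_c - q_c0, 0.0)
    C_r = k2 * q_c * q_r ** 0.875
    dq = min((A_r + C_r) * dt, q_c)  # cannot convert more cloud water than exists
    return q_c - dq, q_r + dq

q_c, q_r = 1.5e-3, 1.0e-4            # cloud and rain mixing ratios, kg/kg (assumed)
for n in range(3):
    q_c, q_r = kessler_step(q_c, q_r, dt=10.0)
    print(f"t = {10 * (n + 1):3.0f} s: q_c = {q_c:.3e}, q_r = {q_r:.3e}")
```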
In [92], a three-dimensional simulation was performed that traced the processes of tornadogenesis within a storm supercell. Within 40 min, the generation and decay of two tornadoes were recorded. The lifetime of each tornado was approximately 10 min. The maximum speed in the surface region exceeded 60 m/s for both tornadoes. The following conclusions were made: (1) tornadogenesis is initiated by the growth of rotation in the region above the cloud base; (2) the intensification of rotation leads to a decrease in pressure, an increase in the forces caused by vertical pressure gradients, and the generation of an intense ascending current at this level; (3) the ascending current leads to a rapid increase in the convergence of air in the subcloud layer; (4) this leads to an increase in the vertical vorticity in the convergent flow, generating a tornado; (5) the weakening (dissipation) of the tornado begins with a decrease in the forces caused by pressure gradients in the vertical direction; and (6) the tornado dissolves due to the loss of its source of positive vertical vorticity.
In [93], the process of generation of a tornado-like vortex within a storm supercell was studied. For the modeling, a computer system previously created at the University of Colorado to calculate cloud dynamics was used. The system includes a number of blocks, one of which is a block of "microphysical" processes, which allows one to calculate the formation and growth of liquid (droplet) and solid (ice) particles as well as the features of their dynamics. To initiate the convection of air masses in the cloud, a thermal "bubble" of rectangular shape (height: 3 km; length: 10 km) was formed at the initial time in the center of the computational cell. The temperature of the "bubble" was 1.5 K higher, and its water vapor content 2 g/kg higher, than the corresponding values in the surrounding space. The calculations, performed with high spatial resolution, made it possible to obtain and analyze the dynamics of the fields of the various components of the air velocity vector as well as of the pressure and water vapor content in selected vertical and horizontal sections. The calculations showed that after 90 min, at the edge (east-southeast direction) of the storm supercell, a "pipe" of low pressure forms in the region of high horizontal gradients of the vertical velocity and begins to propagate towards the Earth's surface. Thus begins the formation of a strong vortex in the subcloud layer. Therefore, in [92,93], it was clearly shown that the initial stage of tornado formation proceeds over many tens of minutes.
Let us note another work [94], whose numerous results demonstrate the possibilities of numerical modeling of the dynamics of downward air flows. It used a mathematical model similar to that of [93]. It is shown that both the intensity and the duration of the tornado depend on the thermodynamic characteristics of the rotating downward flow, which in turn depend on the ambient humidity of the surface layer and on the nature of the precipitation in the rain curtain. Model tornadoes were more intense and long-lived in the case of relatively warm rotating downward flows, which contribute to a strong convergence of angular momentum in the surface layer. Warmer downward flows are formed in conditions of high relative humidity and relatively low precipitation concentration.
The importance of the effect of possible phase transformations on the dynamics of tornado-like atmospheric vortices was discussed in [95-97].
In [95], a system of equations describing the flow of moist air inside a tornado funnel was constructed, allowing an analytical solution for the quasi-stationary case (without considering the processes of tornado initiation and dissipation) based on two small parameters of the problem: (1) the specific humidity of air, ξ = 0.05-0.3 (the ratio of the mass of water vapor to the mass of moist air), and (2) the ratio of the tornado funnel radius to its height, r_0/L = 0.01-0.1. This work treats the processes of energy and mass transfer in a developed tornado as stationary, and the flow is considered in a rotating cylinder whose walls consist of hailstones, water droplets, and other objects sucked up by the tornado. Moisture from the air condenses on the surface of this cylinder. The walls of the tornado funnel are considered impermeable, and condensation occurs in a thin (compared to the radius of the tornado) layer on the inner surface of the cylinder. It is also assumed that the parameters of the walls remain unchanged during the entire lifetime of the tornado.
In the absence of data on the mass transfer coefficient from the flow of moist air to the walls of the tornado funnel, the diffusion equation for humidity (water vapor) was written in the approximate form (32), where U_z(0) is the velocity along the axis of the tornado funnel and D_t is the coefficient of turbulent diffusion of moisture in the tornado funnel. Taking into account the boundary condition ξ(r, z = 0) = ξ(0) = const, the solution of (32) was obtained in [95]. As a result, the solutions obtained in [95] introduced another important criterion for the existence of tornadoes: the critical vertical humidity gradient (dξ/dz)_cr, which determines the conditions for the onset of the updraft. In [96,97], modifications were made to the flow model of [95].
To numerically describe the hydrodynamics of the two-phase flow and the heat and mass transfer processes, taking into account condensation inside the tornado vortex, transport equations for averaged quantities were used, with the coefficients of turbulent viscosity, diffusion, and thermal conductivity constant over the entire computational domain. Water droplets formed due to bulk condensation were considered only in the equations of energy and diffusion, by introducing the corresponding source terms. According to the observational estimates presented in [98], the thermal power of a tornado of average intensity can reach Q = 1-10 GW. Calculations were performed for two power values, Q = 1 GW and Q = 5 GW, for small (r_0 = 20 m) and large (r_0 = 40 m) vortex radii. The results confirmed that the energy of water vapor condensation is sufficient to sustain the observed tornado lifespan.
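An order-of-magnitude check of this conclusion can be made by multiplying the moist-air mass flux through the funnel by the condensed moisture fraction and the latent heat; in the sketch below, only L_v is a physical constant, while the remaining inputs are illustrative assumptions.

```python
import math

L_v = 2.5e6    # J/kg, latent heat of vaporization of water
rho = 1.2      # kg/m^3, air density (illustrative)
U_z = 50.0     # m/s, updraft velocity in the funnel (illustrative)
r0 = 20.0      # m, funnel radius (the small-radius case from the text)
xi_c = 5.0e-3  # kg/kg, fraction of moisture actually condensing (assumed)

mass_flux = rho * U_z * math.pi * r0 ** 2  # kg/s of moist air through the funnel
Q = mass_flux * xi_c * L_v                 # W released by condensation
print(f"air mass flux ~ {mass_flux:.2e} kg/s, Q ~ {Q / 1e9:.2f} GW")
```

With these inputs, the released power comes out close to 1 GW, at the lower end of the range quoted from [98].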
Disperse Phase Motion in Tornado-like Flows In [99], the characteristics of the motion of solid particles with different inertial properties in a "dust devil" were studied on the basis of numerical simulations using the LES method and the Lagrangian trajectory approach. The motion of dust particles (density 2560 kg/m^3) of three different sizes (100 µm, 200 µm, and 300 µm) was calculated in a previously computed non-stationary velocity field of the air vortex. During the calculations, the trajectories of 20,000 particles of each size, injected in equal portions (400 particles) into the lower part of the vortex every 0.1 s, were analyzed. As a result, the spatial arrangement of all introduced dust particles in the "dust devil" after 5 s was obtained. The following conclusions were made: (1) the arrangement of the particles in space is significantly non-uniform; (2) the smallest, least inertial particles rise to a height of 25 m or more, and their spatial arrangement is the most uniform; (3) the decrease in the vertical air velocity with increasing distance from the surface, together with the increase in particle inertia, leads to a significant decrease in the maximum lift of the larger particles; and (4) the non-uniform spatial distribution of the larger particles is due to the lower values of their velocities caused by centrifugal forces.
In [100], the behavior of the trajectories and concentration fields of particles in the axisymmetric flow of a viscous incompressible fluid, modeling the interaction of a vertical vortex filament with a horizontal plane, was investigated on the basis of numerical calculations. For the description of the motion of the carrier gas phase, a self-similar solution of the Navier-Stokes equations obtained by Gol'dshtik [101] was used. The parameters of the dispersed phase, including the concentration, were calculated using the full Lagrangian approach along selected trajectories. The influence of the particles on the parameters of the carrier gas was not taken into account, as the volume and mass concentrations of the particles were assumed to be small [90,102].
As a result of the calculations in [100], the possibility of multiple intersections of particle trajectories and of the formation of "folds" in the concentration field of the dispersed phase was demonstrated. For heavy particles (exceeding the carrier phase in density), the formation of a "bowl-shaped" accumulation surface of the dispersed phase and of a zone of particle deposition near the base of the vortex was observed (Figure 4). When the force of gravity is taken into account (Fr = ∞), the edge of the bowl-shaped accumulation surface of the particles is twisted into a spiral around a circle. The position of this circle is determined by the balance of the hydrodynamic (drag), gravitational (gravity), and inertial (centrifugal) forces acting on the particles in the vortex flow.
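The competition between drag, gravity, and centrifugal effects discussed in [99,100] can be reproduced with a point-particle sketch; the carrier flow below (solid-body swirl plus uniform inflow) and all parameter values are assumptions, not the Gol'dshtik solution itself.

```python
import math

g = 9.81
rho_p, d_p, mu_air = 2560.0, 200e-6, 1.8e-5  # particle density/size from [99]; air viscosity
tau_p = rho_p * d_p ** 2 / (18.0 * mu_air)   # Stokes relaxation time, ~0.3 s

Omega, U_in = 1.0, -0.5  # solid-body swirl rate (1/s) and radial inflow (m/s); assumed

def gas_velocity(x, y):
    """Planar gas velocity: solid-body swirl U_phi = Omega*r plus uniform radial inflow."""
    r = max(math.hypot(x, y), 1e-9)
    u_r, u_phi = U_in, Omega * r
    return (u_r * x - u_phi * y) / r, (u_r * y + u_phi * x) / r

# Explicit Euler integration of dv/dt = (u_gas - v)/tau_p, plus gravitational settling.
x, y, vx, vy, vz = 5.0, 0.0, 0.0, 0.0, 0.0
dt = 1.0e-3
for _ in range(5000):                        # 5 s of physical time
    ux, uy = gas_velocity(x, y)
    vx += (ux - vx) / tau_p * dt
    vy += (uy - vy) / tau_p * dt
    vz += (-vz / tau_p - g) * dt             # still air vertically -> settling
    x, y = x + vx * dt, y + vy * dt
print(f"tau_p = {tau_p:.3f} s, final r = {math.hypot(x, y):.2f} m, settling v_z = {vz:.2f} m/s")
```

Despite the inflow, the heavy particle drifts outward, which is the centrifuging effect underlying the accumulation surfaces described above.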
In [103,104], a two-fluid tornado model is developed. The first (primary) fluid is water vapor, which condenses during a sharp pressure jump. The second (secondary) fluid is composed of solid particles picked up by the vortex. It should be noted that no proper description of the mathematical model used in [103,104] is given, but the results of numerical modeling of the life cycle of a tornado and of the effects of a tornado on a car moving along a highway and on a small house are presented.
In article [105], mathematical modeling of tornado-like vortices was conducted using the LES method. The features of the vortex throwing "projectiles" of two types were studied: a wooden board weighing 14 kg and a car weighing 1810 kg. Statistical distributions of the maximum values of the horizontal velocity components of these "projectiles" were obtained.
Article [106] presents a detailed analysis of the influence of solid, low-inertia debris on tornado characteristics. The modeling was performed on the basis of the Euler-Euler (two-fluid) model, which uses equations of the same type to describe the continuous and dispersed phases within the framework of the mechanics of interpenetrating media, with account taken of the particles' backward influence (in English-language publications, such calculations are called "two-way coupling").
Three dimensionless parameters that determine the dynamics of debris in the tornado were found: (1) the twist parameter of the corner flow, which determines the "type" (structure) of the tornado, S_c ≡ r_c Γ_∞^2/γ, where r_c, Γ_∞, and γ are the characteristic radius of the core above the corner-flow region, the angular momentum at a considerable distance from the core, and the loss of the flow's angular momentum in the "surface-corner-core" region, respectively; (2) a parameter that is a measure of the relative importance of the "centrifugation" of debris, defined as the ratio of the radial acceleration in the corner flow to the acceleration of free fall, A_a ≡ U_φc^2/(r_c g), where U_φc ≡ Γ_∞/r_c is the characteristic azimuthal velocity; and (3) a parameter that is a measure of the ease of lifting debris, defined as the ratio of the characteristic velocity of the tornado to the settling (terminal) velocity of the debris, A_v ≡ U_φc/W.
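The three parameters are straightforward to compute once the corner-flow scales are fixed; in the sketch below, every input value is hypothetical and serves only to show the evaluation.

```python
def debris_parameters(r_c, Gamma_inf, gamma_loss, W, g=9.81):
    """Dimensionless debris parameters from [106]:
    S_c - corner-flow twist parameter, A_a - centrifuging measure,
    A_v - ease-of-lifting measure (W is the debris settling velocity)."""
    U_phic = Gamma_inf / r_c
    S_c = r_c * Gamma_inf ** 2 / gamma_loss
    A_a = U_phic ** 2 / (r_c * g)
    A_v = U_phic / W
    return S_c, A_a, A_v

# Hypothetical inputs: core radius 50 m, far-field angular momentum 5e3 m^2/s,
# angular-momentum loss 1e7 (units matching S_c dimensionless), settling velocity 5 m/s.
S_c, A_a, A_v = debris_parameters(r_c=50.0, Gamma_inf=5.0e3, gamma_loss=1.0e7, W=5.0)
print(f"S_c = {S_c:.1f}, A_a = {A_a:.1f}, A_v = {A_v:.1f}")
```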
Dynamics of the Cyclostrophic Balance

As shown in [110], the compression of a developing surface cyclone occurs only when surface friction is taken into account. The cyclostrophic balance between the pressure gradient and the centrifugal force, which exists without surface friction, is disturbed (Figure 5). The pressure gradient (directed inward towards the axis) becomes greater than the centrifugal force (directed outward). This disturbs the cyclostrophic balance and leads to a strong radial inflow of air. Therefore, air parcels penetrate much closer to the axis than the equilibrium radius of cyclostrophic balance. To satisfy the continuity equation (mass conservation), the radial inflow must turn as it approaches the axis. It rises rapidly at high speed. The axial helical jet can be regarded as the upward continuation of the surface boundary layer. The "new" cyclostrophic balance maintains very low pressure at the axis.

In [111,112], it is proposed to use a certain fictitious force that creates false (artificial) horizontal vorticity. However, [113] notes that in meteorology, the laws of mass, energy, momentum, angular momentum, and entropy conservation must be inviolable. In [113], it is shown that rapid tornado genesis in modeling can be associated with a combination of the following factors: (1) excessively strong convective initiation in a very unstable atmosphere, which leads to a too-strong low-level updraft at the beginning of the supercell's life; (2) the use of a semi-slip lower boundary condition; (3) the absence of perturbations to excite significant turbulence, which leads to a too-strong friction force and too-large shear in a too-thin layer near the ground [114]; and (4) the use of an omnipresent, all-pervading external force to maintain a stable horizontally homogeneous medium; this force introduces additional degrees of freedom that allow the restrictions of the Taylor-Proudman theorem to be bypassed but leads to changes in the dynamics within the storm.

A possible mechanism for rapid tornado genesis in mathematical modeling can be described as follows. As a result of factor 4, the external force is balanced by three forces in the surrounding environment. This balance is concentrated at the ground by factors 2 and 3. Inside the storm, friction and the Coriolis force no longer balance the external force, so it creates strong horizontal vorticity in the shallow surface layer. The tilt and stretching of this vorticity (if it occurs along the flow) by the upward flow, according to factor 1, will create significant vertical vorticity at low heights.
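As a minimal quantitative illustration of the cyclostrophic balance discussed above, the sketch below integrates $dp/dr = \rho v_\varphi^2/r$ for an assumed Rankine-type vortex; for that profile, the total pressure deficit at the axis should equal $\rho v_{max}^2$, which the numerical integral reproduces. All parameter values are assumptions chosen for illustration only.

```python
import numpy as np

RHO = 1.2        # air density, kg/m^3 (assumed)
V_MAX = 80.0     # maximum azimuthal velocity, m/s (assumed)
R_CORE = 100.0   # core radius of the Rankine vortex, m (assumed)

def v_phi(r):
    """Rankine vortex: solid-body rotation inside the core, 1/r decay outside."""
    return np.where(r <= R_CORE, V_MAX * r / R_CORE, V_MAX * R_CORE / r)

# Cyclostrophic balance dp/dr = rho * v_phi^2 / r, integrated from the axis
# outward; the accumulated integral is the pressure deficit at the axis.
r = np.linspace(1e-3, 50 * R_CORE, 200_001)
dpdr = RHO * v_phi(r) ** 2 / r
deficit = np.sum(0.5 * (dpdr[1:] + dpdr[:-1]) * np.diff(r))  # trapezoid rule

print(f"numerical core pressure deficit: {deficit / 100:.1f} hPa")
print(f"analytical rho * v_max^2:        {RHO * V_MAX**2 / 100:.1f} hPa")
```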
Examples of the formation of vortices in high-temperature substances and under the action of a strong magnetic field, as well as under conditions of fast plasma processes, are possible [115][116][117][118].

The above-described tornado genesis mechanism is consistent with the results of natural measurements of the development of a tornado mesocyclone during the PECAN field experiment conducted in 2015, in South Dakota, which used an extensive network of stationary and mobile observation systems for low-level convective jets [119].

Main Mechanisms of Tornadogenesis

To date, much is known about the environmental conditions necessary for supercell tornado genesis. At the same time, the dynamics of ground-level vorticity in the process of tornado genesis itself are still not sufficiently studied.

In the literature, a very large number of seemingly contradictory mechanisms responsible for high values of ground-level vertical vorticity can be found. All of these mechanisms can be divided into two main classes [120].

The first mechanism class is based on the upward tilt of horizontal vorticity generated primarily baroclinically in the downdraft. This mechanism is referred to as the "downdraft mechanism". Recent works [121][122][123][124][125] represent this approach.
The second mechanism class is based on the upward tilt of horizontal vorticity near the ground (Figures 6 and 7), which is created by a strong horizontal gradient in the updraft (the "in-and-up mechanism"). Recent works [126][127][128][129][130][131][132][133] illustrate this approach well. In addition, we note that different vortices can form under different conditions [134][135][136][137][138][139][140][141].

In Ref. [120], it is clearly shown that both downdraft and updraft mechanisms play an important role in tornadogenesis. It is known that pre-tornadic maxima of vertical vorticity are generated primarily by the downdraft mechanism, while the dynamics of a fully developed tornado vortex are controlled by the updraft mechanism. This paper makes an important conclusion that there is a transition between these two mechanisms, which occurs during tornadogenesis. This transition is a result of the axisymmetrization of the pre-tornadic vortex "blob" and its intensification by vertical stretching. These processes facilitate the development of corner flow, the presence of which promotes the generation of vertical vorticity by tilting horizontal vorticity near the ground (i.e., through the in-and-up mechanism). Plasma vortexes are the subject of a separate study and require special consideration [142][143][144][145].

In conclusion, we mention works [119,146,147] in which the authors tried to take into account the influence of both mechanisms (downdraft mechanism and in-and-up mechanism) on tornado generation and dynamics in their calculations.

Main Factors of Tornadogenesis

The downdraft mechanism and in-and-up mechanism are needed to start the process of tornadogenesis. The role of both of these mechanisms is not yet fully understood. Since only the ascending mechanism can lead to the transformation of the horizontal vorticity into a vertical one, a transition between these mechanisms is likely to take place. The presence of a downward mechanism is necessary for the rapid intensification of a tornado and is necessary throughout the life of a tornado until its collapse.

Here, we summarize some factors facilitating tornado appearance (generation). The four major circumstances affecting the generation of the ascending air flow are as follows (see Figure 8):

(i) The presence of warm and, consequently, relatively light air near the underlying surface, which tends upwards;

(ii) A possible presence in the warm air of sand (soil) particles or drops (in the case of a waterspout), lifted from the surface, which often have a greater (compared to the air) temperature, resulting in additional heating of the air;

(iii) The presence of (due to various reasons) rotation (vorticity) of the warm air, which generates low-pressure regions at the center of the developing ascending flow, thus facilitating the condensation of water vapor in the air and the release of heat;

(iv) Intensified condensation of the water vapor in the warm ascending air, when it is lifted and interacts with cold air, facilitating its further heating and making it lighter.
Evidently, there exist four major factors affecting the formation of the tornado funnel; they are as follows (see Figure 8):

(1) The presence of cold and, consequently, relatively heavy air at the top, tending to descend to the underlying surface;

(2) The presence in the cold air of water drops, making it heavier (enhancing the effective density);

(3) The presence of rotation (vorticity) of the cold air, generating a low-pressure region at the center of the developing funnel, which facilitates further concentration of drops, their coagulation, and a higher effective density of the descending cold air;

(4) Intensified evaporation of drops in the cold air, which interacts with the warm air near the surface, facilitating its further cooling and making it heavier.

All of the above physical mechanisms should be reflected in modern numerical models of tornados. In terms of aerodynamics, a tornado is an analog of a tube carrying warm air upwards, rather than cold air downwards. This is evident, since the flow rate of the carried heated air greatly exceeds the flow rate of the cold air. The flow rate of the warm air, defined by the geometry (above all, by the cross-sectional area) of the carrying channel and the vertical velocity component, is one of the characteristics determining the tornado force. The vertical gradient of air temperature that characterizes the degree of atmosphere instability determines the speed of the ascending flows. The largest vertical temperature gradients are realized in low clouds (cumulonimbus clouds); therefore, the most violent tornados develop under such conditions, as a rule. The movement of the tornado funnel together with the cloud facilitates further involvement of new portions of warm air and is a factor increasing the tornado stability.
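A back-of-the-envelope sketch of the flow-rate characteristic mentioned above (the flow rate is set by the cross-sectional area of the carrying channel and the vertical velocity); the funnel radius, vertical velocity, and air density below are assumed illustrative values, not measurements.

```python
import math

# Assumed illustrative funnel parameters (not measured values):
RADIUS = 150.0    # effective funnel radius, m
W_UP = 40.0       # mean vertical velocity of the ascending warm air, m/s
RHO_WARM = 1.1    # density of the warm air, kg/m^3

area = math.pi * RADIUS**2   # cross-sectional area of the carrying channel, m^2
q_vol = area * W_UP          # volumetric flow rate of warm air, m^3/s
q_mass = RHO_WARM * q_vol    # mass flow rate, kg/s

print(f"Q_vol  = {q_vol:.3e} m^3/s")
print(f"Q_mass = {q_mass:.3e} kg/s")
```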
Conclusions

A review of works devoted to the mathematical modeling of air tornado-like vortices has been made. Analytical models of various levels of complexity created to date (based on the Bernoulli, Navier-Stokes, and vorticity equations) and the possibility of numerical modeling are described. The results of computational and theoretical studies of the conditions of formation, dynamics, and main characteristics of vertical concentrated vortices are given and analyzed. Special attention is paid to features taking into account the two-phase nature (phase transitions, the movement of particles in vertical vortices, and the particles' back effect on the characteristics) of tornado-like vortices.

It can be concluded that the main problems of modeling tornado-like vortices, related to predicting the probability of their occurrence, development, and control, largely remain unsolved.

Apparently, further progress in building the theory of concentrated tornado-like vortices will be determined by the following: (1) the ability to plan laboratory studies of the conditions for the generation, stability, and control of significantly non-stationary concentrated vortices; (2) the ability to improve measurements of the characteristics of real natural vortex structures; (3) the ability to develop mathematical models that adequately describe the structure and dynamics of both laboratory and natural non-stationary vortices throughout their life cycle (from generation to decay).

Figure 2. Meridional streamlines (a) at the place of funnel widening and (b) near the ground surface [27]. The horizontal axis r shows the distance from the axis of symmetry of the tornado. Streamlines are constructed for a = 10, b = 10, c = −10, Re_r = 4, and S = −50 in Figure 2a and a = c = 0, b = −0.25, Re_r = −4, and S = 50 in Figure 2b; the parameter values were chosen to satisfy the condition U_z → 0 at r → ∞ for streamlines approaching the Earth and for crosslinking the flow characteristics shown in Figure 2a,b.

Figure 5. Scheme of the intensification of the upward flow with the friction force taken into account: (a) an initial cyclostrophic balance, (b) a decrease in centrifugal force due to friction and decrease in air velocity, (c) a new cyclostrophic balance, and (d) an intensification of radial air inflow and upward movement. Here, F_p is the pressure gradient force, and F_c is the centrifugal force. The velocity and friction force vectors are not given.
Figure 6. Scheme of generation of horizontal vorticity and its inclination by ascending flows near the Earth's surface: (a) generation of horizontal vorticity by a horizontal buoyancy gradient, (b) generation of ascending flows by a horizontal buoyancy gradient, and (c) inclination of initially horizontal vortex filaments by a surrounding vertical flow.

Figure 8. Main factors affecting the generation of the ascending air flow and formation of the tornado funnel.

Here, µ is the dimensionless coefficient of mechanical energy dissipation; larger values of µ correspond to smaller values of the vertical velocities.
20,231.4
2023-07-26T00:00:00.000
[ "Environmental Science", "Physics" ]
A superelastochromic crystal

Chromism—color changes by external stimuli—has been intensively studied to develop smart materials because the stimuli are easily detected, as color changes, by eye or by common spectroscopy. Luminescent chromism has particularly attracted research interest because of its high sensitivity. The color changes typically proceed in a one-way, two-state cycle, i.e., a stimulus-induced state restores the initial state upon another stimulus. Chromic systems showing instant, biphasic color switching and spontaneous reversibility will have wider practical applicability. Here we report luminescent chromism having such characteristics, shown by mechanically controllable phase transitions in a luminescent organosuperelastic crystal. In mechanochromic luminescence, superelasticity—diffusion-less plastic deformation with spontaneous shape recoverability—enables real-time, reversible, and stepless control of the abundance ratio of biphasic color emissions via a single-crystal-to-single-crystal transformation by controlling a single stimulus, force stress. The unique chromic system, referred to as superelastochromism, holds potential for realizing informative molecule-based mechanical sensing.

Color changes by external stimuli, so-called chromism, have been intensively studied to develop smart materials due to the stimulus-responsiveness of chromic materials. Here the authors demonstrate luminescent chromism during a mechanically controllable phase transition in a luminescent organosuperelastic crystal.

Chromism 1 has been intensively studied with organic materials to exploit their colorability, transparency, designability, and other advantages in response to various stimuli such as photons, heat, electric charge, vapor, and mechanical stress. Fundamental research has a fascination with chromism because color changes based on wavelength shifts in absorption, emission, or reflection correlate with stimuli-responsive structural changes of materials on the subnanometer to micrometer scale. Chromism is also attractive from a practical viewpoint since the color changes are easily detectable by the naked eye or simple spectroscopy. The process of color changes induced by pressing, shearing, cutting, and other mechanical forces is called mechanochromism [2][3][4][5][6][7], which is particularly important because the most fundamental stimuli in nature are commonplace mechanical forces that can be generated without any special apparatus. Mechanochromism based on elastic deformation has been demonstrated in amorphous gel materials [8][9][10] and a few molecular crystals under high pressure (3-10 GPa) 11 to show continuous changes in structural periodicity and color, which has led to quantitative mechanical sensing. Mechanically induced defects can also induce mechanochromism 4. These systems exhibited spontaneously reversible chromism, but most chromic processes demand another stimulus for reverse chromism, i.e., reversion to the initial color. As we previously reported about the polymorph-dependent luminescence of organic crystals, the luminescence color depends on the difference in molecular packing in crystals, and no chemical reactions are involved. Information about such systems could prove fruitful for elucidating the crystal structure-property relationship 12,13.
Mechanochromism based on a phase transition, e.g., polymorphic 14 or crystal-to-amorphous 2,3, is spontaneously irreversible but shows a biphasic color change at a high resolution, which is useful for memory storage and sensing tiny forces. In this context, the focus of this work is organosuperelasticity, which we first discovered in 2014 15. Elasticity is a common physical property underlying the spontaneous shape recoverability of materials. Recently, research on the manner of elastic deformation of organic crystals has been intensive 16-20 despite a general perception of their brittleness. In contrast, superelasticity, or more specifically plastic deformation with spontaneous shape recoverability, is a minor and unusual physical property, except in special kinds of metallic solids called superelastic alloys and shape-memory alloys 21,22, and research is still in its infancy, especially in organic crystals 15,[23][24][25][26][27][28][29][30]. In elastic deformation, the density-gradient distribution of components causes strain to accumulate, whereas superelastic deformation can accommodate strain through orientation changes of domains upon phase transition or twinning; thus, superelastic deformation has a potential ability to abruptly change physical properties. Since superelastic materials were developed specifically as metal alloys, coupling of superelasticity to other physical properties has been limited to conductivity and, recently, magnetism 31. In organosuperelasticity, organic solids even in the single-crystal state show spontaneously reversible, mechanically induced phase transitions, indicating the potential functionality of superelasticity with dielectric and optical properties due to the organic materials' characteristics. Very recently, organoferroelasticity (diffusion-less plastic deformation leaving spontaneous strain without spontaneous shape recoverability) in luminescent organometallic crystals was reported 32. The luminescence is stable and unchanged during the organoferroelastic mechanical twinning, giving no mechanochromism. Ideally, superelastic solids have optical functionality, and the occurrences and amounts of the phase transitions can be easily controlled in real time as material strain caused by mechanical stress, enabling biphasic color switching with excellent controllability in direction, region, timing, and velocity (Fig. 1). We find that single crystals of the luminescent organic compound 7-chloro-2-(2′-hydroxyphenyl)imidazo[1,2-a]pyridine (7Cl) 13 show superelasticity with luminescent color changes.

Results

Polymorphs and solid-state luminescence of 7Cl. Solid-state luminescence from 7Cl derivatives is based on an excited-state intramolecular proton transfer (ESIPT) (Fig. 2a) [33][34][35][36], resulting in a large Stokes shift and suppression of aggregation-caused quenching. Note that even small changes in solid-state molecular arrangements can cause a large shift of the emission spectra of 7Cl crystals, due to the formation of an unusual environment-sensitive zwitterionic keto form in the excited state 37. Most ESIPT-luminescent molecules emit from a nonionic keto form via enol-to-keto tautomerization when excited. Indeed, two polymorphic 7Cl crystals, YG and YO, showed yellow-green and orange fluorescence, respectively, under ultraviolet (UV) light (365 nm) (Fig. 2b, c).

Mechanical deformation of a YG and YO crystal.
Superelastic behaviors were observed in YG crystals sheared under an optical microscope equipped with polarizing plates (Supplementary Movies 1 and 2). Under polarized white (PW) light, the mother (α YG ) domain was converted into a daughter (β) domain in association with the deformation of a YG crystal (Supplementary Fig. 11). The deformed crystal spontaneously reverted to its initial shape when the β domain contracted after the shearing force was removed. Interestingly, the different fluorescence, yellow-green and orange, exhibited by the α YG and β domains, respectively, under UV light suggests that the β domain is the YO (β YO ) crystal (Fig. 3a).

Figure 1. A single-color (absorption or emission) material in the mother (α) phase shows biphasic colors superelastically when force is applied (indicated by a black arrow), along with a polymorphic phase transition from the α phase to a stress-induced daughter (β) phase.

The conversion of the α YG domain into the β YO domain in the superelasticity process was also confirmed by in situ fluorescence spectroscopy during the superelastic behavior (Supplementary Fig. 3) and by single-crystal X-ray diffraction (Supplementary Fig. 4, Supplementary Table 3). The superelasticity by conversion of the α YG domain into the β YO domain was quite surprising because the transition between the YG and YO polymorphs was a thermally non-inducible, monotropic one according to differential scanning calorimetry (DSC) measurements (Supplementary Fig. 9). In general shape-memory alloys and some organosuperelastic materials, on the other hand, superelasticity is based on a thermally reversible enantiotropic phase transition and shows a large temperature dependence (Supplementary Fig. 10). The mechanofluorochromism of YG crystals has spontaneous reversibility and controllability of fluorochromic regions originating from the characteristics of superelasticity in single crystals. Based on these features, the ability to sense mechanical force across a threshold was demonstrated by in situ fluorescence spectroscopy during the superelasticity process, measuring changes in the ratio of two solid-state emissions correlating to the abundance ratio of the areas of the α YG and β YO domains (Supplementary Fig. 3). In the case of a YO crystal, a daughter (β YG ) domain, confirmed by single-crystal X-ray diffraction measurements of a YO crystal in the state with coexisting α and β domains (Supplementary Fig. 5, Supplementary Table 4), was generated by shearing the mother (α YO ) domain under both PW and UV light (Supplementary Fig. 11c, d). Here, a small β YG domain was also generated and spontaneously grew, suggesting that YG crystals are thermodynamically more stable than YO crystals at ambient conditions. Such clear and irreversible responses (crystal deformation and emission color change induced by a stimulus, a momentary mechanical force exceeding a threshold) are useful for sensing applications.

Crystallographic studies of as-prepared and mechanically deformed crystals of 7Cl. Molecular movement during the shear-induced phase transition between the two polymorphs was then investigated by X-ray crystallographic study of a superelastically deformed YG crystal (Fig. 3b-d). The interface of the two polymorphic domains was indexed as (1̄20) αYG //(120) βYO (or (120) αYG //(1̄20) βYO ), resulting in a calculated bending angle of 42.1°, which agrees well with the measurement by optical microscope observation (ca. 42°).
The α YG domain transformed into the β YO domain by a 68° (or 61° and 16°) rotation of 7Cl molecules, which also induced their displacive motions of 2.0 and 1.9 Å along the a- and b-axes, respectively, to optimize the herringbone arrangement at the interface (Fig. 3c-e, Supplementary Figs. S6-S8). The relatively large molecular rotation in comparison with previous examples of organosuperelastic materials (e.g., terephthalamide, which rotates ca. 10-32° during thermally or mechanically induced phase transitions) is one possible reason for the monotropic nature of a 7Cl crystal. While anisotropic shear stress can effectively trigger the specific molecular movement required for the crystal transition, thermal activation of molecular rotation may not be practically sufficient to induce the phase transition in this case. The emission color difference between the YG and YO crystals can be attributed to a difference in the arrangement of 7Cl molecules; for example, the π overlap of 7Cl molecules is larger in a YO crystal than in a YG crystal, leading to a red-shifted emission spectrum (Fig. 3d, e, Supplementary Fig. 6).

Mechanical characterization of superelasticity in a YG crystal. Stress-displacement curves recorded in shear tests revealed the effects of temperature and light on the mechanical properties of YG crystals (Fig. 4, Supplementary Figs. 12-17, Supplementary Table 6). The σ f and E S values are ca. 3 and 9 times, respectively, those of the phase transition-based organosuperelasticity in a terephthalamide crystal 15, and ca. 22 and 47 times, respectively, those of the twinning-based organosuperelasticity in a 3,5-difluorobenzoic acid crystal 23 (Supplementary Table 7). More importantly, the σ c values of 1.173 MPa at 0 °C (Fig. 4 short-dashed line, Supplementary Fig. 12) and 1.136 MPa at 100 °C (Fig. 4 solid line, Supplementary Fig. 14) are close to those measured at r.t., demonstrating temperature independence of the superelasticity attributable to a mechanically induced phase transition between monotropic polymorphs, which is different from that between enantiotropic polymorphs in the superelasticity of shape-memory alloys (Fig. 4 inset, Supplementary Fig. 17). In addition, the mechanical parameters under UV light almost correspond to those measured under PW light, showing UV-light independence of the superelasticity (Supplementary Fig. 16).

Discussion

In conclusion, ESIPT-fluorescent crystals of 7Cl, a pure and simple organic molecule, exhibit spontaneously reversible chromism due to a temperature-independent superelasticity, showing biphasic luminescent color switching with an emission intensity dependent on the domain volume and the sensing of small stress; for example, a YG crystal with a cross-sectional area of ca. 19 μm 2 can detect an Aphaenogaster famelica (ca. 3 mg). Organosuperelasticity offers the advantage of reversible and strict crystal deformation in association with molecular rearrangement against applied force. Mechanochromism can be induced at any position(s) at any time by superelasticity, or mechanically well-regulated structural transition. These characteristics differentiate superelastochromism from conventional chromisms; e.g., superelastochromism can detect in situ mechanical stress quantitatively: detection of the moment when mechanical stress is applied and removed, and how much work is done on a crystal specimen. The advantage of ESIPT molecules is their strong emissions with a color sensitive to solid-state molecular arrangement.
Combining these advantages will open up the science of superelastochromism, which can expand the designability and applicability of chromic materials in the future.

Methods

Materials. Crystals of 7Cl were obtained as previously reported 13 (Supplementary Fig. 18).

Microscope observations. An optical microscope (SZ61, Olympus Co.) equipped with polarizing plates and a digital camera was used to record mechanical deformation of crystals using tweezers.

DSC measurements. DSC measurements of 7Cl crystals were carried out using DSC-60 (Shimadzu Co.) and DSC 7020 (Hitachi High-Technologies Co.) instruments under a nitrogen gas flow (65 ml min −1 ). Crystals in YG and YO form with sizes of a few hundred micrometers were prepared for the measurements by slow evaporation of a 7Cl solution in a 1:1 mixture of THF and toluene and in ethanol, respectively, at r.t. Experimental conditions are summarized in Supplementary Table 1.

Single-crystal X-ray structural analysis. Mechanically deformed YG (coexisting state of α YG and β YO domains) and YO crystals (coexisting state of α YO and β YG domains) were prepared in addition to as-prepared 7Cl single crystals in YG and YO forms. To avoid spontaneous dissipation of a β YO domain, the mechanically deformed YG crystal was partially cleaved at the α YG //β YO interfaces. Single-crystal X-ray diffraction measurements of the crystals were performed at 298 K (25 °C) with a CMOS detector (Bruker Photon III C14) with a nitrogen-flow temperature controller, using a rotating anode X-ray source (MoKα radiation, λ = 0.71073 Å). Multi-scan absorption corrections were applied using the SADABS program. The structures were solved by intrinsic phasing methods (SHELXT-2014/5) and refined by full-matrix least-squares calculations on F 2 (SHELXL-2016/6). Non-hydrogen atoms were refined anisotropically; hydrogen atoms were fixed at calculated positions by riding model approximation. With respect to the mechanically deformed crystals, X-ray diffraction patterns were obtained around the interfaces of α YG //β YO and α YO //β YG and were analyzed as twins. Crystal face indexing was carried out using the APEX III Ver.2016.1-0 program package with a twin-resolution program (Supplementary Figs. 4 and 5).

Force measurements. Shear tests were carried out on a universal testing machine. A crystal fixed on a glass base was sheared by a glass jig and observed under an optical microscope equipped with polarizing plates. Light sources: LA-HDF5010 (90 W, HAYASHI-REPIC CO., LTD.) for VIS light and HLV-24UV365-4WPCLTL (365 nm, 0.7 A/3.3 W, CCS Inc.) connected to PD3-5024-4-EI(A) (CCS Inc.) for UV light. The experimental information and a schematic representation of the setup are shown in Supplementary Table 2 and Supplementary Fig. 1, respectively.

Data availability

All the data generated or analyzed during this study are included in this published article (and its Supplementary information files) or available from the authors upon reasonable request. The X-ray crystallographic coordinates for structures reported in this study have been deposited at the Cambridge Crystallographic Data Centre (CCDC), under deposition numbers 1969297-1969298. These data can be obtained free of charge from The Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
3,357.2
2020-04-14T00:00:00.000
[ "Materials Science" ]
Influence of Polymer Reagents in the Drilling Fluids on the Efficiency of Deviated and Horizontal Wells Drilling

Improving the efficiency of the well drilling process in a reservoir is directly related to subsequent well flow rates. Drilling of deviated and horizontal wells is often accompanied by an increase in pressure losses due to flow resistance caused by the small size of the annular space. An important role in such conditions is played by the quality of borehole cleaning and the transport capacity of the drilling fluid, which is directly related to the rheological parameters of the drilling fluid. The main viscosifiers in modern drilling fluids are polymer reagents. They can be of various origin and structure, which determines their features. This work presents investigations that assess the effect of various polymers on the rheological parameters of drilling fluids. The obtained data are evaluated taking into account the main rheological models of fluid flow. However, the process of fluid motion during drilling cannot be described by only one flow model. The paper presents experimentally obtained data on such indicators as plastic viscosity, dynamic shear stress, non-linearity index and consistency coefficient. The study has shown that high molecular weight polymer reagents (e.g., xanthan gum) can give the drilling fluid more pronounced pseudoplastic properties, and combining them with a linear high molecular weight polymer (e.g., polyacrylamide) can reduce the value of the dynamic shear stress. The results of the work show the necessity of using combinations of different types of polymer reagents, which can lead to a synergetic effect. In addition to assessing the effect of various polymer reagents, the paper presents a study on the development of a drilling fluid composition for the specific conditions of an oil field.

Introduction

The process of well construction is a complex and costly one. The efficiency of the entire hydrocarbon production process depends on the applied technological solutions. An analytical review of the scientific and technical literature in the field of complications and accidents during well construction [1][2][3][4], in particular of directional and horizontal wells, shows that preventing the most frequent complications and accidents requires, first of all, a high-quality drilling mud that corresponds to the geological and technological conditions of drilling. In recent decades, classic clay compositions have been replaced by more complex polymer solutions [5][6][7][8]. Clay solutions can be treated with various additives, for example, acrylic polymers, to control the properties of the drilling mud, including its rheological characteristics [9]. Polymer reagents can be divided into two groups according to functional features and points of concentration. The first group includes surface-active substances (surfactants), which can concentrate at the phase boundary and act as emulsifiers, foaming agents or defoamers, dispersants or wetting agents. Reagents of the second group reside in the dispersion medium and affect the technological properties of the drilling fluids. In turn, reagents that are located in the dispersion medium and affect the technological properties of the drilling fluids are divided into organic and inorganic. They can change the structure and properties of the dispersed medium, suppress or activate the effect of surfactants, and can also regulate the concentration of hydroxides and bind undesirable ions. Inorganic polymers include silicates, chromates and polyphosphates [10][11][12].
Organic polymers of natural or synthetic origin are the most developed. Such polymers are obtained by chemical processing of natural macromolecular compounds or by synthesis from low molecular weight substances. M.M. Dardir et al. propose using a synthetic reagent (an ether) in the composition of the drilling fluid, which has high biodegradability and low toxicity [13]. Polysaccharides, lignosulfonates, tannins and humates are natural polymers; cellulose ethers are semi-synthetic; and synthetic polymers include petrochemical derivatives such as ethylene oxide and acrylic polymers [9]. F.T.G. Dias et al. show the possibility of using modified starch in non-aqueous drilling fluids [14]. Polymeric reagents are often improved by additional treatment. Work [15] presents the composition of a saline solution based on attapulgite treated with polyacrylamide, in which this reagent improves both the rheological and the filtration indicators. The authors of this work investigate the effect of polymer reagents of different origin and structure on the rheological parameters of the drilling fluid. The rheological properties of drilling muds affect almost all processes and indicators associated with well drilling; therefore, they are among the most important. In particular, the rheological properties to a large extent determine the degree of cleaning of the well bottomhole from sludge and the cooling of the rock-cutting tool; the transporting ability of the washing fluid; the hydraulic resistance in all parts of the circulation system in the well and the hydrodynamic pressure on its walls and bottomhole during drilling; the amplitude of pressure fluctuations during start-up and stop of the pumps, tripping operations and development of the well with drilling string pacing; the intensity of washing fluid enrichment with sludge, etc. [16][17][18]. Regulation of the solutions' rheological properties, which change continuously in the process of deepening, and maintaining them in accordance with the requirements of drilling is one of the most important tasks of a solution's chemical treatment. The rheological properties of solutions are affected by a large number of variable factors: temperature, pressure, the component composition of the solution and the concentration of each component, the content of highly dispersed and colloidal fractions of clay, the rate of shear deformation and thixotropic effects [19]. The choice of a rheological model, by which the rheological parameters will be determined, is also important. Analysis of the scientific and technical literature on this issue [11,20,21] showed that today there is no rheological model that could give the necessary accuracy of approximation over the entire interval of shear rate changes corresponding to drilling mud circulation in the well. Therefore, the accuracy of the calculated parameters will depend both on the correctly selected model and on its applicability in a particular case. The method developed and presented in the article by R. Wiśniowski, K. Skrzypaszek and T. Małachowski allows selecting a rheological model for the drilling fluid. For the Bingham plastic, Casson, Ostwald-de Waele and Newton models, a linear regression method is proposed to determine the rheological parameters, and for the Herschel-Bulkley, Vom Berg and Hahn-Eyring models, a non-linear regression method is proposed [22]. The hyperbolic model by C. Vipulanandan and A.S. Mohammed predicted the maximum shear stress of the drilling fluid, whereas the other two models (Herschel-Bulkley and Casson) studied in that work assumed infinite shear stress tolerance for the drilling fluid [9].
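As a minimal sketch of the regression approach attributed to [22] above, the snippet below recovers Bingham plastic parameters by ordinary linear regression and Ostwald-de Waele parameters by linear regression in log-log coordinates; the viscometer readings are synthetic stand-ins, not measured data from this paper.

```python
import numpy as np

# Synthetic viscometer data: shear rates (1/s) and shear stresses (Pa).
gamma_dot = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
tau = np.array([4.0, 4.9, 15.3, 21.8, 27.1, 38.9])

# Bingham plastic, tau = tau0 + eta_p * gamma_dot: ordinary linear regression.
eta_p, tau0 = np.polyfit(gamma_dot, tau, 1)
print(f"Bingham: tau0 = {tau0:.2f} Pa, plastic viscosity = {eta_p*1000:.1f} mPa*s")

# Ostwald-de Waele, tau = K * gamma_dot**n: linear regression in log-log space.
n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
K = np.exp(logK)
print(f"Power law: n = {n:.2f}, K = {K:.3f} Pa*s^n")
```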
The main polymer reagents used to treat drilling fluids mostly affect the structural and rheological parameters of the fluids. As mentioned above, they can be of different origin and structure, which will affect the rheological parameters of the drilling fluid and the quality of the entire well drilling process. Table 1 shows various compositions of polymer drilling fluids and the results of their investigations.

The basic method for the low-temperature rheology control of WBDF is proposed as follows. First, a rheology modifier that increases the rheology at a high temperature but does not affect it at a low temperature can help to achieve a flat rheology of WBDF over a wide temperature range. Second, optimization of the type, content, size, and shape of bentonites and weight materials is important. Third, shortening the chain length and optimizing the molecular structure of polymers are required. Based on this method, two high-performance WBDFs were optimized; their rheological properties were more stable over a wide temperature range from 4 to 75 °C [26].

Table 1 shows that the use of polymer reagents improves the quality of clay suspensions and allows adjusting the rheological and filtration parameters of drilling fluids. The aim of this work is to study the effect of various polymeric reagents on the rheological parameters of drilling fluids and the equivalent circulation density. These indicators determine the future permeability and porosity parameters of the reservoir. Therefore, the study of this issue is relevant to improving drilling efficiency. Also relevant is the consideration and control of the equivalent circulation density (ECD), which takes into account not only the hydrostatic pressure of the drilling mud column but also the friction pressure losses, as a result of which the hydraulic pressure in the well becomes greater than the hydrostatic one. To calculate the pressure that a solution with equivalent circulation density exerts on the borehole walls, it is necessary to add the pressure losses in the annular space during circulation in the interval from the wellhead to a given depth to the hydrostatic pressure of the drilling mud column at the same depth. The resulting pressure will be equal to the hydrostatic pressure of a solution column with the equivalent circulation density [27]; a minimal numerical sketch of this definition is given below. This problem is particularly relevant for the horizontal section of the wellbore. Often the geometric dimensions of the annular space in the horizontal section of the wellbore are small. The magnitude of pressure losses on local resistances directly depends on the geometry of the annular space, and the smaller it is, the higher the hydrodynamic pressure. The quality of bottomhole cleaning from sludge and the transporting ability of the solution, which is directly related to the rheological parameters of the drilling fluid, play an important role in such conditions. Poor cleaning will lead to borehole sludging and annular space reduction, which in turn will cause an increase in pressure losses and an increase in the ECD [26]. The most efficient way to control the ECD is the regulation of the rheological parameters of the washing fluid [20]. By changing the rheological parameters of the washing liquid, one can adjust the ECD by decreasing or increasing it. Drilling muds are complex systems that change their rheological parameters and follow different flow laws depending on the dispersion medium of the solution, the amount of solid phase, the presence of polymeric and lubricating additives, temperature, pressure, and the shear strain rate.
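A minimal numerical sketch of the ECD definition given above (hydrostatic pressure of the mud column plus annular friction losses, re-expressed as an equivalent density at the depth of interest); the annular pressure loss and depth are assumed illustrative values.

```python
G = 9.81  # acceleration of gravity, m/s^2

def equivalent_circulating_density(rho_mud, depth_tvd, dp_annulus):
    """ECD at a given true vertical depth.

    rho_mud     -- static mud density, kg/m^3
    depth_tvd   -- true vertical depth, m
    dp_annulus  -- cumulative annular friction pressure loss
                   from the wellhead to that depth, Pa
    """
    return rho_mud + dp_annulus / (G * depth_tvd)

# Illustrative example: 1140 kg/m^3 mud, 1.5 MPa annular losses at 1800 m TVD.
ecd = equivalent_circulating_density(1140.0, 1800.0, 1.5e6)
print(f"ECD = {ecd:.0f} kg/m^3")  # about 1225 kg/m^3
```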
There are several rheological models that are more or less suitable for describing the circulation of the drilling mud in the well. However, all of them allow obtaining rheological parameters with the necessary accuracy only on a certain interval of shear strain rates, defined for each model. A wrong choice of the mathematical model for determining the rheological parameters of the drilling mud can lead to an error in calculating the equivalent circulation density. Therefore, for a more accurate prediction of the ECD, it is necessary to understand which law is suitable for the movement of the liquid. Control and analysis of the rheological parameters of drilling muds allows accurately adjusting the ECD and avoiding complications in the construction process of horizontal wells. Studying the influence of various polymeric reagents on the rheological properties of drilling fluids, and monitoring and analyzing these parameters, will allow accurately adjusting the ECD, avoiding complications and increasing the efficiency of the construction process of deviated and horizontal wells. The aim of this work is to study the effect of various polymer reagents on the structural and rheological parameters of the drilling fluid, as well as to justify and develop a composition of the drilling mud that increases the efficiency of drilling a horizontal section of the wellbore, taking into account the rheological parameters of the drilling mud and the equivalent circulation density.

Drilling Fluid Rheology

Drilling fluids used in drilling oil and gas wells exhibit non-Newtonian behavior: both shear thinning and shear thickening. It is usually believed that in such solutions there is a limiting shear stress, and thixotropic effects are present [12]. The first rheological model used to describe the rheological behavior of clay suspensions was the Bingham-Shvedov, or visco-plastic, model. It describes substances that, at stresses below a critical value τ0, called the ultimate shear stress or dynamic shear stress, do not deform, but at higher stresses flow like viscous liquids [9,28]. Most drilling fluids are also characterized by the fact that their rheological properties depend both on the magnitude of the shear stress and on its duration. If viscosity is determined not only by the shear rate but also by the shear duration, then such substances are called thixotropic. In thixotropic substances, with an increase in the duration of the load, a decrease in viscosity is observed. After the end of the deformation process and a final rest time, the substance regains its initial state. Despite the fact that the Bingham-Shvedov model can be used to describe the behavior of drilling muds flowing inside the pipe at high shear rates, this model cannot be applied to all types of fluids. With the appearance of various methods for processing water-based clay muds and with the development of non-aqueous drilling fluids, the limitations of the Bingham model became increasingly apparent [12,20]. The behavior of such drilling muds lies between the boundaries described by the Newtonian model and the Bingham-Shvedov model. This behavior is called pseudoplastic. The relation between the stresses arising in a fluid and the shear rate for pseudoplastic fluids is described by the Ostwald-de Waele power-law model. Like the Bingham-Shvedov model, the Ostwald-de Waele model does not provide an absolutely accurate characteristic of the drilling mud. However, its use is preferable if the drilling mud is treated with polymers or is completely clayless and polymeric.
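For reference, the constitutive laws discussed in this subsection can be written compactly as follows, where τ is the shear stress, γ̇ the shear rate, τ0 the dynamic (ultimate) shear stress, η_p the plastic viscosity, K the consistency coefficient, and n the non-linearity index; the third relation is the three-parameter combination discussed below:

```latex
\tau = \tau_0 + \eta_p \,\dot{\gamma}
  \quad \text{(Bingham--Shvedov, for } \tau > \tau_0\text{)}, \\
\tau = K \,\dot{\gamma}^{\,n}
  \quad \text{(Ostwald--de Waele, } n < 1 \text{ for pseudoplastic fluids)}, \\
\tau = \tau_0 + K \,\dot{\gamma}^{\,n}
  \quad \text{(Herschel--Bulkley)}.
```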
If the fluid flow obeys a power-law model, then the flow and viscosity curves in logarithmic coordinates will be straight lines. In this model, the drilling mud is a pseudoplastic fluid that does not have an ultimate dynamic shear stress. Thus, the fluid begins to flow immediately after a shear load is applied to it [22]. Each of the models mentioned above is applicable only in individual cases. The Bingham-Shvedov model includes an ultimate shear stress, but does not provide an accurate description of the solution's behavior at low deformation rates. The power-law model, on the contrary, more accurately describes the behavior of the solution at low shear rates, but due to the absence of an ultimate shear stress in the model, it cannot describe the behavior of the solution at extremely low shear rates close to zero. Therefore, the behavior of typical drilling muds lies between a viscoplastic and a pseudoplastic model. There are also three-parameter models. The Herschel-Bulkley model is obtained by combining the viscoplastic model with the Ostwald-de Waele model and takes into account the dynamic shear stress [9]. This model is suitable for describing some low-solids drilling muds and solutions treated with polymer reagents. It covers a wider range of shear rates. However, determining the rheological parameters of solutions for this model and integrating the equations of their motion is quite difficult. It seems more convenient, and of acceptable accuracy, to use the simple Bingham and Ostwald-de Waele models with different rheological parameters for various intervals of shear rates. Often, in calculations, a pseudoplastic model is used with different values of the consistency and non-linearity parameters in the shear rate ranges corresponding to the flow in the annular space of the well and in the drilling string. In the drilling bit nozzles, the drilling mud can be considered a Newtonian fluid.

Justification of the Choice of the Research Object

In order to assess the effect of polymer reagents, it is necessary to define the object of research. In this paper, it is a drilling fluid for drilling a well in an oil reservoir. This drilling fluid is necessary for drilling a horizontal section of an oil wellbore in the reservoir of an oil field located in the Republic of Tatarstan (Russia). Properties of the experimental well are presented in Table 2. Selection of the optimal rheological properties of the solution is a difficult task. Given the experience of drilling horizontal wells, the values of the following characteristics should be considered:

• plastic viscosity in the conditions of the specific field;

• dynamic shear stress (DSS), which influences the hydrotransport of sludge to the surface by laminar flow and the prevention of weighting agent precipitation in the surface circulation system;

• relative viscosity (RV), which is important for timely control of viscosity at the drilling site;

• static shear stress (SSS), which characterizes the structural and mechanical properties of the solution; usually it is enough that SSS10 = 5-6 Pa and SSS1 = 2-3 Pa, while the thixotropy coefficient should be in the range of 1-1.5;

• water loss (filtration) (F), which during drilling in the formation must be minimal to preserve the permeability of the reservoir.
The complete absence of filtration is also unacceptable, since in this case a filter cake, which will maintain the stability of the reservoir and reduce the coefficient of friction, cannot form on the walls of the well; this is very important when drilling a horizontal section;

• thickness of the clay cake (k); the cake should have low friction properties;

• the value of pH, which defines the performance of stabilizer chemicals. The effect is stronger in the presence of inorganic inhibitors and mineralizers. Most stabilizer and inhibitor reagents are anionic or amphoteric; therefore, for their effective operation, it is necessary to maintain an alkaline environment in the drilling mud.

Thus, taking into account the geological conditions and drilling experience in this field, as well as recommendations from the scientific and technical literature, the necessary technological properties of the developed solution are presented in Table 3.

Methods and Equipment

For research and modification, one of the basic formulations of a polymer-bentonite solution with a density of 1140 kg/m 3, used for drilling a horizontal section of the wellbore in a reservoir, was selected. The composition of the solution is presented in Table 4. The research considers studying the effect of the polymer reagent and modifying the basic composition of the washing fluid by replacing the polymer reagent with another one, or with a combination of others, differing from carboxymethyl cellulose (CMC) in molecular weight, structure and properties, in order to improve the rheological characteristics while maintaining the other technological parameters within the required limits. To find the optimal reagent or combination of reagents, it is necessary to study the rheological parameters of the drilling mud. The composition in which the rheological parameters are optimal (for example, low values of the DSS and the non-linearity index) can be considered the most effective and appropriate for use in these conditions. The effect exerted by a polymer reagent depends primarily on its type, structure and molecular weight [16,20]. Therefore, various polymeric reagents were selected for the study considering their origin and structure:

• with an average molecular weight (CMC, polyanionic cellulose (PAC)) and a high one (xanthan, polyacrylamide (PAA));

• synthetic (PAA) and natural (CMC, PAC, xanthan);

• anionic (CMC, PAC, xanthan) and manifesting amphoteric properties (PAA);

• with a linear structure (CMC, PAC, PAA) and a branched one (xanthan).

Rheological parameters of drilling muds depend not only on the component composition, but also on the shear rate. Therefore, for a deeper analysis of the rheological characteristics, their comparison and the selection of the optimal composition, it is necessary to measure the rheological parameters over as wide a range of shear rates as possible. Different rotation speeds of the rotational viscometer correspond to different deformation rates of the washing fluid: from low speeds at the beginning of circulation to very high speeds during passage through the nozzles of the drilling bit. Measurements were carried out in the following ranges of viscometer rotor speeds: 600-400, 400-300, 300-200, 200-100, 100-0 rpm [29]. For each range, rheological parameters were determined using the viscoplastic and power-law solution models; a minimal sketch of this per-range determination is given below. For the research, a Rheotest RN 4.1 rotational viscometer manufactured by RHEOTEST Medingen GmbH, Germany, was chosen, which automatically registers viscosity and flow curves and allows a deep rheological description of the test medium in a wide range of stresses and shear rates. Figure 1 shows a general view of the Rheotest RN 4.1 viscometer.
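A minimal sketch of the per-range determination described above: for each interval of rotor speeds, two (speed, stress) readings suffice to fix both the viscoplastic pair (τ0, η) and the power-law pair (K, n). The rpm-to-shear-rate conversion depends on the instrument geometry; the factor below (1.7023 s⁻¹ per rpm, a common coaxial-cylinder approximation) and the readings are assumptions, not data from this paper.

```python
import math

RPM_TO_SHEAR = 1.7023  # approximate shear rate (1/s) per rpm for a standard
                       # coaxial-cylinder geometry; instrument-specific in general

def range_parameters(rpm1, tau1, rpm2, tau2):
    """Bingham (tau0, eta) and power-law (K, n) parameters from two readings
    bounding one speed range, e.g., 300 and 600 rpm."""
    g1, g2 = rpm1 * RPM_TO_SHEAR, rpm2 * RPM_TO_SHEAR
    eta = (tau2 - tau1) / (g2 - g1)                 # plastic viscosity, Pa*s
    tau0 = tau1 - eta * g1                          # dynamic shear stress, Pa
    n = math.log(tau2 / tau1) / math.log(g2 / g1)   # non-linearity index
    K = tau1 / g1**n                                # consistency coefficient, Pa*s^n
    return tau0, eta, K, n

# Hypothetical readings for the 300-600 rpm range:
tau0, eta, K, n = range_parameters(300, 25.0, 600, 40.0)
print(f"tau0 = {tau0:.1f} Pa, eta = {eta*1000:.1f} mPa*s, "
      f"K = {K:.3f} Pa*s^n, n = {n:.2f}")
```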
The measuring system with a cylinder of the viscometer is intended for studying the rheological characteristics of substances with a viscosity of up to 100,000 mPa·s. It consists of a stationary measuring cup and a cylindrical rotor, which is placed in this cup. To study the behavior of liquids at different temperatures, thermostated vessels are available.

Analysis of the Obtained Rheological Characteristics and Selection of the Optimal Drilling Mud Formulation

The rheological parameters of the solution with the addition of different polymer reagents were measured. The study determined the plastic viscosity (η), the DSS (τ0), the non-linearity index (n) and the consistency coefficient (K). The results are presented in Table 5. First was the basic composition, which is a polymer-bentonite solution. Content of polymer reagent: high-viscosity CMC, 4 g/L; low-viscosity CMC, 1 g/L. The magnitudes of the DSS and plastic viscosity are in the required range. With an increase in the shear rate, the plastic viscosity of the solution decreases, and the non-linearity index n becomes higher, which indicates the "shear thinning" effect. The experience of well construction and research into the processes occurring during circulation of a drilling mud show that it is most advantageous to use, as drilling fluids, pseudoplastic fluids that have a non-linearity index n < 0.3 [18,21]. Circulation of such a solution in the well provides effective removal of sludge, and the resulting hydraulic resistance is minimal, which is very important when drilling into the reservoir with a horizontal wellbore. The non-linearity index of solution 1 has a rather low value, n = 0.37.

The second analyzed composition was a solution with the addition of another cellulose ether, polyanionic cellulose. Like CMC, it comes in high-viscosity and low-viscosity grades. This polymer has a good inhibitory ability in clay rocks. It is widely used in the treatment of clay and clayless drilling muds. Content of polymer reagent: high-viscosity PAC, 4 g/L; low-viscosity PAC, 1 g/L. The rheological parameters of the solution with the addition of PAC were similar to the parameters of solution 1. All of them are in the required range. The similarity of the results is explained by the fact that these polymeric reagents have a similar linear structure, a slightly different molar mass, and both are anionic polymers.

The third analyzed composition was a solution with the addition of xanthan gum as the polymer reagent. Content of xanthan: 4 g/L. At the same concentration of the polymer reagent, the viscosity of the solution was higher. The values of plastic viscosity and dynamic shear stress were higher than the required values. However, the non-linearity index turned out to be 0.23, which indicates that this polymer gave the washing liquid more tangible pseudoplastic properties. These properties are also manifested by a more intensive decrease in plastic viscosity with an increase in shear rate. This is explained by the fact that xanthan gum differs strongly in structure from cellulose ethers. Firstly, it has a large molecular weight and a branched structure, not a linear one as in CMC and PAC. The flexibility of macromolecules in a linear polymer is always higher than that of branched ones, because branched polymers have a large number of short and frequently located side chains, which increase the rigidity of the macromolecule due to the reduced possibility of rotation of individual units relative to each other. Secondly, the presence of a larger number of functional groups makes it less mobile due to possible interactions.
Such a solution will have higher carrying and cleansing abilities, but the hydraulic resistance during circulation will be higher. The fourth analyzed composition was a solution with the addition of PAA as the polymer reagent (PAA content: 1 g/L). This composition has values of plastic viscosity and non-linearity index similar to those of solutions 1 and 2, but its DSS value was significantly lower. This feature can be explained in terms of the structure of the polyacrylamide macromolecule. The functional groups in polyacrylamide are attached directly to the main chain and are not connected to cyclic groups, as in starch or cellulose ethers. This makes the polyacrylamide macromolecule very flexible; therefore the resistance arising at the initiation of washing-liquid flow is smaller and, accordingly, the initial shear stress is significantly lower.

Analysis of the obtained data revealed that the addition of PAA leads to a decrease in the initial shear stress, while the addition of xanthan gum gives the washing liquid more tangible pseudoplastic properties. Both of these qualities are favorable from a technological point of view. Pseudoplastic properties allow the solution to have a high viscosity in the annulus and a low viscosity when the fluid flows in the pipe string and bit nozzles. A lower DSS value reduces the amplitude of pressure fluctuations during start-up and stopping of the pumps and during tripping operations, as well as the likelihood of the formation of stagnation zones with accumulation of cuttings in them. Therefore, the fifth analyzed composition was a solution with the simultaneous addition of PAA and xanthan gum (reagent content: PAA, 0.5 g/L; xanthan, 2 g/L). Thus, in solution 5 it was possible to obtain simultaneously lower DSS values and more tangible pseudoplastic properties compared to the basic formulation.

For a visual representation of the differences in rheological characteristics, they are presented as flow and viscosity curves (Figures 2 and 3). The viscometer software allows displaying up to four curves on one graph at a time; therefore the base composition with CMC, solution 3 with xanthan, solution 4 with PAA and solution 5 with PAA and xanthan were chosen for comparison. The graphs show that solution 5, which contains both PAA and xanthan, will create the least resistance during circulation. Solution 4 with PAA has the lowest initial shear stress, and solution 3 with xanthan, owing to its higher molecular weight and branched structure, thickens the solution more strongly than CMC at the same concentration and shows more tangible pseudoplastic properties.

In addition to assessing the effect of polymer reagents on the rheological parameters of the drilling fluid, the aim of the work was to develop a new composition. The SSS of drilling mud 5 was measured using the VSN-3 device: SSS1 = 2.3 Pa and SSS10 = 5.7 Pa. The remaining technological properties of the considered formulation were also measured and checked for compliance with the requirements specified in Table 2. The relative viscosity, according to the results of three measurements, was 34 s, which corresponds to the requirements. Filtration was measured using a filter press: over 30 min at an excess pressure of 0.7 MPa [29], the amount of filtered liquid was 5 cm³. The filter cake formed was strong and thin (about 1 mm). The coefficient of friction of the filter cake was determined on a KTK-2 device: the angle was 4 degrees and its tangent 0.069, which is a low value. This means that the resulting filter cake has good lubricity and will help reduce friction at the "steel-rock" interface.
This is especially important when drilling directional and horizontal wells. The pH level is 9, which indicates an alkaline medium in this drilling mud, necessary for the satisfactory operation of the polymer and other reagents. The technological properties of the developed drilling mud composition are presented in Table 6.

The conducted study has shown that various polymer reagents have different effects on the rheological properties of the washing fluid. This is due to the difference in their structure and in the functionality of their elementary units. Branched polymer reagents give the washing fluid more tangible pseudoplastic properties. Pseudoplasticity is a necessary property of a modern drilling mud, since it is necessary to reduce hydraulic resistance at high deformation rates (in the bit nozzles and during fluid movement in the pipes) and to increase the holding and transporting ability of the fluid at the low shear rates prevailing in the annulus. Linear polymers, in which the functional groups are attached to the main chain, have greater flexibility and create less resistance at the initiation of movement, which is very important during start-up of the pumps and resumption of circulation after tripping operations.

The performance of polymer reagents largely depends on the mineralization of the dispersion medium and of the formation fluids in contact with the drilling mud. The presence in polymer reagents of functional groups capable of interaction and dissociation creates the need to control this parameter. High mineralization can suppress the electrostatic repulsion between functional groups, which is what keeps the main polymer chain in an extended state; as a result, the polymer reagent collapses into globules. The shape of the polymer-reagent chain affects its performance: reagents rolled into a coil or spiral are less effective. As a result of combining polymer reagents with various properties, it was possible to obtain a washing-liquid composition with optimal rheological characteristics (Table 7).

Choice of Mathematical Model

None of the mathematical models can give an accurate description of the rheological characteristics over the entire range of shear strain rates, since the behavior of a polymer solution varies greatly at different shear rates [9,22,28]. The behavior of real drilling muds lies somewhere between the Bingham-Shvedov viscoplastic fluid model and the Ostwald-de Waele power-law model. Therefore, the accuracy of these mathematical models differs at different shear rates. To calculate the ECD, one needs to know the rheological parameters of the drilling mud (the DSS and plastic viscosity according to the viscoplastic model, or the coefficients K and n according to the power-law model) at the low shear rates that correspond to the flow of the drilling mud in the annular space. Typically, the shear rates of a solution in the drilling string approximately correspond to those prevailing in a rotational viscometer at rotational frequencies of 300-600 min⁻¹, and the shear rates of a solution in the annular space correspond to those at rotational frequencies of 6-100 min⁻¹. The viscometer software allows obtaining flow curves, approximating the obtained data by various mathematical models and comparing the difference between the mathematical model and the real measurements over the entire range of shear rates of a measurement. Therefore, to select the more accurate model and, accordingly, to calculate the ECD more correctly, it is necessary to compare the accuracy of the models at low shear rates of up to 100 min⁻¹.
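For reference, the two models being compared are commonly written as follows, in the notation used throughout the paper ($\tau_0$ the DSS, $\eta$ the plastic viscosity, $K$ the consistency coefficient, $n$ the non-linearity index):

$$\text{Bingham-Shvedov:}\quad \tau = \tau_0 + \eta\,\dot\gamma, \qquad\qquad \text{Ostwald-de Waele:}\quad \tau = K\,\dot\gamma^{\,n}.$$

For $n < 1$ the apparent viscosity of the power-law fluid, $K\dot\gamma^{\,n-1}$, falls with increasing shear rate, which is exactly the shear-thinning behavior discussed above.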
To compare the accuracy of the models, solution 5 with the optimal rheological parameters was chosen. The measurement results are presented in Table 8. At high shear rates, which correspond to viscometer rotor frequencies of 300-600 min⁻¹, the accuracies of the two mathematical models are comparable and both can be used to calculate hydraulic losses (during circulation inside the pipe string or when the solution passes through the bit nozzles). However, at low shear rates, which correspond to rotational frequencies of 0-100 min⁻¹ (solution flow in the annular space), the accuracy of the power-law model is higher. Therefore, to calculate the ECD it is most expedient to use the rheological parameters of the washing liquid according to the power-law model, namely the consistency coefficient K and the non-linearity index n.

Calculation of Equivalent Circulation Density (ECD)

The equivalent circulation density of the drilling mud takes into account not only the hydrostatic pressure of the drilling mud column but also the pressure loss during circulation of the washing fluid through the annular space. The pressure loss in the annular space is the sum of the pressure loss due to friction and the pressure loss at local obstacles. The formulas used to calculate the friction pressure loss in the annular space depend on the flow regime; below, the pressure loss during circulation of a pseudoplastic fluid in an annular space is calculated. Upon transition to a turbulent flow regime, the frictional pressure losses begin to increase significantly. Therefore, the flow regime must first be determined by comparing the actual flow rate of the drilling mud, Q, with the critical value Qcr at which the flow in the annular space becomes turbulent; this critical flow rate depends, in particular, on the drilling mud non-linearity index n and is calculated by the formula given in [30]. The obtained data show that when selecting a structuring agent for a drilling fluid, it is necessary to take into account the rheological parameters of the resulting fluid. Thus, replacing the CMC reagent with a combination of PAA and xanthan can reduce the ECD by 4%.

Conclusions and Recommendations

1. The conducted laboratory study has shown that high-molecular-weight polymer reagents (e.g., xanthan gum) can give flushing liquids tangible pseudoplastic properties, and their combination with a linear high-molecular-weight polymer (e.g., PAA) can reduce the DSS value. Thus, when selecting polymer reagents, it is necessary to take into account their structure, molecular weight and properties; a combination of different types of reagents can lead to a synergistic effect.
2. The optimal composition of the drilling mud for the conditions of an oil field located in the Republic of Tatarstan (Russia) includes xanthan gum and PAA as polymer reagents.
3. The calculation of the ECD showed the need to consider the rheological parameters when selecting polymer reagents for drilling fluids. This is especially important when drilling productive formations, where an increase in the density of the drilling fluid can lead to more severe contamination of the bottomhole zone of the formation and, as a result, to a decrease in further well production.

Nomenclature: Fanning friction factor; n, drilling mud non-linearity index; Qcr, critical flow rate at which the flow becomes turbulent, m³/s; d, well diameter, m; drilling string external diameter, m; drilling mud density, kg/m³; K, consistency coefficient; Reynolds number for a power-law fluid; coefficient of hydraulic resistance; pressure losses in the annular space, Pa; length of the well, m; local pressure losses at string locks in the annular space, Pa; ′, average pipe length in a given drilling string section, m; ′′, section length of the drilling pipes with the same diameter, m; external diameter of the locking joint, m; upward flow velocity of the solution in the annular space, m/s; equivalent circulation density, kg/m³; g, acceleration of gravity, m/s²; H, vertical well depth, m.
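To make the ECD workflow concrete, here is a minimal sketch using the power-law parameters and the slot-flow approximation of the annulus that is common in drilling-hydraulics texts. All input numbers are hypothetical, and the critical Reynolds number correlation Re_cr = 3470 - 1370*n is one common empirical choice, not necessarily the formula of Ref. [30].

```python
import numpy as np

# Hypothetical inputs (SI units)
rho    = 1140.0         # drilling mud density, kg/m^3
K      = 0.35           # consistency coefficient, Pa*s^n
n      = 0.45           # non-linearity index
Q      = 0.020          # flow rate, m^3/s
d2, d1 = 0.216, 0.127   # well diameter / drill-pipe outer diameter, m
L      = 2500.0         # annulus (measured) length, m
H      = 1800.0         # true vertical depth, m
g      = 9.81

area = np.pi / 4 * (d2**2 - d1**2)
v = Q / area                          # mean upward annular velocity, m/s

# Generalized Reynolds number for a power-law fluid, annulus-as-slot form
Re = rho * v**(2 - n) * (d2 - d1)**n / (K * 12**(n - 1) * ((2*n + 1) / (3*n))**n)
Re_cr = 3470 - 1370 * n               # common empirical threshold (assumed)
assert Re < Re_cr, "flow is turbulent; the laminar slot formula does not apply"

# Laminar frictional pressure loss of a power-law fluid in the annulus
dP = 4 * K * L / (d2 - d1) * (12 * v / (d2 - d1) * (2*n + 1) / (3*n))**n  # Pa

ecd = rho + dP / (g * H)              # equivalent circulation density, kg/m^3
print(f"v = {v:.2f} m/s, Re = {Re:.0f} (Re_cr = {Re_cr:.0f})")
print(f"annular friction loss = {dP/1e5:.2f} bar, ECD = {ecd:.0f} kg/m^3")
```

With these illustrative inputs the annular friction loss is a few bar, raising the ECD by a couple of tens of kg/m³ over the static density, which shows how the choice of K and n feeds directly into the ECD.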
Firm Size As A Moderation Factor: Testing The Relationship of Capital Structure With Dividend Policy

This study examines firm size as a variable that can strengthen or weaken the relationship between debt policy and dividend policy. The research uses a sample of 26 companies out of a population of 65 basic-industry and chemical manufacturing companies listed on the Indonesia Stock Exchange in 2011-2015, determined by purposive sampling. The variables observed include debt policy as the independent variable, dividend policy as the dependent variable, and firm size as a moderating variable. The analysis tool is moderated regression analysis (MRA). The results prove that the Debt to Asset Ratio (DAR) has a negative and insignificant effect on the Dividend Payout Ratio (DPR), and that firm size negatively and significantly moderates the relationship between capital structure and dividend policy.

Introduction

The capital market is an alternative long-term investment option, where one of the products that most often attracts investors' attention is stocks, because of investors' expectation of stock returns in the form of capital gains and dividends. Dividends are the return usually expected by investors oriented toward long-term returns, and their distribution is decided at the general meeting of shareholders (GMS). According to Suad Husnan and Enny Pudjiastuti (2002: 333), dividend policy is a policy that concerns the use of profit, which is the right of shareholders; basically, the profit can be distributed as dividends or retained to be reinvested. Meanwhile, according to Brigham and Houston (2001), the optimal dividend policy is one that creates a balance between current dividends and future growth so as to maximize the company's stock price. Several theories are relevant to dividend policy, including the bird-in-the-hand theory, which explains that investors who prefer dividends hold the view that dividends carry less risk and a more certain rate of return than capital gains. Agency theory, in turn, explains that investors generally want a relatively stable (steady-stream) dividend distribution that increases in the future, because dividend stability can increase investor confidence in the company and support the company's performance prospects (profitability), thereby reducing investors' uncertainty about investing funds in the company (Philippatos and Sihler, 1991, in Ambarwati, 2012).

Many factors influence the dividend ratio. According to Hanafi (2004), factors that can influence dividend policy are investment, profitability, liquidity, access to markets, income stability, and restrictions. According to Levy and Sarnat (1990), in Van Horn (1986), the factors that influence dividend decisions include profitability, liquidity position, debt capacity, capital structure, prohibitions in debt covenants, the level of company expansion, the level of company profit, firm stability, the ability to enter the capital market, controlling groups, the shareholder's position as a taxpayer, taxes on profits, and the inflation rate. Based on this explanation, it is clear that dividend policy is influenced by many factors, but several previous empirical researchers have related dividend policy to capital structure. Research by Dewi et al. (2012) proves that capital structure has a negative effect on dividend policy, while Siswantini (2014) and Sulistyowati et al. (2013) prove that capital structure has a positive effect on dividend policy.
A very different result was stated by Gita and Rohmawati (2010), who proved that capital structure does not have a significant effect on dividend policy. Based on the explanation above, it is clear that there is still a gap in the relationship between capital structure and dividend policy. Thus the relationship cannot be said to be an established one; it still creates ambiguity (confusion). The gap is believed to be due to the presence of other variables that influence this relationship. A search of several references related to previous studies points to firm size as a variable that can affect the relationship between capital structure and dividend policy. Research by Musiega et al. (2013) proves that size is able to mediate the relationship between the leverage variable and dividend policy. Considering the existing research gaps, the researchers are interested in conducting research that differs from the previous studies, namely by adding firm size as a moderating variable. Based on the research problems that have been stated, the objectives of this study are: Is there an effect of capital structure on dividend policy? Is there an effect of profitability on dividend policy? Is the effect of capital structure and profitability on dividend policy moderated by firm size?

Literature Study

Signalling Theory
Signalling theory (Bhattacharya, 1972, in Irham Fahmi, 2013) emphasizes the importance of the information issued by the company for investment decisions outside the company. According to signalling theory in Suharli (2006), management pays dividends to signal the company's success in posting profits. Furthermore, according to Weston and Copeland (1997), in Nursandari (2013), the level of company profit is a basic element of dividend policy, so financial ratio analysis affects dividend policy.

Agency Theory
Jensen and Meckling (1976), in Najmudin (2011), describe the relationship arising from the separation of ownership and control of the company, portraying the conflict between the principal and the agent. They state that agency costs are the sum of (a) expenses for monitoring by the owner (principal), (b) expenses for bonding by the agent, and (c) other costs related to company control. This separation creates a conflict of interest between shareholders and managers.

Pecking Order Theory
The pecking order theory, in Najmudin (2011), starts from the premise that companies use internal funding when it is available and choose to issue debt over equity when external funds are needed; the issuance of new shares is a last resort. Maidah (2016) explains that if the use of internal funds is insufficient, the second alternative is to use debt. This means that the larger the size of the company, the higher the number of assets it has, and the more it tends to use external funds, along with the increasing growth of the company.

Dividend Policy
A dividend is the part of profit that is the right of shareholders with respect to their ownership in a company. According to Halim (2015: 7), a dividend is the distribution of profits given by the company that issued the shares out of the profits generated by the company. Generally, dividends are an attraction for shareholders with a long-term orientation.
There are different types of dividends; according to Musthikawati (2010), in Nursandari (2013), these are cash dividends, property dividends, scrip dividends and liquidating dividends. Dividend policy, according to Sudana (2011: 164), relates to determining the percentage of net profit after tax that is distributed as dividends to shareholders. According to Mulyawan (2015), dividend policy is the decision to distribute the profit earned by the company to shareholders as dividends, or to hold it in the form of retained earnings to be used to finance future investment.

Capital Structure
According to Wild (2005), in Mulyawan (2015), capital structure is the combination of long-term debt and securities used by a company to finance its operational activities. For Irham Fahmi (2016), capital structure is a description of the company's financial proportions, namely between the capital owned in the form of long-term liabilities and equity (shareholders' equity), which are the sources of financing of a company. Raharjaputra (2011: 296) defines capital structure as the proportion between long-term debt and equity used to finance the company's investment (operating assets). Capital structure is measured using a capital-structure ratio known as the leverage ratio (Nuswandari, 2013). According to Kasmir (2012: 151), the leverage ratio is a ratio used to measure the extent to which the company's assets are financed with debt.

Firm Size
The size of the company is its scale as seen from the size of its equity, firm value, or total asset value (Riyanto, 2011, in Agustini et al., 2015). According to Sudarsi (2002), in Prasetia et al. (2014), company size is the natural log of total assets. Meanwhile, Huang (2002), in Damayanti (2013), states that company size is a reflection of the size of the company as measured by the natural logarithm of sales.

Hypothesis Development

Effect of Capital Structure on Dividend Policy
Jensen et al. (1992), supported by Megginson (1997) and Chen and Steinler (1999), in Dewi (2008), explain that debt policy negatively affects dividend policy. Companies with high levels of debt will try to reduce the agency cost of debt by reducing their debt. Debt reduction can be achieved by financing investments with internal sources of funds, so that shareholders will give up their dividends to finance their investments. The same point was made by Kalay (1982), in Suhadak and Darmawan (2011): companies that use high leverage will reduce or refrain from increasing their dividend payments (Suhadak and Darmawan, 2011: 170). According to Jensen et al. (1992), in Dewi (2008), debt policy has a negative effect on dividend policy because using too much debt causes a decrease in dividends, as most of the profits are allocated as reserves for debt settlement. Agency theory, in Dewi and Sedana (2012), suggests a negative relationship between capital structure and dividend policy, where debt is a way to reduce agency conflicts: a company that has debt will be forced to pay out available cash to pay interest on and repay the debt before deciding on dividends. The relationship between capital structure and dividend policy is empirically demonstrated by Dewi (2011), who proves that capital structure has a significant negative effect on dividend policy. These results are supported by several subsequent researchers, namely Sumiadji (2011), Dewi and Sedana (2012), Vo and Nguyen (2014), Mandala (2014), and Pramana et al. (2015), who in essence explain that, to reduce agency problems,
part of the company's funds must be set aside to pay installments and interest on debt, so there is a tendency to lower the priority of increasing the dividend ratio. Based on the explanation above, research hypothesis 1 (H1) can be formulated, namely:
H1: The higher the capital structure, the smaller the dividend ratio.

The Effect of Capital Structure on Dividend Policy Moderated by Firm Size
Sjahrial (2008) explains that larger companies enjoy greater confidence in obtaining sources of funds, so it is easier for them to obtain credit from outside parties. Empirically, the relationship between firm size and capital structure is explained by Sunarya (2013), who proves that firm size strengthens the relationship between capital structure and dividend policy; there is a tendency for larger companies to use larger loan amounts than smaller companies. Research results consistent with Sunarya (2013) are presented by Elsa (2012), Joshua and Komang (2013), Tariq (2015), Maidah (2016), Karadeniz et al. (2011), Tatik and Budiyanto (2015), Rehman (2016), and Trinh and Phuong (2016), who prove that company size determines the company's capital structure. In agency theory, Vogt (1994), in Dewi (2008), identifies that the size of the company plays a role in explaining the dividend payout ratio. Empirically, Khoiro et al. (2013) prove that company size has a significant positive influence on dividend policy; the significant positive effect means that larger companies tend to increase their dividend policy. According to the pecking order theory, Smith and Warner (1979), in Hadianto (2007), large companies can easily finance their investments through the capital market because of the small information asymmetry that occurs there: investors can get more information about large companies than about small ones. Thus, obtaining funds through the capital market makes the proportion of debt in the capital structure smaller. Titman and Wessels (1988), in Hadianto (2007), note that issuing equity costs small companies more than large companies; in other words, the larger the company, the cheaper the cost of issuing equity. Hadianto (2007) empirically proves that company size has a significant negative effect on capital structure. This result is consistent with the pecking order hypothesis in that, for large companies, the cost of issuing equity in the capital market is quite low (Titman and Wessels, 1988) and the level of information asymmetry is low (Smith and Warner, 1979). If this is realized, the proportion of equity ownership becomes greater than that of debt. The empirical research of Hapsari (2010, 2011) proves that a larger company size entails a rather low cost of issuing equity in the capital market, so that using equity is more feasible than debt. In signalling theory, Damayanti (2015) states that large companies with high growth rates need more funds for investment activities, so that the funds obtained from retained earnings are not paid out as dividends. Zang (2014), in Amalia (2016), finds that the negative correlation between dividend yield and size shows that small companies are better able to pay dividends. Empirically, Nurhayati (2013) proves that firm size has a significant negative effect on the dividend payout ratio: the larger the size of a company, the smaller the dividend ratio.
Lanawati and Amilin (2015) prove that firm size has a negative sign, which means that an increase in firm size results in a decrease in the dividend payout ratio. Other consistent results are those of Winatha (2001), Damayanti and Achyani (2006) and Sulistyaningsih (2012), who conclude that the firm size variable has a negative effect on the dividend payout ratio. Based on these explanations, research hypothesis 2 (H2) can be formulated, namely:
H2: Firm size moderates the relationship between capital structure and dividend policy.

Research Methodology

The sample in this study consists of 26 companies from a population of 65 manufacturing companies in the basic industry and chemical sector listed on the Indonesia Stock Exchange in 2011-2015, determined by purposive sampling; there are 130 observations. The observed independent variable is capital structure, measured by the debt to total asset ratio (DAR), i.e., the ratio of total liabilities to total assets (Irham Fahmi, 2016). The dependent variable is dividend policy, measured by the dividend payout ratio (DPR), i.e., the ratio of dividend per share to earnings per share (Harmono, 2011). The moderating variable is firm size, measured as the natural log of total assets (Riyanto, 2011). The analysis tool is moderated regression analysis (MRA).

Descriptive Statistics
The results of the descriptive statistical analysis, processed with the SPSS 22 statistical program for each variable over 2011-2015, are presented in the corresponding table (source: processed data). In general, of the three variables analyzed, namely DAR, DPR and FZ, a comparison of the mean and standard deviation shows that the mean is larger than the standard deviation in almost all cases, except for DPR, whose mean of 35.84 is lower than its standard deviation of 46.98. Meanwhile, a comparison between the maximum value and the mean shows a maximum larger than the mean in almost all cases.

Classic Assumption Tests
The normality test uses Kolmogorov-Smirnov, after transformation into natural logarithms (Ln) as a treatment for data that are not normally distributed; the Kolmogorov-Smirnov results in Table 4.3 show an Asymp. Sig. value of 0.183 > 0.05, which indicates that the resulting regression model is normally distributed. The multicollinearity test using the tolerance and VIF parameters shows that the independent variables do not have a very strong direct relationship (correlation): the tolerance values are > 0.10 and VIF < 10, so it can be said that there are no symptoms of multicollinearity in the resulting regression model (Table 2). The heteroscedasticity test using the Glejser approach (SPSS output in Table 4.5) clearly shows that in the regression model there is no inequality of variance of the residuals from one observation to another; this can be seen from the significance probabilities, which are above the 5% level, so there are no symptoms of heteroscedasticity in the resulting regression model. The autocorrelation test using the Durbin-Watson statistic gives a value of 1.759, while the lower limit (dL) is 1.6623 and the upper limit (dU) is 1.7589.
Thus the result obtained (1.7589 < 1.759 < 2.2411) lies in the region dU < d < 4 - dU, i.e., in the non-rejection region, so there is no symptom of autocorrelation in the resulting regression model. For the linearity test, the SPSS output in Table 4.7 shows 20.41 < 154.302, i.e., calculated χ² < table χ², so it can be concluded that the resulting regression model is linear and can be used in this study (Table 2).

Hypothesis Tests

The effect of the debt to asset ratio on the dividend payout ratio. The hypothesis test (Table 4.8) produces a t value of -0.702 with a significance probability of 0.484, which is greater than the predetermined significance level of 0.05, so Ho is accepted and Ha is rejected. These results indicate that the Debt to Asset Ratio (DAR) has a negative and insignificant effect on the Dividend Payout Ratio (Table 3).

The effect of the debt to asset ratio on the dividend payout ratio, moderated by firm size. For the moderating variable, the interaction DAR * Firm Size, the t value is -4.271 with a probability value of 0.000, which is smaller than the specified significance level of 0.05, so Ho is rejected. These results indicate that firm size negatively and significantly moderates the relationship between capital structure and dividend policy.

Discussion

The result for H1 did not match the researchers' expectations. It is not consistent with agency theory (Jensen et al., 1992, in Dewi, 2008), according to which most of the profits will be allocated as reserves to pay off debts, so that the use of too much debt causes a decrease in dividends. The result also contradicts Dewi (2011), Sumiadji (2011), Vo and Nguyen (2014), and Ramachandran and Veeramuthu (2010), who prove that capital structure has a negative effect on the dividend payout ratio (Table 3).

The result for H2 matches the researchers' expectations. It is consistent with the pecking order theory (Smith and Warner, 1979, in Hadianto, 2015) and with Youssef and El-Ghonamie (2015) and Osarentin and Chijuka (2011), who prove that larger companies use more equity than debt (Table 4). The result is also consistent with signalling theory in Damayanti (2015): in large companies with high growth rates, the funds obtained from retained earnings are not paid out as dividends but are used to fund investment activities. The result also corresponds to Nurhayati (2013), Lanawati and Amilin (2015), Winatha (2001), Damayanti and Achyani (2006) and Sulistyaningsih (2012), who prove that firm size has a negative effect on the dividend payout ratio (Table 4).
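For readers who want to reproduce this design, a minimal sketch of the moderated regression specification described above follows, using Python statsmodels rather than SPSS. The file name and column names are hypothetical; only the model structure, DPR regressed on DAR, firm size, and their interaction, is taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy panel: one row per firm-year (26 firms x 5 years = 130 obs).
df = pd.read_csv("idx_basic_chemical_2011_2015.csv")   # columns: DAR, DPR, total_assets

df["FZ"] = np.log(df["total_assets"])                  # firm size = ln(total assets)

# Step 1: direct effect of capital structure on dividend policy (H1).
h1 = smf.ols("DPR ~ DAR", data=df).fit()

# Step 2: moderated regression (MRA); the DAR:FZ interaction term carries H2.
h2 = smf.ols("DPR ~ DAR + FZ + DAR:FZ", data=df).fit()

print(h1.summary())   # t and p for DAR correspond to the paper's Table 3
print(h2.summary())   # a significant negative DAR:FZ coefficient supports H2
```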
Conclusion

Increasing the debt ratio will not automatically increase the dividend ratio because of the low tax savings generated. Tax savings are an incentive for increasing the dividend ratio because of their potential to increase company profits, which are a component considered in calculating the dividend ratio. The implication of this result is that, despite the increase in interest expense and debt repayments, an increase in the debt ratio will not always reduce the company's cash ability to pay dividends. Debt may increase, but because of the low tax savings, an increase in the debt ratio does not provide a strong incentive to increase the dividend ratio.

The company's capital and operational expenditures increase as the size of the company increases, but the company tends to use internal sources of funding, thereby reducing the debt ratio. The larger the size of the company, the higher the need for funding; since that funding comes from retained earnings, more of the available cash is used for investment and company operations, and this policy reduces the dividend ratio. The implication of these results is that, as firm size grows, companies tend to choose sources of funding from the capital market (stocks) rather than from debt, because the costs are more efficient and the level of information asymmetry is low, while debt is considered to increase the risk of financial distress. The results also imply that the greater the size of the company, the greater the need for funds for investment and operations; the cheapest and least risky source of funding for these needs is the internal one, namely retained earnings, with the consequence of a lower dividend ratio.

A limitation of this research is the use of a single proxy for each observed variable, so there is no information about what would happen if other proxies were used; future research should use more than one proxy per variable. The results also cannot be generalized to all sectors listed on the Indonesia Stock Exchange, because only the basic industry and chemical sector was examined; subsequent research should expand the population coverage to all sectors listed on the Indonesia Stock Exchange.
Conditions for finiteness and bounds on moments of record values from iid continuous life distributions

We consider the standard and kth record values arising in sequences of independent identically distributed continuous and positive random variables with finite expectations. We determine necessary and sufficient conditions on the type of record k, its number n and the moment order r so that the rth moment of the nth value of the kth record is finite for every parent distribution. Under these conditions we present the optimal upper bounds on the moments, expressed in scale units that are the respective powers of the first population moment. The theoretical results are illustrated by some numerical evaluations.

Introduction

The notion of records in a probabilistic setup was defined by Chandler (1952). Given a sequence of random variables $X_1, X_2, \ldots$, we say that an upper record value occurs in the sequence at time $j$ and equals $X_j$ if $X_j > X_i$ for all $i < j$. Using the notion of order statistics $X_{1:j} \le \ldots \le X_{j:j}$, arising by arranging $X_1, \ldots, X_j$ in ascending order, we obtain a record at time $j$ if $X_{j:j} > X_{j-1:j-1}$. Dziubdziela and Kopociński (1976) generalized the notion by introducing so-called kth records, which appear when a new value of the kth maximum becomes greater than the previous one. Formally, for $n \ge 1$ the sequences of consecutive kth record times $T^{(k)}_n$ and kth record values $R^{(k)}_n$ are defined by

$$T^{(k)}_1 = k, \qquad T^{(k)}_{n+1} = \min\bigl\{ j > T^{(k)}_n : X_j > X_{T^{(k)}_n - k + 1 : T^{(k)}_n} \bigr\}, \qquad R^{(k)}_n = X_{T^{(k)}_n - k + 1 : T^{(k)}_n}.$$

For $k = 1$ the above definitions describe the classic record times and values of Chandler (1952). If the joint distribution of $X_1, X_2, \ldots$ does not admit repetitions almost surely (e.g., if the $X_i$ are iid with a continuous parent distribution), then the sequences of kth records are infinite. The kth records with $k \ge 2$, although seemingly less natural and intuitive than the first ones, attract increasing interest from statistical investigators of extreme events. The main reason is that they occur much more often. Indeed, if a new first record appears in the sequence, then all the kth records change: the old kth records become new (k+1)th records. If a new observation becomes the $\ell$th maximum, then we get new kth records for $k \ge \ell$, while the kth records with $k < \ell$ remain unchanged. The theory of records is best developed in the model of iid continuously distributed random variables. In particular, the distribution function of the nth value of the kth record is the composition $G^{(k)}_n \circ F$, where

$$G^{(k)}_n(x) = 1 - (1-x)^k \sum_{j=0}^{n-1} \frac{[-k \ln(1-x)]^j}{j!}, \qquad 0 \le x < 1, \tag{1}$$

and $F$ is the parent distribution function of the original observations. Moreover, if $F$ has a density function $f$ with respect to the Lebesgue measure, then the density function of $R^{(k)}_n$ exists and has the form $(g^{(k)}_n \circ F) \cdot f$, where

$$g^{(k)}_n(x) = \frac{k^n}{(n-1)!}\,[-\ln(1-x)]^{n-1}(1-x)^{k-1}, \qquad 0 < x < 1. \tag{2}$$

These facts can be found, for example, in the monographs by Arnold et al. (1998) and Nevzorov (2001), devoted to record value theory. In this paper we consider the classic model of iid random variables $X_1, X_2, \ldots$ with a continuous distribution function $F$. We additionally assume that $X_1$ is positive and has a finite expectation $\mu > 0$. Our main results are the following. For every type $k \in \mathbb{N}$ and number $n \in \mathbb{N}$ of the record value $R^{(k)}_n$, we determine the necessary and sufficient conditions on the raw moment order $r > 0$ such that $E(R^{(k)}_n)^r < \infty$ for all parent distribution functions $F$ satisfying the assumptions. The conditions are: $r \le k$ if $n = 1$, and $r < k$ if $n \ge 2$.
Moreover, under these conditions we establish the sharp upper bounds on $E(R^{(k)}_n)^r$, expressed in scale units that are the respective powers $\mu^r$ of the mean $\mu$ of the parent distribution, valid for all continuous $F$ supported on $\mathbb{R}_+$. The bounds are presented in Sect. 2. In particular, these results provide the sufficiency part of our conditions for finiteness of moments of record values. The necessity of the conditions is proven in Sect. 3. In Sect. 4 we present and briefly discuss some numerical examples of the bounds determined in Sect. 2. Some conclusions from our results are presented in Sect. 5.

The problem of existence of moments of kth record values has hitherto been solved only for some special cases of $k$ and $n$. For $k = n = 1$ it is trivial, because $R^{(1)}_1 = X_1$. For $k = 1 < n$ the solution was presented by Nagaraja (1978, Sect. 2): $E(R^{(1)}_n)^r < \infty$ for all baseline distributions with finite mean iff $r < 1$. For $n = 1 < k$, the conclusion can be deduced from Sen (1959), who considered order statistics; this was explicitly stated by Papadatos (2021). Here we focus on the remaining cases $k, n \ge 2$. To the best of our knowledge, the most general sufficient conditions were presented by Cramer et al. (2002, Theorem 2.4 and the following Remark (v)): they proved that finiteness of $E X_1$ assures that $E(R^{(k)}_n)^r < \infty$ if $r < k$ and $r \le n$.

The first result concerning bounds on the expectations of record values was presented by Nagaraja (1978), who determined the sharp evaluations of the deviations of the expectations of standard record values from the population means, expressed in population standard deviation units. Raqab (1997) derived analogous bounds for kth records. These results were generalized by Raqab (2000) and Raqab and Rychlik (2002), respectively, who considered more general scale units based on various central absolute moments of the original variables. Klimczak (2007) considered kth records from bounded populations, expressing the expectation bounds in terms of the length of the support interval. Some optimal mean-variance bounds on the expectations of record values from restricted families of distributions are also known; the most general results are due to Bieniek (2007) and Goroncy (2019), who considered families with monotone generalized failure rates. We also mention the bounds on differences of record values studied in Danielak (2005) and Danielak and Raqab (2004). Evaluations of raw moments of records other than the first ones have not been presented in the literature so far; we can only mention analogous results for order statistics determined in Papadatos (2021).

Bounds

In this section we present the sharp bounds on the ratios $E(R^{(k)}_n)^r/\mu^r$ for $r < k$, $k \ge 1$ and $n \ge 2$. We exclude from our considerations the first values of kth records, because these are minima of the first $k$ observations. The bounds for the moments of order statistics, and of sample minima in particular, were determined by Papadatos (2021). He proved that, for $r \le k$,

$$E\,X_{1:k}^r \le \mu^r. \tag{3}$$

For $r < k$ the bound is attained by any degenerate parent distribution (see Papadatos (2021), Remark 4). If $r = k$, we get equality in Eq. (3) iff the parent distribution is supported on two points 0 and $p$, with probabilities $1-p$ and $p$, for arbitrary $0 < p < 1$ (see Papadatos (2021), Corollary 3). A classic result by Sen (1959) says that if $r > k$, then there exist parent life distributions such that for iid $X_1, \ldots, X_k$ we have $E X_1 < \infty$ and $E X_{1:k}^r = +\infty$.
Since in our model continuity of the random variables is required, we can replace $X_{1:k}^r$ by $(R^{(k)}_1)^r$ in the above relations, but the attainability conditions should be modified: the bounds are attained in the limit by sequences of continuous parent distributions converging weakly to the corresponding optimal discrete ones. In the sequel, we use the following lemma.

Lemma 1. Fix $k, n \ge 2$ and $1 \le r < k$. Then the function $h$ defined in Eq. (4) is maximized by the unique point $0 < \lambda = \lambda^{(k)}_n(r) < 1 - \exp\left(-\frac{n-1}{r+k-2}\right)$, which is the unique solution to equation (5). We denote the maximal value of the function (4) by $B^{(k)}_n(r)$; the middle formula for it follows from Eq. (5), and the last one provides its explicit form. The proof of Lemma 1 is postponed to the Appendix.

We first take into account the moments of record values of orders $0 < r < 1$ for $k, n \ge 2$. To this end we use a simplified version of Theorem 1 in Moriguti (1953) (see Lemma 2) and the Hölder inequality (see Lemma 3).

Lemma 2. Suppose that a real function $g$ defined on $[a, b]$ has a finite integral, and let $\bar g$ denote (the right-continuous version of, say) the derivative of the greatest convex minorant of its antiderivative. Then, for every nondecreasing function $f$ on $[a, b]$, $\int_a^b f(x)g(x)\,dx \le \int_a^b f(x)\bar g(x)\,dx$.

We use the above construction for the density functions (2). For $k, n \ge 2$, the derivative of the greatest convex minorant of the antiderivative (1) of (2) has the form (7), where $\lambda^{(k)}_n(1)$ is the unique solution to Eq. (5) with $r = 1$. The greatest convex minorant is equal to the antiderivative $G^{(k)}_n$ (see Eq. (1)) on the interval $(0, \lambda^{(k)}_n(1))$ and is less than $G^{(k)}_n$ on $(\lambda^{(k)}_n(1), 1)$. These facts were established and applied in a number of papers, see, e.g., Raqab (1997) and Raqab and Rychlik (2002). The Hölder inequality can be found, e.g., in Mitrinović (1970).

Lemma 3. Let $g$ and $h$ be non-negative, non-zero elements of the Banach spaces $L^p$ and $L^q$ with $\frac1p + \frac1q = 1$, $p, q > 1$. Then $\int g h \le \|g\|_p\, \|h\|_q$.

Theorem 1. Let $X_1, X_2, \ldots$ be an infinite sequence of iid continuous positive random variables with mean $0 < \mu < \infty$. Then for $k, n \ge 2$ and $0 < r < 1$ we have the sharp upper bound (8) on $E(R^{(k)}_n)^r/\mu^r$. The bound in Eq. (8) is attained in the limit by continuous parent distributions tending weakly to the distribution function that has an atom of size $1 - \lambda^{(k)}_n(1)$ at the right end-point of its support and an absolutely continuous part between 0 and the atom, defined by means of the inverse of the increasing part of (2). The proof of Theorem 1 can be found in the Appendix.

Now we consider the cases $k = 1$ with $n \ge 2$ and $0 < r < 1$. The idea behind the bounds is analogous to that of the previous theorem; however, for $k = 1$ we can obtain explicit formulae, and we present them below. The reason is that $g^{(1)}_n$ in (2) is increasing, so its antiderivative is convex and coincides with its greatest convex minorant, which substantially simplifies the evaluations.

Theorem 2. For $X_1, X_2, \ldots$ iid random variables with expectation $0 < \mu < \infty$, and the respective classic records with $k = 1$, $n \ge 2$ and $0 < r < 1$, we get

$$E\,(R^{(1)}_n)^r \le \frac{\left[\Gamma\!\left(\frac{n-r}{1-r}\right)\right]^{1-r}}{(n-1)!}\,\mu^r. \tag{11}$$

We postpone the proof of Theorem 2 to the Appendix. If $\frac{n-r}{1-r}$ is an integer, $m$ say, i.e., $r = \frac{i+1}{n+i}$ for some $i = 0, 1, \ldots$ (so that $m = n+i+1$), then the right-hand side of Eq. (11) takes the yet nicer form $\frac{[(m-1)!]^{(n-1)/(m-1)}}{(n-1)!}\,\mu^r$.

We now concentrate on evaluations of the rth moments of kth record values for $r \ge 1$. At the beginning we exclude from our investigations the classic record moments $E(R^{(1)}_n)^r$, $n \ge 2$, because there exist parent distributions such that $E X_1 = \mu < \infty$ and $E R^{(1)}_2 = +\infty$ (see Nagaraja (1978)). Otherwise we use the following lemma, which was presented in Papadatos (2021), Corollary 4.
Lemma 4. For $r \ge 1$, inequality (12) holds, and the equality is attained if $F$ is a two-point distribution function supported on 0 and a positive number. Observe that for $r = 1$, relation (12) becomes a trivial equality with no assumptions on $F$.

Theorem 3. Let $X_1, X_2, \ldots$ satisfy the assumptions of Theorem 1. For $n, k \ge 2$ and $1 \le r < k$ we have the sharp bound $E(R^{(k)}_n)^r \le B^{(k)}_n(r)\,\mu^r$ (13), where $B^{(k)}_n(r)$ is the maximal value of the function (4), attained at the point $\lambda^{(k)}_n(r)$ determined by Eq. (5). The equality in Eq. (13) is attained in the limit by continuous parent distributions tending weakly to a two-point distribution. The proof of Theorem 3 is presented in the Appendix.

Conditions for moment finiteness

Here we present the necessary and sufficient conditions for finiteness of the rth moments of the nth values of kth records, for arbitrary continuous life distributions of the baseline sequence with a finite mean. As mentioned in the Introduction, the conditions are known in some special cases. If $n = 1$, then $R^{(k)}_1 = X_{1:k}$, and the result follows immediately from the classic paper by Sen (1959) on order statistics (see also Papadatos (2021)). If $r \le k$, then for every parent distribution with $E X_1 < \infty$ we have $E(R^{(k)}_1)^r < \infty$ as well; for $r > k$ there exist parent distribution functions such that $E X_1 < \infty$ and $E(R^{(k)}_1)^r = \infty$. An example of such a distribution function is

$$F(x) = 1 - \frac{e}{x \ln^2 x}, \qquad x > e, \tag{14}$$

and $F(x) = 0$ otherwise (see Papadatos (2021), Remark 3). If $n \ge 2$ and $k = 1$, the respective condition $r < 1$ was established in Nagaraja (1978): in Lemma 2.1 of that paper he proved that finiteness of $E X_1$ implies $E(R^{(1)}_n)^r < \infty$ for all $n \ge 2$ and $0 < r < 1$. On the other hand, for the distribution function (14) we have $E X_1 = 2e < \infty$ and $E R^{(1)}_n = \infty$ for all $n \ge 2$.

In Theorem 4 below we treat all the remaining cases $n, k \ge 2$. Cramer et al. (2002) proved that finiteness of $E X_1 = \mu$ assures that $E(R^{(k)}_n)^r < \infty$ if $r < k$ and $r \le n$. Our conditions are essentially weaker.

Theorem 4. Let $X_1, X_2, \ldots$ be positive iid random variables with a continuous marginal distribution and finite mean. The necessary and sufficient condition for finiteness of $E(R^{(k)}_n)^r$ for all parent distributions, for given $n, k \ge 2$, is $r < k$.

We present the proof of Theorem 4 in the Appendix. It consists in delivering an example of a single absolutely continuous life marginal distribution of the elements of the sequence $X_1, X_2, \ldots$ such that $E X_1 < +\infty$ and $E(R^{(k)}_n)^r = +\infty$ for all $k \ge 2$, $n \ge 2$ and $r \ge k$. We finally present a simple generalization of the results of this section.

Corollary 1. Consider a sequence of positive and continuous iid random variables $X_1, X_2, \ldots$. Then for an arbitrary parent distribution with finite pth moment we have $E(R^{(k)}_n)^r < \infty$ iff either $n = 1$ and $r \le pk$, or $n \ge 2$ and $r < pk$.

Numerical results

Here we illustrate the results of Sect. 2 by some numerical examples. In Tables 1, 2 and 3 we present the bounds on the ratios of the rth moments of kth records to the rth powers of the population means, for some values of the first, second and third records, respectively. In all the cases we consider the second, third, fifth and tenth values of the kth records. Table 1 is based on Theorem 2 and contains the evaluations for moments of orders r = 0.25, 0.5 and 0.9. In Tables 2 and 3 we present the bounds for moments of orders $r = 0.5i$, $i = 1, \ldots, 2k-1$, and $r = k - 0.1$, where k = 2 and 3, respectively. The results in the first columns of Tables 2 and 3 follow from Theorem 1, and the remaining ones are calculated using Theorem 3.
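Before turning to the tabulated values, the finiteness statements can be sanity-checked numerically for one concrete light-tailed parent. The sketch below simulates kth record values directly from their definition, assuming a standard exponential parent, for which $R^{(k)}_n$ is known to follow a Gamma(n, scale 1/k) law and $\mu = 1$; the simulated moments therefore stay far below the worst-case bounds of Tables 1-3.

```python
import numpy as np
from math import gamma
rng = np.random.default_rng(1)

def kth_records(x, k, n):
    """First n values of the kth record sequence of the array x:
    R1 is the minimum of the first k observations, and a new record
    is registered each time the kth largest value strictly increases."""
    topk = sorted(x[:k])            # topk[0] is the current kth maximum
    recs = [topk[0]]
    for v in x[k:]:
        if v > topk[0]:             # the kth maximum increases
            topk[0] = v
            topk.sort()
            recs.append(topk[0])
            if len(recs) == n:
                break
    return recs

k, n, r, reps = 2, 3, 1.5, 4000
# 5000 draws per replication is far more than enough to observe R^(2)_3
sim = [kth_records(rng.exponential(size=5000), k, n)[-1] for _ in range(reps)]
mc = np.mean(np.power(sim, r))

# For a standard exponential parent, R^(k)_n ~ Gamma(n, scale 1/k), so
# E (R^(k)_n)^r = Gamma(n + r) / (Gamma(n) * k**r), while mu = 1.
exact = gamma(n + r) / (gamma(n) * k**r)
print(f"simulated E(R^({k})_{n})^{r} = {mc:.3f}, exact = {exact:.3f}")
```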
All the bounds are greater than 1, which is a consequence of the sharp bounds proved by Papadatos (2021). We also note that the bounds are increasing in $n$ and decreasing in $k$, which is also obvious, because we have the relations $R^{(k)}_n < R^{(k)}_{n+1}$ and $R^{(k+1)}_n < R^{(k)}_n$ following from the definitions. Moreover, the bounds increase with respect to $r$ and tend to infinity as $r$ approaches $k$. In the case $k = 1$ we can apply Eq. (11), together with the factorial approximation of the Gamma function and the Stirling formula, in order to obtain a simpler asymptotic expression whose ratio to the right-hand side of Eq. (11) converges to 1 as $n$ increases. We cannot provide such approximations for the bounds with $r \to k \ge 2$, because the $\lambda^{(k)}_n(r)$ do not have analytic representations, and the bounds, which depend on them, do not have explicit formulae. Anyway, we can see that for $k = 2, 3$ and $r = k - 0.1$ the bounds $B^{(k)}_{10}(r)$ are greater than $10^9$.

Conclusions

We consider the moments of kth upper record values in the classic model of sequences of independent, identically and continuously distributed positive random variables. Our purpose is to determine the necessary and sufficient conditions on the moment orders $r > 0$ such that the rth moment $E(R^{(k)}_n)^r$ of the nth value, $n \ge 1$, of the kth record, $k \ge 1$, is finite for all parent distributions with a finite mean. Since the solution is known in the literature for the particular cases $k = 1 \le n$ and $n = 1 \le k$, we focus on the remaining cases $k, n \ge 2$, proving that the necessary and sufficient condition is $r < k$ for arbitrary $n \ge 2$. The necessity proof consists in constructing a parent distribution with a finite expectation such that $E(R^{(k)}_n)^r = +\infty$ for all $n \ge 2$ and $r \ge k \ge 2$. Instead of the sufficiency proof, we provide a stronger result: for every $n \ge 2$ and $r < k$, $k \ge 1$, we determine the sharp upper bounds on $E(R^{(k)}_n)^r$ over all parent distributions with finite means, expressed in scale units that are the rth powers of the mean of a single observation. Exemplary numerical evaluations show that the bounds are extremely large when the moment order $r$ is close to the border value $k$, even for moderate $n$. Our findings allow researchers not to worry about the existence of moments of kth records of orders less than $pk$ when they consider sequences of random variables with arbitrary parent life distributions with finite pth moments. We hope that the tools presented in this paper will be useful in determining necessary and sufficient conditions for finiteness of moments in other models of ordered random variables, e.g., for progressively type II censored order statistics and generalized order statistics.

Appendix

The Appendix contains the proofs of Lemma 1 and Theorems 1-4.

Proof of Lemma 1. The function (4) is positive and satisfies $h(0) = 1$; by the de l'Hospital rule, the limit relations (15) hold. The numerator $N(x)$, say, of the derivative (16) has a derivative that is negative for $0 < x < \nu^{(k)}_n(r) = 1 - \exp\left(-\frac{n-1}{r+k-2}\right) < 1$ and positive for $\nu^{(k)}_n(r) < x < 1$. It follows that $N(x)$ is first decreasing and then increasing. Since $N(0) = r \ge 1$ and $N(1) = 0$, the function is positive on some interval $[0, \lambda)$, with $\lambda = \lambda^{(k)}_n(r) < \nu^{(k)}_n(r)$, and negative elsewhere. The same concerns (16), because its denominator is always positive. Since $h(0) = 1$ and Eq. (15) holds, we conclude that (4) is increasing on $[0, \lambda)$ and decreasing on $(\lambda, 1)$. Obviously its unique maximum point $\lambda = \lambda^{(k)}_n(r)$ is the only point that satisfies Eq. (5)
(cf. Eq. (16)). It is also less than $\nu^{(k)}_n(r) = 1 - \exp\left(-\frac{n-1}{r+k-2}\right)$. ∎

Proof of Theorem 1. The bound follows from a chain of two inequalities. The first inequality holds by Lemma 2, due to the facts that $[F^{-1}(x)]^r$ is nondecreasing and Eq. (7) holds. The latter is an application of the Hölder inequality with parameters $p = \frac1r > 1$ and $q = \frac{p}{p-1} = \frac{1}{1-r} > 1$. Note that, changing the variables twice, $y = -\ln(1-x)$ and $z = \frac{k-r}{1-r}\,y$, we can evaluate the resulting integral in closed form, which yields the bound (8). ∎

Proof of Theorem 3. By Lemma 1, the equality in the former inequality occurs iff the only value of $F$ on $(0, \infty)$ different from 1 is $\lambda^{(k)}_n(r)$. By Lemma 4, for $r > 1$, we get the equality in the latter inequality if $F$ is supported on 0 and a positive number. Combining the conditions, we deduce that the only distribution that attains the equality in Eq. (13) is the one that assigns probability $\lambda^{(k)}_n(r)$ to 0 and $1 - \lambda^{(k)}_n(r)$ to a positive number. This distribution satisfies the moment condition if this positive number amounts to $\mu/(1 - \lambda^{(k)}_n(r))$. This ends the proof. ∎

Proof of Theorem 4. The sufficiency of the condition is deduced from Theorem 3. To verify its necessity, we should present examples of parent distributions with finite means such that $E(R^{(k)}_n)^r = \infty$ for given $n \ge 2$ and $r \ge k \ge 2$. In fact, it suffices to construct distributions such that $E(R^{(k)}_2)^k = \infty$ for $k \ge 2$; our construction works for all $k \ge 2$. We take the sequence of positive numbers $e_j$, $j = 0, 1, \ldots$, tending to $\infty$, defined by the recursive relation $e_0 = 1$, $e_j = e^{e_{j-1}}$, $j = 1, 2, \ldots$, and the family of disjoint intervals $I_j = (e_j, e_j + 1)$, $j = 0, 1, \ldots$. Consider the mixture of distributions $\sum_{j=0}^{\infty} \alpha_j U_j$, where $U_j$ is the uniform distribution over the interval $I_j$, $j = 0, 1, \ldots$, and the weights $\alpha_j$ are chosen so that the mixture is a proper probability measure. The measure is obviously absolutely continuous with respect to the Lebesgue measure, and its density function (22) is constant on each $I_j$ and vanishes elsewhere.

Suppose that the sequence of iid life random variables $X_1, X_2, \ldots$ has the common density function (22); it has a finite expectation. For proving that $E(R^{(k)}_2)^k = \infty$, we need to calculate the contributions of the intervals $I_j$, $j = 1, 2, \ldots$. Recall that the corresponding distribution function of the second value of the kth record is $G^{(k)}_2 \circ F$, where, by (1),

$$G^{(k)}_2(x) = 1 - (1-x)^k\,[1 - k\ln(1-x)].$$

The distribution function $F$ in (23) satisfies, for $j = 1, 2, \ldots$,

$$[1 - F(e_j)]^k = \frac{1}{j^{2k}(e_j - e_{j-1})^k}, \qquad -\ln(1 - F(e_j)) = 2\ln j + \ln(e_j - e_{j-1}).$$

We are now in a position to prove that $E(R^{(k)}_2)^k = \infty$. It suffices to show that the summands of the corresponding series do not tend to 0 as $j \to \infty$; the summand associated with $I_j$ is bounded from below by

$$\frac{e_j^k\,[1 + 2k\ln j + k\ln(e_j - e_{j-1})]}{j^{2k}\,(e_j - e_{j-1})^k} \;-\; \frac{e_j^k\,[1 + 2k\ln(j+1) + k\ln(e_{j+1} - e_j)]}{(j+1)^{2k}\,(e_{j+1} - e_j)^k},$$

and the relevant limit relations, valid for all $p \in \mathbb{R}$, show that this difference does not vanish as $j \to \infty$. Rescaling $x$ in the density (22), for arbitrary $\mu > 0$ we obtain $E X_1 = \mu$ and $E(R^{(k)}_2)^k = \infty$. ∎

The integer part of $e_5$ has approximately $10^{1.012558 \cdot 10^{1656520}}$ digits. It follows that the support intervals $I_j$, $j = 0, 1, \ldots$, are very dispersed over the positive half-axis, and the corresponding distribution function has a very heavy tail. One could decrease the dispersion by replacing $e$ in the construction of the above proof by some $1 < a < e$, especially one close to 1, but this would generate more sophisticated formulae. Anyway, for any $a > 1$ the tails are heavier than those of the distribution functions built from the m-fold composition $\ln_m x$ of the logarithm function, for any integer $m \ge 2$ and $\varepsilon > 0$.
It can be checked that these distribution functions have finite expectations and satisfy $E_{m,\varepsilon}(R^{(k)}_n)^k < \infty$ iff $n \le k$, while $E_{m,\varepsilon}(R^{(k)}_n)^r = \infty$ for all $r > k$ with $n \ge 2$, and for $r = k$ with $n \ge k + 1$.
Asymmetric Little–Parks oscillations in full shell double nanowires

Little-Parks oscillations of a hollow superconducting cylinder are of interest for flux-driven topological superconductivity in single Rashba nanowires. The oscillations are typically symmetric in the orientation of the applied magnetic flux. Using double InAs nanowires coated by an epitaxial superconducting Al shell which, despite the non-centro-symmetric geometry, behaves effectively as one hollow cylinder, we demonstrate that a small misalignment of the applied parallel field with respect to the axis of the nanowires can produce field-asymmetric Little-Parks oscillations. These are revealed by the simultaneous application of a magnetic field perpendicular to the misaligned parallel field direction. The asymmetry occurs both in the destructive regime, in which superconductivity is destroyed for half-integer quanta of flux through the shell, and in the non-destructive regime, where superconductivity is depressed but not fully destroyed at these flux values.

I. LITTLE-PARKS OSCILLATIONS IN 5 DEVICES

Figures S1a-e show the data and fits of Little-Parks oscillations in five different devices. The parameters used for the fits can be found in Table S1, together with their corresponding errors. The error bars are rough estimates obtained by changing one parameter at a time while keeping the other three fixed. All devices are fitted with the Little-Parks model with good agreement. Figures S1a,b correspond to Devices 1 and 2, analyzed in the main text. To obtain a good fit, the parameter A⊥ is treated as a fitting parameter. In order to gain insight into the origin of A⊥, we examine this parameter versus junction length for the five devices in Fig. S1f; however, no correlation between the two parameters is found, as expected [S1]. Figure S1g presents the correlation between the d*/ξ ratio and the switching current at the first half-integer flux quantum, I_sw(Φ=Φ0/2). Here, d* is the diameter of a cylinder with the same area as the two hexagonal nanowires (effective diameter) and ξ the coherence length extracted from the fits. Devices with a small ratio exhibit destructive Little-Parks oscillations (Devices 2 and 3), while those with a larger ratio exhibit non-destructive Little-Parks oscillations, in agreement with theoretical work [S2] and previous experiments on nanowires [S3, S4], even though our device cross section is ellipsoidal rather than circular.

Table S1. Model parameters used to fit the data of all devices in Fig. S1. From left to right: coherence length (ξ), effective perpendicular flux parameter (A⊥), effective parallel flux area (A∥), and the ratio of shell thickness (ts) to effective single-cylinder diameter (d*).

Figure S2. Dependence of critical current on field angle and perpendicular field. (a,b) Colormaps of differential conductance, dV/dI, plotted as a function of bias current, I, and (a) the angle ϕ between the vector magnetic field B and coil X, and (b) the magnetic field applied perpendicular to the double nanowires with misalignment θ = 4.4°, B⊥^θ. In (a), the critical current shows clear modulations and reaches its minimum (maximum) when ϕ is such that θ = 90° (θ = 0°), reflecting the phenomenological finding that the area threaded by parallel magnetic flux is smaller than the effective perpendicular parameter (A∥ < A⊥). The critical perpendicular field found from the measurement in (b) is significantly smaller than the parallel critical field (0.1 T versus 0.95 T), supporting the data and interpretation in (a).
The asymmetry against ±Bθ⊥ (ϕ = 2.2, 5.34 rad) in both (a) and (b) is attributed to coil remanence. The data were fitted with a calculation of the critical current, Ic (dashed lines), using the single hollow superconducting cylinder model. Reasonably good fits are obtained, with the fit quality reduced by the asymmetries. Data obtained from Device 1.
Here we analyze the dependence of Ic on a rotating magnetic field of magnitude Br = 0.1 T, showing that the two parameters A⊥ and A∥ defined in the main text have to be different in order to obtain a critical current that modulates with a rotating magnetic field. Figure S2a shows such a measurement on Device 1, where the black dashed line is the model we use based on Eqs. 1 and 2 of the main text. As the applied magnetic field is weak, we have n = 0 and the last term of Eq. 2 (α∥) is negligible. As a result, α∥ and α⊥ would be identical if the two parameters were the same. A calculation from Ref. [S1] for a solid cylinder found a factor of 2 between the parameters A∥ and A⊥ (A⊥/A∥ = 2), while we experimentally find A⊥/A∥ ≈ 2.5-5 for our elliptical, hollow cross-section (13 nm shell) full-shell nanowires, in agreement with the calculation in that A⊥ is larger than A∥. Theoretical work on our specific system is needed to establish whether these findings are consistent with theory. Figure S2b presents dV/dI of Device 1 as a function of I and of the field applied at an angle perpendicular to the nanowire axis. The black dashed line is the fit, showing good agreement with the data, using the same parameters as in Fig. S2a and Fig. 2 of the main text.
III. TEMPERATURE DEPENDENCE OF LITTLE-PARKS OSCILLATIONS
Figure S3 presents Little-Parks oscillations of Ic and Tc of Device 3, justifying Eq. 4 of the main text. Negative values of current are chosen to capture the switching current, as the sweep direction is from positive to negative. The relation between the zero-field and finite-field critical current and critical temperature noted in Eq. 4 of the main text is verified in the data. Explicitly, at Bθ = 0 we measure Isw(Bθ = 0) = −24 µA and Tc(Bθ = 0) = 1.3 K, and at Bθ such that … For these values we verify the relation between Ic/Ic0 and Tc/Tc0 in Eq. 4 of the main text to within 5%, assuming Isw = Ic. The lack of additional lobes beyond those at Φ/Φ0 = ±1 is due to a large angle misalignment, deduced from the fit in Fig. S1d, which effectively reduces the critical magnetic field of the device.
IV. MAGNETIC FIELD REMANENCE
In this section we discuss the finite magnetic field remanence of the X and Z coils that were used to probe the Little-Parks effect in our devices. Figures S4a,b show measurements of Ic as a function of Bx swept upwards and downwards, respectively, as noted by the red arrows. Note that the maximum value of Ic shifts by ≈ 10 mT between the two panels. Figures S4c,d exhibit the same effect for the Z coil, Bz. This effect explains the small inconsistencies between datasets, mostly visible near zero magnetic field, when the magnetic field sweep direction changes.
FIG. S4. Evidence for finite remanence in coils X and Z. Colormaps of differential conductance, dV/dI, plotted as a function of positive bias current, I, and field in (a,b) coil X, Bx, and (c,d) coil Z, Bz. Arrows on the horizontal axis indicate the field direction.
The pattern of destruction of superconductivity is asymmetric and hysteretic in the direction of Bx and Bz, with a small field-direction-dependent offset of < 20 mT seen in both cases.
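As a rough numerical illustration of the destructive versus non-destructive distinction discussed in Section I, the textbook thin-wall Little-Parks expression, Tc(Φ) = Tc0 [1 − (ξ/R)² min_n (n − Φ/Φ0)²], already reproduces both regimes depending on the ratio of coherence length to cylinder radius. This is only a sketch: the fits in this work use the full hollow-cylinder critical-current model including the A⊥ and A∥ terms, and all parameter values below are hypothetical.

import numpy as np

def tc_little_parks(phi_over_phi0, xi_nm, radius_nm, tc0_K=1.3):
    # Thin-wall Little-Parks Tc modulation. The minimum over the
    # fluxoid number n is attained at the nearest integer; negative
    # Tc values (the destructive regime) are clipped to zero.
    n = np.round(phi_over_phi0)
    suppression = (xi_nm / radius_nm) ** 2 * (n - phi_over_phi0) ** 2
    return np.clip(tc0_K * (1.0 - suppression), 0.0, None)

phi = np.linspace(-1.5, 1.5, 301)
tc_destructive = tc_little_parks(phi, xi_nm=180.0, radius_nm=70.0)    # Tc -> 0 near half-integer flux
tc_nondestructive = tc_little_parks(phi, xi_nm=80.0, radius_nm=70.0)  # Tc only depressed

Consistent with the trend in Fig. S1g, a small d*/ξ ratio (large ξ/R) drives Tc to zero around half-integer flux quanta, while a large ratio only depresses it.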
1,731.8
2021-06-02T00:00:00.000
[ "Physics" ]
Expression and metabolism profiles of CVT associated with inflammatory responses and oxygen carrier ability in the brain Abstract Aim As the main type of venous stroke, cerebral venous thrombosis (CVT) has been rising in incidence. However, the comprehensive mechanisms behind it remain unclear. Thus, a multi-omics study is required to investigate the mechanisms after CVT and to elucidate the characteristic pathology of venous versus arterial stroke. Methods Adult rats were subjected to CVT and MCAO models. Whole-transcriptome sequencing (RNA-seq) and untargeted metabolomics analysis were performed to construct the transcriptome and metabolism profiles of rat brains after CVT and after MCAO. Difference analysis, functional annotation, and enrichment analysis were also performed. Results Through RNA-seq analysis, differentially expressed genes (DEGs) were screened. 174 CVT-specific genes, including Il1a, Ccl9, Cxcl6, Tnfrsf14, etc., were detected. The hemoglobin genes, including both Hba and Hbb, were significantly downregulated after CVT compared to both the MCAO and Sham groups. Metabolism analysis showed that CVT had higher metabolic heterogeneity than MCAO. Metabolites including N-stearoyltyrosine, 5-methoxy-3-indoleacetate, afegostat, pipecolic acid, etc., were specifically regulated in CVT. Immune infiltration analysis showed that CVT had a stronger immune response, with increased abundance of certain types of immune cells, especially T helper cells. Importantly, we found prevalent activation of inflammatory chemokine and cytokine signaling, the NOD-like receptor pathway, and neutrophil extracellular trap formation. Conclusion We explored and analyzed the gene expression and metabolomic characteristics of CVT, revealed the specific inflammatory reaction mechanism of CVT, and identified markers at the transcriptome and metabolism levels. These findings point the way toward early diagnosis and treatment of CVT.
| INTRODUCTION
Stroke, one of the leading causes of death and severe disability worldwide, can be classified as arterial or venous according to the vascularity of the lesion. Ischemic strokes caused by arterial vascular occlusion/stenosis account for 87% of arterial strokes. 1 Cerebral venous sinus thrombosis, the main type of venous stroke, is a particular subtype caused by interruption of venous blood flow due to thrombosis of the venous vessels, accounting for 0.5%-1% of all strokes but for 14%-20% of strokes in young adults. 2 Importantly, the incidence of CVT has been rising with the advent of diagnostic techniques.
Compared to arterial stroke, CVT has the following characteristics: (1) a low incidence rate, currently estimated at 13.2 to approximately 15.7 per million per year 3,4 ; (2) a high prevalence in the young, with female dominance and a male-to-female ratio of up to 1:3.5 5 ; (3) complex and varied clinical manifestations, with great individual differences; (4) difficult early diagnosis that is often delayed or even missed, with a median delay of 7 days and a 73% missed-diagnosis rate 6-8 ; (5) risk factors that differ from those for arterial stroke; (6) drug regimens that differ between the two stroke subtypes, with anticoagulation being the first-line treatment option for CVT. These differences lead us to speculate that there may be fundamental differences in the molecular pathology of arterial and venous stroke. So far, extensive and comprehensive mechanistic research has been performed on arterial stroke, but there are few studies on venous stroke, which limits the recognition of venous stroke and the development of related drugs.
Multi-omics technologies offer the possibility of a more accurate elucidation of diseases by integrating the many interconnected and interacting components of biological systems to study the mechanisms of complex biological processes. Among them, combined transcriptome and metabolome research protocols have become popular in recent years and can help filter key genes, metabolites, and metabolic pathways out of a huge amount of data. In the study of intestinal inflammation, Liu et al. characterized the intestinal toxicity of BPF exposure by GC-MS untargeted metabolomic and transcriptomic methods and explored the possible pathogenesis. 9 By combining targeted metabolomic and transcriptomic studies of brain tissue samples to identify abnormalities in multiple metabolic networks associated with transmethylation and polyamine pathways in Alzheimer's disease, another study significantly increased the overall understanding of the metabolic basis of AD pathogenesis and provided insights into new targets for disease-modifying therapies. 10 To investigate the characteristic pathological mechanisms of venous stroke and arterial stroke, we compared the pathological mechanisms of the two stroke subtypes by transcriptomic and metabolomic techniques, using an MCAO rat model to simulate arterial stroke and a CVT rat model to model venous stroke. Further, we explored the specific associations between transcriptomics and metabolomics through association analysis using bioinformatics tools, contributing to a comprehensive understanding of the heterogeneity of arterial and venous strokes.
| Animals
In the present study, adult Sprague Dawley rats (SD, aged 8-10 weeks, each weighing approximately 250-300 g) were purchased from Vital River (Beijing, China). Each rat was housed in a conventional environment, on a twelve-hour light/dark cycle with free access to water and food, for 1 week in the animal experiment center of Capital Medical University. The weight of each animal was controlled (300-350 g) during experiments. All animal experimental protocols were approved by the Institutional Animal Care and Use Committee of Beijing Capital Medical University.
| The permanent MCAO rat model
MCAO surgery was performed according to the reference. 11 Rats were deeply anesthetized with 4% isoflurane, subsequently maintained between 1.5% and 2% during the surgical procedure using an isoflurane vaporizer (RWD, Shenzhen, China). Under anesthesia, the rats were placed in a supine position, iodophor was used as a preoperative skin disinfectant, and a midline incision was made across the sagittal plane on the anterior neck, separating down through the gland until the sternocleidomastoid was visible. After separating down along the sternocleidomastoid, the common carotid, external carotid, and internal carotid arteries and their branches were exposed and isolated. The branch of the artery was ligated and cut. Artery clips were placed on the common carotid and internal carotid arteries, and the external carotid artery was ligated and cut to create a stump. A filament was inserted into the external carotid artery stump and then, after removal of the artery clip on the internal carotid artery, advanced into the middle cerebral artery until resistance was felt. The filament was fixed at the external carotid artery stump, and the artery clip on the common carotid artery was removed. The incision was sutured and sterilized again, and rats were housed in separate cages after surgery.
| The CVT rat model
The CVT rat model was established following a previous method. 12 Under anesthesia, the rats were placed in a prone position, iodophor was used as a preoperative skin disinfectant, an incision was made in the middle of each rat's head, and the subcutaneous tissue was separated to expose the skull. A longitudinal cranial window exposing the superior sagittal sinus (SSS) and bilateral cortex was opened using a high-speed dental drill under microscopic observation. During drilling, the drill tip was continuously cooled with normal saline to avoid thermal injury to the dura mater and cortex. After exposing the SSS and bilateral cortex, the SSS was semi-ligated rostrally and caudally using an 8-0 polyamide suture, and thrombin was then injected into the SSS (200 μL/3 min). The surgical site was irrigated with normal saline, followed by sealing of the incision and sterilizing again. Once the surgery was completed, the wounds were closed using a 6-0 surgical suture with a simple continuous pattern. Rats were allowed to recover under a heat lamp and were housed in separate cages, and antibiotic ointment was applied over the skin incision for 5 consecutive days. All efforts were made to minimize the number of animals used and their suffering. In the sham group, the animals received only skull fenestration.
| Sample preparation
The rats were randomly divided into three groups with nine rats in each: sham group, MCAO group, and CVT group. Forty-eight hours after surgery, rats in all groups were anesthetized with 1% pentobarbital sodium (40 mg/kg, intraperitoneal), then perfused with phosphate-buffered saline. The brain was then removed from rats in the sham (n = 3), MCAO (n = 3), and CVT (n = 3) groups. After removal, the brain was rinsed with 0.9% saline to remove blood and debris, connective and adipose tissue were removed, and the liquid on the surface was aspirated. Tissue samples were divided into small pieces of approximately 50 mg, snap-frozen in liquid nitrogen, and stored in sterile storage tubes at −80°C until transcriptome sequencing and metabolomics analysis were performed.
| RNA sequencing
Rat brain tissue was processed for total RNA extraction using the PicoPure RNA Isolation Kit (# KIT0202, Arcturus, CA, USA) in accordance with the manufacturer's instructions, and genomic DNA was eliminated using the DNase treatment step in the kit. The amount and quality of the extracted total RNA were then examined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA) to verify the RNA samples. To create the sequencing library, only high-quality RNA samples (OD260/280 = 1.8-2.2, OD260/230 ≥ 2.0, RIN ≥ 6.5, 28S:18S ≥ 1.0, >2 μg) were employed. The RNA-seq transcriptome library was prepared on an ABI StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, MA, USA) using 1 μg of total RNA. Based on A-T base pairing between magnetic beads carrying oligo(dT) and the polyA at the 3′ ends of mRNA, mRNA was separated from total RNA for transcriptomic analysis. A fragmentation buffer was introduced to randomly break large mRNA molecules into fragments of about 300 bp. The mRNA was then reverse-transcribed into cDNA and, after ligation to the adaptor, purified and amplified by PCR to obtain the final library. Finally, the RNA-seq sequencing library was sequenced with the DNBSEQ (SE50).
| Read mapping
The raw data contained low-quality reads, reads with adaptor sequences, and reads with a high proportion of N bases. These reads were filtered out before data analysis to ensure reliable results. HISAT was then used to align the clean reads to the reference genome. The Rattus norvegicus genome (reference genome version: GCF_000001895.5_Rnor_6.0) was chosen as the reference genome; the total genome mapping ratio ranged from 94.89% to 95.24%, and the unique genome mapping ratio ranged from 83.99% to 85.92%.
| Identification of differentially expressed genes
Using the transcripts per million mapped reads (TPM) method, we determined the expression level of each transcript to determine which genes were differentially expressed in each sample. The R statistical package DESeq2 (http://www.bioconductor.org/packages/stats/bioc/edgeR/, version 3.14.0) was utilized for differential expression analysis. In addition, GO annotation analysis and KEGG function enrichment analysis (p-adjust ≤0.05) were performed with clusterProfiler (version 4.6.0).
| Inflammation response analysis
The mouse MSigDB gene set (HALLMARK_INFLAMMATORY_RESPONSE) was used for the ssGSEA analysis. The inflammatory response score of each sample was quantified using the R packages "GSVA" and "GSEABase." Fractions of 36 immune cell types were calculated for each sample using ImmuCellAI-mouse (http://bioinfo.life.hust.edu.cn/ImmuCellAI-mouse/), which applies a hierarchical strategy to classify the 36 cell types into three layers.
| Untargeted metabolomics analysis
A total of 50 mg of each sample was accurately weighed and transferred to a 2 mL centrifuge tube containing one small steel ball. Then 400 μL of the extraction solvent (methanol/water, 4/1, v/v) and 20 μL of the internal standard (2-chloro-L-phenylalanine) at a concentration of 0.02 mg/mL were added to the centrifuge tube. After 6 min of grinding in a frozen tissue grinder (−10°C, 50 Hz) and 30 min of low-temperature ultrasonic extraction (5°C, 40 kHz), the sample was allowed to stand at −20°C for 30 min. Finally, the sample was centrifuged for 15 min, and the supernatant was transferred to a sample vial for LC-MS detection. This project used LC-MS/MS technology for untargeted metabolomics analysis, with a high-resolution Q Exactive mass spectrometer (Thermo Fisher Scientific, USA) collecting data in both positive and negative ion modes to improve metabolite coverage. LC-MS/MS data processing was performed using Compound Discoverer 3.1 (Thermo Fisher Scientific, USA), which mainly included peak extraction, peak alignment, and compound identification. Data pre-processing, statistical analysis, metabolite classification annotation, and functional annotation were performed using MetaboAnalyst. 13 The multivariate raw data were dimensionally reduced by PCA (principal component analysis) to analyze the groupings in the data set (and whether there were abnormal samples).
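As a concrete illustration of this PCA step, the sketch below autoscales log-transformed feature intensities and projects nine samples onto the first two principal components. The sample and feature values are synthetic stand-ins, and the preprocessing choices (log transform, autoscaling) are common metabolomics conventions rather than the exact MetaboAnalyst settings used here.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical intensity matrix: rows = samples, columns = metabolite features.
X = pd.DataFrame(rng.lognormal(size=(9, 500)),
                 index=[f"Sham_{i}" for i in range(3)]
                       + [f"MCAO_{i}" for i in range(3)]
                       + [f"CVT_{i}" for i in range(3)])

# Log-transform and autoscale before PCA, then keep the first two components.
Z = StandardScaler().fit_transform(np.log1p(X))
scores = PCA(n_components=2).fit_transform(Z)

# Samples far from their group's cluster in this score plot would be
# flagged as potential outliers.
for name, (pc1, pc2) in zip(X.index, scores):
    print(f"{name}: PC1={pc1:+.2f}, PC2={pc2:+.2f}")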
| Statistical analysis
Common tests for normality, including the Shapiro-Wilk test, were conducted. These tests evaluate the differences between the data sample and a normal distribution to determine whether the data follow a normal distribution. If the data failed the normality test, non-parametric equivalents such as the Wilcoxon rank-sum test or Mann-Whitney U test were used for analysis. The significance threshold was p < 0.05. ANOVA (analysis of variance) was used to test the significance of differences among the CVT, MCAO, and Sham groups. Metabolites that differed significantly (adjusted p < 0.05) were identified as differential metabolites.
| Identification of differentially expressed genes
Transcriptomic analysis was performed on the rat brains; we sequenced nine samples on the DNBSEQ SE50 platform, generating on average about 1.19 Gb of bases per sample. The average mapping ratio to the reference genome was 95.07%, the average mapping ratio to genes was 70.29%, and 19,980 genes were identified. After data quality control, a total of 214,526,500 clean reads and 10,726,325,000 clean bases were obtained. The proportion of each sample with quality scores ≥Q20 and ≥Q30 exceeded 97% and 93%, respectively, indicating the high quality of the reads (Table S1).
Principal component analysis (PCA) of gene expression profiles showed good separation between the Sham, MCAO, and CVT groups, indicating high correlation within each group and great differences between the groups (Figure 1A). A total of 1372 differentially expressed genes (DEGs; FDR-adjusted p-value <0.05, fold change >1.5 or <0.67), including 1149 upregulated and 223 downregulated genes, were obtained in CVT versus Sham, and 4666 DEGs, including 2793 upregulated and 1873 downregulated genes, were obtained in MCAO versus Sham (Figure 1B, Table S2). The intersection showed that 1042 upregulated and 156 downregulated genes were shared between CVT versus Sham and MCAO versus Sham. In total, 174 CVT-specific DEGs were identified, of which 107 genes were upregulated and 67 downregulated only in CVT versus Sham, but not in MCAO versus Sham (Figure 1C,D). Among them, Spp1, Csf2rb, Gfap, Xpnpep3, and Mki67 were significantly upregulated in CVT versus Sham; importantly, these genes have been reported to be associated with immune cell activation, proliferation, and glial cell activation (Figure 1D). Tubb2b, Hspb1, Trh, Timp1, and Fbln2 were upregulated DEGs in MCAO versus Sham, and these genes have been reported to be associated with extracellular matrix degradation and cell apoptosis (Figure 1E). The expression values of 100 CVT-specific DEGs are shown in Figure 1F.
| Pathway and function enrichment of DEGs
We performed a detailed analysis of the DEGs of the two comparisons, including Gene Ontology (GO) and KEGG pathway analysis. In the CVT versus Sham comparison, the top 15 significantly enriched KEGG pathways included cytokine-cytokine receptor interaction, viral protein interaction with cytokine and cytokine receptors, and pathways in cancer (Figure 2A). The top 20 GO enrichment terms showed that, in the molecular function category, most DEGs were associated with binding, catalytic activity, and molecular transducer activity; in the cellular component category, the dominant subcategories were cell, cell part, and organelle.
Among the biological processes, cellular process, single-organism process, and biological regulation were most enriched (Figure 2C). In the MCAO versus Sham comparison, the top 15 significantly enriched KEGG pathways included cytokine-cytokine receptor interaction, viral protein interaction with cytokine and cytokine receptors, and osteoclast differentiation (Figure 2B), and the top 20 GO enrichment terms were consistent with the results of CVT versus Sham (Figure 2D).
| CVT-specific genes and pathways
Function and pathway enrichment analyses of the 174 CVT-specific DEGs, based on GO and the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, showed that CVT-specific genes were mainly enriched in African trypanosomiasis, cytokine-cytokine receptor interaction, and the malaria pathway (Figure 3A,C), whereas the MCAO group was mainly enriched in oxygen carrier activity, oxygen binding, molecular carrier activity, and iron ion binding (Figure 3B,D).
Gene-level analysis showed that the expression values of the interleukin-family gene Il1a and the chemokine genes Ccl9 and Cxcl6 were significantly higher in CVT (Figure 3E). Il1a is mainly involved in various immune responses, inflammatory processes, and hematopoiesis. On chromosome 2, this gene and eight other genes of the interleukin 1 family form a cytokine gene cluster, and polymorphisms of these genes have been linked to rheumatoid arthritis and Alzheimer's disease in some studies. 14 These genes play important roles in innate and adaptive immunity and are involved in T cell-related responses. It was also found that the expression values of the tumor necrosis factor receptor superfamily genes Tnfrsf8, Tnfrsf9, Tnfrsf18, and Tnfrsf14 showed no significant difference between the MCAO and Sham groups, while they tended to be higher in the CVT group. Except for Tnfrsf18, the expression values of the Tnfrsf8, Tnfrsf9, and Tnfrsf14 genes in the CVT group were significantly higher than those in the MCAO group (Figure 3E). Members of this superfamily are mainly expressed in T cells and B cells and participate in T cell- and B cell-related inflammatory immune responses. On the contrary, expression values of the hemoglobin-family genes Hba-a1, Hba-a2, Hba-a3, Hbb-b1, Hbb-bs, and Hbb were lower in the CVT group, and significantly lower than in the MCAO group (Figure 3E). The genes of this family are mainly involved in the innate immune system, O2/CO2 exchange in erythrocytes, and cellular responses to stimuli; GO annotations related to these genes include iron ion binding and oxygen binding. Using the ssGSEA method, the hallmark inflammatory GSVA z-score was quantified, showing that both CVT and MCAO rats had a heightened inflammatory response (Figure 4A). ImmuCellAI-mouse (Immune Cell Abundance Identifier for mouse) was used to deconvolute the abundances of 36 cell types, showing the differences among the CVT, MCAO, and Sham groups (Table S3). Figure 4B summarizes the abundance profiles of the 7 major immune cell types of the first layer derived from ImmuCellAI-mouse. In detail, CVT and MCAO exhibited substantially higher levels of T cells and macrophages and decreased proportions of NK cells. The abundance of T cell subtypes, including T helper cells, was higher in CVT (Figure 4C,D); the abundances of the other cell types are shown in Figure S1A,B. GSEA analysis showed that CVT had significant activation of immune pathways, including cytokine-cytokine receptor interaction, the chemokine signaling pathway, the NOD-like receptor signaling pathway, neutrophil extracellular trap formation, necroptosis, and apoptosis (Figure 4E).
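The DEG selection and the "CVT-specific" set above reduce to simple threshold and set operations. The sketch below applies the cutoffs used in this study (FDR-adjusted p < 0.05, fold change > 1.5 or < 0.67) to hypothetical DESeq2-style result tables; the table layout and column names are assumptions for illustration.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
genes = [f"gene{i}" for i in range(2000)]

def fake_results():
    # Hypothetical DESeq2-style result table; column names are assumptions.
    return pd.DataFrame({"padj": rng.uniform(0, 1, len(genes)),
                         "fold_change": rng.lognormal(0, 0.5, len(genes))},
                        index=genes)

def deg_sets(res):
    # Thresholds from the Results: FDR-adjusted p < 0.05 and
    # fold change > 1.5 (up) or < 0.67 (down).
    sig = res[res["padj"] < 0.05]
    return (set(sig.index[sig["fold_change"] > 1.5]),
            set(sig.index[sig["fold_change"] < 0.67]))

cvt_up, cvt_down = deg_sets(fake_results())
mcao_up, mcao_down = deg_sets(fake_results())

# Disease-specific DEGs: regulated in CVT vs Sham but not in MCAO vs Sham.
cvt_specific = (cvt_up - mcao_up) | (cvt_down - mcao_down)
print(len(cvt_up), len(cvt_down), len(cvt_specific))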
| Metabolite analysis
In this study, non-targeted metabolomics was used to analyze the differential metabolites (Table S4). PCA was also performed for the metabolome (Figure 5A). The histogram shows the up- and downregulation of differential metabolites in CVT versus Sham and MCAO versus Sham (Figure 5B, Tables S5 and S6). The Venn diagram showed that, compared with the Sham group, the CVT group had 110 significantly upregulated and 63 downregulated metabolites, and the MCAO group had 244 significantly upregulated and 80 downregulated metabolites (Figure 5C,D). The volcano plots show the up- and downregulation of differential metabolites in CVT versus Sham and MCAO versus Sham (Figure 5E,F). A one-way ANOVA test was also performed to select the differential metabolites across the three groups (Figure 5G, Table S7; one-way ANOVA, p < 0.05). To explore the potentially affected metabolic pathways, we further analyzed all DMs based on KEGG annotations. The most enriched pathways in the CVT group compared with the Sham group were alanine, aspartate, and glutamate metabolism; cysteine and methionine metabolism; tryptophan metabolism; starch and sucrose metabolism; and d-glutamine and d-glutamate metabolism (Figure 6A). The most enriched pathways in the MCAO group were purine metabolism; pyrimidine metabolism; histidine metabolism; beta-alanine metabolism; and phenylalanine, tyrosine, and tryptophan biosynthesis (Figure 6B). Based on the one-way ANOVA analysis, the levels of N-stearoyltyrosine, 5-methoxy-3-indoleacetate, 3-hydroxy-n-[(3s)-2-oxotetrahydro-3-furanyl]octanamide, miglitol, afegostat, pipecolic acid, 5-hydroxyindoleacetate, hypotaurine, l(−)-carnitine, Asn-Val, etc., were significantly increased in the CVT group, but not in MCAO (Figure 6D,E). Figure S2 shows the test results for these metabolites. It has been demonstrated that indole-3-acrylic acid reduces DNA damage and lipid peroxidation, protecting neurons from ischemia-induced damage. 15 A tacrine-8-hydroxyquinoline hybrid has the potential to treat AD-related brain damage and may have a significant beneficial effect on neurodegenerative diseases. 16 The 8-hydroxyquinolines, as a class of compounds, have been found to have the potential to treat a number of neurodegenerative diseases. 17
Currently, little is known about the mechanisms of CVT, especially the immune response. [21][22][23][24] This study found a more pronounced state of immune activation in brain tissue after venous stroke than after arterial stroke in rat models, which is consistent with previous studies.
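The metabolite screen described above (a normality check followed by a parametric or non-parametric group test) can be sketched as follows for a single metabolite. The intensity values are synthetic, and Kruskal-Wallis is used as the three-group analogue of the two-group Wilcoxon/Mann-Whitney tests named in the Statistical analysis section.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical intensities of one metabolite in the three groups (n = 3 each).
sham = rng.lognormal(0.0, 0.2, 3)
mcao = rng.lognormal(0.1, 0.2, 3)
cvt = rng.lognormal(0.8, 0.2, 3)

# Normality screen (Shapiro-Wilk) on the pooled values.
normal = stats.shapiro(np.concatenate([sham, mcao, cvt])).pvalue > 0.05

if normal:
    # One-way ANOVA across the three groups.
    p = stats.f_oneway(sham, mcao, cvt).pvalue
else:
    # Non-parametric fallback for more than two groups.
    p = stats.kruskal(sham, mcao, cvt).pvalue

print("differential" if p < 0.05 else "not significant", round(p, 4))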
25 A better understanding of the mechanisms after CVT is beneficial for early prevention and treatment. In the present study, we investigated the transcriptional profiles of rat brains after CVT and after MCAO and identified the upregulated and downregulated DEGs. The CVT-specific DEGs were clearly enriched in the inflammatory response and in oxygen carriers. In addition, cell enrichment and immune infiltration analyses found that several inflammatory cell types, such as NK cells, macrophages, and T cells, especially T helper cells, infiltrated the brain more significantly after CVT. Accordingly, our results demonstrated that several genes involved in the inflammatory response, including the well-known proinflammatory genes Il1a, Il23, Cxcl6, and Cxcl1, and genes specifically expressed in immune cells, such as Tnfrsf9 and Tnfrsf11b, were higher in CVT, which may provide potential diagnostic and treatment targets for CVT. There was also a substantial upregulation of immune pathways in CVT, including cytokine-cytokine receptor interaction, the chemokine signaling pathway, the NOD-like receptor signaling pathway, neutrophil extracellular trap formation, necroptosis, and apoptosis. A recent study in mouse CVT models found that the levels of phosphorylated NF-κB p65, ROS, and TXNIP are distinctly raised post-CVT, which synergistically contributes to activation of the NLRP3 inflammasome and thereby plays a proinflammatory role. 25 The present study also found significant changes in the inflammatory response after venous stroke in a rat CVT model, complementing the understanding of the inflammatory response after CVT.
Studies have shown that hypoxia is a major post-stroke crisis in stroke patients, and continuous provision of oxygen to improve the poor microenvironment may effectively protect neurons from ischemic brain damage. 26 Wang et al. developed a nano-photosynthesis biosystem to generate oxygen and promote angiogenesis and brain tissue repair, thus rescuing ischemic neurons and achieving the purpose of stroke treatment. 27 Early studies have shown that hyperbaric oxygen therapy can significantly improve neurological function in stroke patients. 28 Hyperbaric oxygen therapy is used to treat various neurological disorders associated with hypoxia, such as AD, ischemia, and traumatic brain injury. 29 A previous study suggested a connection between neuroactive ligand-receptor interaction and α-synuclein, which plays a role in brain miRNA dysregulation. 30 Neuroactive ligand-receptor interaction was also found to be a potential therapeutic target of Tao-Hong-Si-Wu in treating middle cerebral artery occlusion. 31 Over 100 genes involved in neuroactive ligand-receptor interaction and cytokine-cytokine receptor interaction were found to undergo significant pathway changes in research on the mechanism of tauopathy in autism. 32 Cytokine-cytokine receptor interaction plays a crucial role in immune and inflammatory responses to diseases. In the brain, cytokine action is also influenced by the interaction of cytokines with hormones, peptides/neuropeptides, and neurotransmitters. During health and disease, cytokines act as immunomodulators and neuromodulators in the central nervous system, and their direct effects on neuronal cells can increase or decrease neuronal activity. 33 Under certain pathophysiological conditions, cytokine cascades can lead to neurotoxicity and neurodegeneration. 34 The cytokine-cytokine receptor interaction pathway was also found to be enriched in a study of the major pathways influenced in neurodegenerative diseases. 35 In addition, our results revealed that the hemoglobin genes, including both Hba and Hbb, were significantly downregulated after CVT compared with both the MCAO and Sham groups. These results suggest that post-CVT pathological mechanisms may involve oxygen binding and carrying as well as systemic inflammation, and they need to be clarified in the future.
Changes in the metabolome have been linked to several neurodevelopmental and neurodegenerative disorders.The current study includes a comprehensive profiling of the metabolites from CVT, MCAO, and Sham samples.We observed that specific metabolites and metabolic pathways including alanine, aspartate, and glutamate metabolism and cysteine and methionine metabolism were altered in CVT, and suggest significant heterogeneity in venous stroke relative to arterial stroke in terms of metabolic characteristics. In summary, the present study provided an extensive analysis of DEGs and DEMs and revealed a series of targets and pathways involved in the inflammatory response and oxygen binding and carrying, and metabolism.These findings add significant insights into the pathogenesis mechanism of CVT and expand our understanding of the heterogeneity of venous stroke. After 6 min of grinding in a frozen tissue grinder (−10°C, 50 Hz) and 30 min of low-temperature ultrasonic extraction (5°C, 40 KHz), the sample was allowed to stand at −20°C for 30 min.Finally, the sample was centrifuged for 15 min, and the supernatant was transferred to a sample vial for LC-MS detection.This project uses LC-MS/MS technology for untargeted metabolomics analysis, using high-resolution mass spectrometer Q Exactive (Thermo Fisher Scientific, USA) to collect data from both positive and negative ions to improve metabolite coverage.LC-MS/ MS data processing was performed using The Compound Discoverer 3.1 (Thermo Fisher Scientific, USA) software, which mainly included peak extraction, peak alignment, and compound identification. summarizes the abundance profiles of 7 major immune cells of the first layer derived from ImmuneAImouse.In detail, CVT and MCAO exhibited substantially higher levels of T cell and macrophages and decreased proportions of NK cells.The abundance of T cell subtypes F I G U R E 1 (A) Plots of Principal component analysis (PCA) of gene expression profiles of the three groups of CVT group, Sham group, and MCAO group.(B) The number of differentially expressed genes (DEGs, FDR adjusted p < 0.05) in the CVT group versus Sham group and MCAO group versus Sham.(C) Upset chart shows the intersections of the up and down DEGs between the two comparisons of the CVT group versus Sham group and MCAO group versus Sham genes.(D, E) The volcano plot shows DEGs of the CVT group versus the Sham group (D) and MCAO group versus Sham (E).Green dots indicate the upregulated genes and red ones are downregulated genes.(F) Heatmap of the CVT-specific DEGs expression TPMs in three groups.Red rectangles represent the upregulated genes and blue ones are downregulated genes.The color depth means the significance of the difference.including T helper cells was higher in CVT (Figure 4C,D).The abundance of the other types of cells is shown in (Figure S1A,B). F I G U R E 2 The top 15 enriched KEGG pathways of DEGs in the CVT group versus Sham group (A) and MCAO group versus Sham (B).The color means the adjusted p-value and the area of bubbles shows the number of DEGs enriched.Gene Ontology (GO) enrichment analysis of DEG in CVT group versus Sham group (C) and MCAO group versus Sham (D) of the three GO categories.The dot color means the adjusted p-value and the area of bubbles shows the number of DEGs enriched.The dot shapes represent the three GO term categories.| 7 of 12 KUI et al. 
FIGURE 4 (A) Violin plot indicating the inflammation differences in the three groups, showing the median, quartiles, and kernel density estimates of the immune response score. (B, C) The relative proportions of seven major immune cell types (B) and T-cell subtypes (C) in each sample. (D) The mean value of each cell subset, including NK cells, macrophages, T cells, and T helper cells, calculated for each group and compared (one-way ANOVA, Tukey post hoc tests; *p < 0.05; **p < 0.01; ***p < 0.001). (E) GSEA analysis revealing the enriched KEGG pathways related to the inflammatory response in CVT.
FIGURE 5 (A) Principal component analysis (PCA) plots of the metabolism profiles of the three groups: CVT, Sham, and MCAO. (B) The number of differential metabolites (DEMs; p-value <0.05, fold change >1.2 or <0.83) in CVT versus Sham and MCAO versus Sham. (C) UpSet chart showing the intersections of the up- and downregulated DEMs between the two comparisons, CVT versus Sham and MCAO versus Sham. (D, E) Volcano plots showing DEMs of CVT versus Sham (D) and MCAO versus Sham (E); green dots indicate upregulated metabolites and red dots downregulated metabolites. (F) Heatmap of DEMs derived from the one-way ANOVA test of the three groups (one-way ANOVA, p < 0.05); red rectangles represent upregulated metabolites and blue ones downregulated metabolites, and the color depth indicates the significance of the difference.
FIGURE 6 Pathway enrichment analysis of DEMs in CVT versus Sham (A), MCAO versus Sham (B), and CVT-specific DEMs (C). (D) Heatmap of metabolites increased in CVT derived from the one-way ANOVA test of the three groups; red rectangles represent upregulated metabolites and blue ones downregulated metabolites, and the color depth indicates the significance of the difference. (E) Mean values of CVT-specific DEMs in CVT versus Sham (one-way ANOVA, Tukey post hoc tests; *p < 0.05; **p < 0.01; ***p < 0.001).
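The per-sample inflammatory score shown in Figure 4A can be approximated, in spirit, by a rank-based single-sample signature score. The sketch below computes a simplified stand-in (the mean expression rank of signature genes), not the exact ssGSEA statistic implemented in GSVA, and the gene set and expression values are synthetic.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
genes = [f"gene{i}" for i in range(1000)]
expr = pd.DataFrame(rng.lognormal(size=(1000, 9)), index=genes,
                    columns=[f"S{i}" for i in range(9)])
signature = genes[:50]  # hypothetical inflammatory gene set

# Rank genes within each sample, then average the percentile ranks of the
# signature genes; a higher score means the signature is more highly expressed.
ranks = expr.rank(axis=0, pct=True)
score = ranks.loc[signature].mean(axis=0)
print(score.round(3))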
6,517
2023-10-30T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Tetrahydrobiopterin deficiencies: Lesson from clinical experience Abstract Objectives The present study describes the clinical, biochemical, and molecular genetic data, current treatment strategies, and follow-up in nine patients with tetrahydrobiopterin (BH4) deficiency due to various inherited genetic defects. Methods We analyzed clinical, biochemical, and molecular data of nine patients with suspected BH4 deficiency. All patients were diagnosed at Ege University Faculty of Medicine in Izmir, Turkey, and the data were collected from 2006 to 2019. The diagnostic laboratory examinations included blood phenylalanine and urinary or plasma pterins, dihydropteridine reductase (DHPR) enzyme activity measurement in dried blood spots, folic acid and monoamine neurotransmitter metabolites in cerebrospinal fluid, as well as DNA sequencing. Results Among the nine patients, we identified one with autosomal recessive GTP cyclohydrolase I (ar GTPCH) deficiency, two with 6-pyruvoyl-tetrahydropterin synthase (PTPS) deficiency, three with sepiapterin reductase (SR) deficiency, and three with DHPR deficiency. Similar to previous observations, the most common clinical symptoms were developmental delay, intellectual disability, and movement disorders. All patients received treatment with l-dopa and 5-hydroxytryptophan, while only the ar GTPCH, the PTPS, and one DHPR deficient patient were additionally supplemented with BH4. The recommended dose range varies among patients and depends on the type of disease. The consequences of BH4 deficiencies are quite variable; however, early diagnosis and treatment improve outcomes. Conclusions As BH4 deficiencies are a rare group of treatable neurometabolic disorders, it is essential to diagnose the underlying (genetic) defect in newborns with hyperphenylalaninemia. Irreversible brain damage and progressive neurological deterioration may occur in untreated or late-diagnosed patients. Prognosis can be satisfactory in cases with early diagnosis and treatment.
Despite the differences in biochemical properties, similar clinical features of variable severity are seen in patients: poor sucking, floppy infant, truncal hypotonia, hypertonia of the extremities, swallowing difficulties, myoclonic seizures, oculogyric crises, behavioral problems, intellectual disability, and a poor response to a phenylalanine (Phe)-restricted diet can be seen. Hyperphenylalaninemia (HPA) occurs in deficiencies of autosomal recessive (ar) GTPCH (OMIM: 233910), PTPS (OMIM: 261640), DHPR (OMIM: 261630), and PCD (OMIM: 264070), whereas deficiencies of SR (OMIM: 612716), autosomal dominant (ad) GTPCH (OMIM: 128230), and some ar GTPCH (OMIM: 233910) present without HPA. 5,6 The primary objective of this retrospective study is to describe the demographics, diagnosis, and clinical, biochemical, and molecular data, as well as the treatment and follow-up, of patients with BH4 deficiency. We believe that our study supports an expanding phenotypic spectrum and the optimization of treatment in BH4 deficiency. The patients' information was not included in any prior study.
| PATIENTS AND METHODS
This retrospective single-center study included patients with BH4 deficiency followed up from 2006 to 2019. Demographic and clinical features, laboratory and radiological findings, and treatment and follow-up information of nine patients with BH4 deficiency were collected. Anthropometric measurements were compared with those of healthy children of the same age and gender.
7 First-admission Phe levels (measured by a high-performance liquid chromatography [HPLC] method (Immuchrom GmbH), in which separation was performed isocratically with UV detection), the blood pterin profile, 8 DHPR activity in dried blood spots, 9 plasma prolactin levels (measured by electrochemiluminescence immunoassay, "ECLIA" [Roche Diagnostics GmbH]), and cerebrospinal fluid (CSF) pterins and biogenic amine analyses 8,10 were evaluated. The neurotransmitter metabolites 5-hydroxyindoleacetic acid (5-HIAA) and homovanillic acid (HVA) were measured to determine disease severity and monitor therapy. A BH4 loading test was performed in patients with a baseline blood Phe concentration >10 mg/dL, by measuring Phe levels at 0, 4, 8, and 24 hours after ingestion of 20 mg/kg sapropterin. 4 The patients' diagnoses were confirmed by genetic mutation analysis. Information related to the signs and symptoms of BH4 deficiency, treatment outcomes and adverse effects, and magnetic resonance imaging (MRI) and electroencephalography (EEG) reports was collected. Intellectual status was assessed with the Wechsler Intelligence Scale for Children-Revised (WISC-R), and developmental status with the Ankara Developmental Screening Inventory (ADSI). 11
Genomic PCR and Sanger sequence analysis of genomic DNA isolated from blood samples were performed. We tested all exons of the corresponding genes, that is, GCH1, PTS, SPR, and QDPR, plus their flanking intronic regions, with the following genomic reference sequences (cDNA in parentheses): ENSG00000131979 (ENST00000491895.7), ENSG00000150787.3 (ENST00000280362.3), ENSG00000116096 (ENST00000280362.3), and ENSG00000151552.6, respectively, and cDNA reference sequences ENST00000280362.3, ENST00000234454.6, and ENST00000281243.5, respectively.
| Patients
Nine patients (8/1, F/M) from eight families were included in the study. Seven (77%) individuals in our cohort were born to consanguineous parents. One (11%) was diagnosed with ar GTPCH deficiency, two (22%) with PTPS deficiency, three (34%) with SR deficiency, and the other three (34%) with DHPR deficiency. The most frequent defects were SR and DHPR deficiencies. All three SR deficiency patients were offspring of consanguineous first-cousin marriages, that is, two siblings and their cousin from the same extended family, so they carried the same mutation. All our patients with PTPS and DHPR deficiency had severe phenotypes. None of them had a history of prematurity. Mean birth weight was 2804 g (SD ± 471). All patients had normal head circumference and height at birth. Three patients had low birth weight: one with PTPS deficiency and two with DHPR deficiency. Mean age at diagnosis was 23.50 months (n = 6, SD ± 41.82) for BH4 deficiency with HPA, and the median age at diagnosis was 11.5 years (min: 1 to max: 19.6) for SR deficiency. The mean age at first clinical symptoms for BH4 deficiency with HPA was 3.75 months (SD ± 2.53), while the median age for SR deficiency was 8 months (min: 6 to max: 9). Newborn screening (NBS) was performed in all patients except patient (P) 6. HPA was detected in six of eight patients by NBS, and these patients were referred to our center. In all six patients with HPA, the high initial Phe levels were confirmed by HPLC. The first-admission Phe levels determined at our center are presented in Table 1.
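The readout of the loading test is simply the percent fall of blood Phe relative to baseline at each sampling time. A minimal sketch with a hypothetical time course (for scale, the responses reported in the Results below range from about 51%-55% in DHPR deficiency to over 96% in PTPS deficiency at 8 hours):

def phe_reduction(phe_baseline, phe_t):
    # Percent decrease in blood Phe relative to the 0-hour baseline.
    return 100.0 * (phe_baseline - phe_t) / phe_baseline

# Hypothetical blood Phe time course (mg/dL) after 20 mg/kg sapropterin.
course = {0: 14.0, 4: 6.5, 8: 0.5, 24: 0.4}
for hour in (4, 8, 24):
    print(f"{hour} h: {phe_reduction(course[0], course[hour]):.1f}% decrease")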
Three of the six patients with HPA were diagnosed before the age of 6 months, and treatment was started. Two patients (P3, P8) were diagnosed by routine blood pterin analysis at the first admission. Differential diagnosis of PKU was performed in P2 due to a suspicious BH4 loading test. P7, P9, and P1 were diagnosed at the ages of 18 months, 9 years, and 7 months, respectively. The detailed clinical characteristics of the patients are presented in Table 1. P7 and P9 did not continue follow-up after the first admission prompted by newborn screening. P7 was referred by a general practitioner at the age of 18 months because she was unable to walk; at the last visit, she had a slight intellectual disability. P9 was referred by a neurologist at the age of 9 when she developed neurological symptoms and was diagnosed at 9 years old. She had global developmental delay and was unable to stand, walk, or even sit without assistance. At the last visit she had axial hypotonia and hypertonia of the extremities and was fed via a gastrostomy tube due to swallowing difficulties. She was on antiepileptic drugs and had unilateral movement restriction of the right arm. P1 was detected with HPA during newborn screening; however, he first presented after the onset of neurological symptoms at the age of 7 months. He had severe hypotonia, seizures, and oculogyric spasms and was unable to hold his head up on admission. At the last visit he had hypotonia, could sit unsupported, and showed decreased oculogyric spasms. Hypersalivation, apathy, dysarthria, and gait disturbance were exacerbated in P8 when blood Phe levels were high; clinical symptoms worsened when blood Phe levels were over 6 mg/dL. All three patients with SR deficiency were diagnosed after the appearance of neurological symptoms. P4 had oculogyric spasms, generalized hypotonia, and hypersomnia, and was unable to sit unsupported or hold her head up at the first admission; her clinical symptoms worsened in the evening before treatment. P5 and P6 had movement disorders. The most common clinical symptoms were developmental delay, intellectual disability, and movement disorders (Figure 1). At the last visit, anthropometric measurements, including height, weight, and head circumference, were normal in seven (77.7%) of the nine patients. Microcephaly was evident in P1 and P9 (22.2%).
Table footnotes: 1 mg/kg/day L-dopa was initiated in four divided doses, with dose increases of 1 to 2 mg/kg/day at 2-to-4-week intervals. 1 mg/kg/day 5-HTP was initiated in four divided doses, with dose increases of 1 to 2 mg/kg/day at 2-to-4-week intervals.
| Biochemistry
The mean blood Phe level at first admission for BH4 deficiencies with HPA was 11.4 mg/dL (SD ± 6.5, min: 2.9, max: 22.1). Before the start of treatment, blood pterin levels were measured and CSF analysis was performed; CSF 5-HIAA and HVA levels were significantly low. Molecular analyses were performed in all patients. In the ar GTPCH deficiency patient, low biopterin and neopterin levels were detected in both plasma and CSF. In PTPS deficiency, high neopterin and low biopterin levels were detected in the CSF analysis of one patient. In the SR deficiency patients, biopterin and neopterin levels were normal in CSF, and HVA and 5-HIAA concentrations were significantly low in both patients analyzed. In DHPR deficiency, all patients had normal pterin levels; DHPR activity was absent in two of the three patients and very low in one.
Due to the suspicious decrease in the BH4 loading test, blood pterins were analyzed in P2 during the presymptomatic period, revealing high neopterin, low biopterin, and normal DHPR activity. Neurological symptoms developed during follow-up. CSF analysis could not be performed because patient consent was not given. All patients had high prolactin levels before treatment. The biochemical and molecular data are tabulated in Table 2.
| BH4 loading test
The BH4 loading test was performed in five patients. The patient with ar GTPCH deficiency (P6) had a 75% decrease in blood Phe levels 8 hours after oral BH4 administration. The two patients with PTPS deficiency had 96.4% and 98.1% decreases in blood Phe levels, respectively, over the same period. Blood Phe levels decreased by 51% and 55% in the two patients with DHPR deficiency, respectively.
| Central nervous system evaluation
Cranial MRI was performed in all patients. In patients with BH4 deficiency with HPA, the most common finding on cranial MRI was hyperintensity on T2-fluid-attenuated inversion recovery sequences in the bilateral cerebral white matter. In addition, diffusion restriction was detected in P1. MRI was normal in all patients with SR deficiency. When the EEGs of eight patients were evaluated, a nonspecific EEG abnormality was detected in seven (87.5%). Mild intellectual disability was detected in the one patient evaluated with the WISC-R, and variable degrees of developmental delay were detected in the four patients evaluated with the ADSI.
| Medical and diet treatment
All patients received L-dopa (in combination with benserazide) and 5-hydroxytryptophan (5-HTP). One patient with ar GTPCH deficiency and three patients with DHPR deficiency received a Phe-restricted diet; diet therapy was not required in patients with PTPS deficiency. All BH4 deficiency patients with HPA, except P7 and P9, received sapropterin treatment. About 15 mg/day of folinic acid was administered to all patients with DHPR deficiency. The treatments and doses of the patients are detailed in Table 3. The optimal therapeutic level was adjusted according to the clinical response and plasma prolactin levels. One month after the start of treatment, P4 was able to hold her head up and sit unsupported, with a significant decrease in oculogyric spasms and hypotonia. Melatonin was administered for 1 month due to sleep disturbance. Six months after the start of treatment, she was able to walk alone; at the last visit she had a mild speech delay. Although the L-dopa dose range in P5 and P6 with SR deficiency was 2 to 4 mg/kg/day, dopa-induced dyskinesia was observed. Since plasma prolactin levels were high, a dopamine agonist (amantadine) was added to the treatment without decreasing the dose of L-dopa; a dramatic improvement in the patients' dyskinesia was observed after amantadine. With L-dopa and 5-HTP, vomiting occurred in one patient and diarrhea developed in another. The vomiting disappeared after 2 weeks without any intervention, and the diarrhea regressed with dietary changes.
| DISCUSSION
BH4 deficiencies are a heterogeneous group of pediatric neurometabolic disorders. The estimated worldwide incidence of BH4 deficiencies is 1 to 2% of all patients with HPA (1:10 000). 12,13 In Turkey, the incidence of HPA is 1/4500 and BH4 deficiency accounts for 2% of all patients with HPA, due to high rates of consanguineous marriage. 14,15 In our study, consanguinity was reported in six of eight (75%) families.
Similarly, Coskun et al reported a consanguinity rate of 85.7% (12/14) in BH4 deficiency patients with HPA in Turkey. 16 There are geographical differences in the frequency of the individual defects. Ye et al reported that, among 256 cases of BH4 deficiency, 96% had PTPS deficiency, 2.4% DHPR deficiency, and 1.6% ad GTPCH deficiency. 17 Opladen et al reported that PTPS deficiency is the most frequent of all BH4 deficiencies (54%), followed by DHPR deficiency (33%). 10 In contrast to international studies, the most common BH4 deficiency in Turkey is DHPR deficiency. 10,16,17 Turgay et al reported that DHPR deficiency accounts for approximately 75% of the biopterin deficiencies in Turkey. 16 In our study, the frequency of DHPR deficiency (34%) was the same as that of SR deficiency (34%) (all SR patients were from the same family: two siblings and their cousin). The literature reports a high incidence of prematurity and low birth weight in the severe form of PTPS deficiency. 10,18 In our study, however, two of the three low-birth-weight patients were DHPR deficient and one was PTPS deficient, suggesting that severe forms might have intrauterine effects. Opladen et al previously reported that the most common symptoms were developmental delay, abnormal muscle tone, and convulsions 13 ; the findings of our cohort (developmental delay, intellectual disability, and movement disorders) support these data. Interestingly, in our cohort, microcephaly was detected in two patients [ar GTPCH (P1) and DHPR (P9) deficiencies], as reported in the literature. 6,16,17 According to the literature, however, microcephaly is more common in PTPS and DHPR deficiency. It is known that diagnosis in the neonatal period is crucial and that the age at treatment initiation determines the prognosis. 19 In late-diagnosed or untreated BH4 deficiencies, progressive neurological deterioration develops from the infantile period; benign PCD deficiency is the exception. Coskun et al reported two cases with a poor clinical course despite early diagnosis and treatment, one of whom died within 2 years. 16 It is not yet known whether early treatment can completely prevent developmental delay in all patients with BH4 deficiency. In our study, developmental outcome was relatively poor in those treated after 6 months of age. Therefore, in the presence of unexplained neurological findings, cerebral palsy, or developmental delay, BH4 deficiency should be considered. To date, prognosis has been satisfactory in the cases with early diagnosis and treatment. Symptoms may be exacerbated by increased levels of phenylalanine in BH4 deficiency patients with HPA: high phenylalanine levels affect the transport of neurotransmitter precursors across membranes and/or result in competitive inhibition of tyrosine and tryptophan hydroxylases. Hypersalivation, apathy, dysarthria, and gait disturbance were exacerbated in DHPR deficiency when blood Phe levels were high; these patients' blood Phe levels are therefore monitored at close intervals. Diurnal fluctuations can be seen in symptoms, which seem to worsen in the evening. 13,20 The clinical symptoms of SR deficiency worsened in the evening before treatment. Aiming to expand the phenotype-genotype correlation, we note that in ar GTPCH deficiency a novel missense mutation, c.614T > C, resulting in a p.V205G substitution, was identified in the GCH1 gene. This patient nevertheless had low biopterin and neopterin levels in both plasma and CSF, the typical pattern. The homozygous mutations
c.200C > T (p.Thr67Met) and c.84-3C > G, previously reported as severe-form mutations from Albania, Italy, and Iran, were detected in our PTPS deficiency patients, who had normal neopterin and low biopterin levels in plasma. 21,22,23,24 In our cohort, all SR deficiency patients had normal CSF pterins, but HVA and 5-HIAA were significantly decreased. The homozygous c.448A > G (p.Arg150Gly) mutation, previously reported as common in the Mediterranean region, was detected in our patients. 25 No clear genotype-phenotype correlation is apparent in SR deficiency. DHPR deficiency was confirmed by the c.105C > G (p.Trp35Cys), c.661C > T (p.Arg221Ter), and c.291delC (p.Lys98Serfs*9) homozygous mutations detected in the QDPR gene. The c.105C > G (p.Trp35Cys) mutation has not been previously published. The c.661C > T (p.Arg221Ter) mutation was previously suggested to be common in patients of Mediterranean origin and also in Turkey. 26,27,28 The c.291delC (p.Lys98Serfs*9) homozygous mutation identified in this study is predicted to cause premature termination of protein synthesis. The patient with the c.661C > T (p.Arg221Ter) mutation had the most severe clinical features, such as severe developmental delay, growth retardation, microcephaly, focal neurological deficit, swallowing difficulties, and epilepsy. However, due to the late diagnoses, it would not be appropriate to draw a genotype-phenotype correlation. Epilepsy is reported more frequently in DHPR deficiency than in the other biopterin deficiencies. 2,13 Supporting these data, two of our three late-diagnosed DHPR patients showed severe developmental delay. Although not usually associated with clinical impairment, abnormal EEGs can reveal the presence of CNS dysfunction, so EEG evaluation is an important assessment in these diseases. 10,29 The most common finding on our patients' MRI was hyperintense lesions on T2-weighted images (5/9), consistent with literature reports of hyperintense lesions in the periventricular white matter. 30,23,31,32 Interestingly, in our cohort, calcifications in the basal ganglia were not detected, even in DHPR deficiency, which is particularly associated with them. 29,30 Although Friedman et al reported that some SR deficiency patients might have had hyperintense lesions before treatment, cranial MRI was generally normal in most patients with SR deficiency, similar to our MRI reports. 33 All patients received L-dopa and 5-HTP. BH4 was used for treatment in ar GTPCH and PTPS deficiencies; we administered BH4 treatment to four (66%) of the six patients with HPA. DHPR deficiency has a special position because there are insufficient data to support the efficacy and safety of BH4 in DHPR deficiency patients, due to the accumulation of 7,8-dihydrobiopterin. 2,4 Repeated CSF analysis of HVA and 5-HIAA is recommended for therapeutic monitoring; however, CSF sampling is invasive and often difficult to perform. Moreover, Jaggi et al reported that they did not find a correlation between CSF 5-HIAA and HVA values and clinical outcome. 19 Replacement therapy for dopamine is more difficult than adjusting the dose of serotonin, due to its short half-life and adverse effects. Monitoring morning serum prolactin levels, as recommended, combined with clinical findings was a successful way of optimizing the L-dopa dosage. 34,35,36,37 This may also indicate the importance of monitoring CSF folates and of folinic acid substitution in patients with PTPS deficiency.
Folinic acid was started in one of our PTPS deficiency patients at the age of seven, due to movement disorders despite high-dose L-dopa and 5-HTP treatment. After folinic acid was started, the patient's movement disorder decreased appreciably. The limitations of our retrospective study include loss to follow-up, missing data, and the small number of cases. The overall outcome in patients with BH4 deficiency is quite variable. A BH4 metabolism disorder should be considered in the presence of developmental delay, tonus abnormalities, intellectual disability, and movement disorders, as is also recommended in the guideline. Our experience with early-treated cases suggests that the clinical response can be satisfactory in BH4 deficiency. It is essential to evaluate both newborns presenting with hyperphenylalaninemia and patients with neurological findings of unknown origin for BH4 deficiencies.
4,554.2
2021-02-01T00:00:00.000
[ "Biology", "Medicine" ]
Machine Learning Methods in Drug Discovery The advancements of information technology and related processing techniques have created a fertile base for progress in many scientific fields and industries. In the fields of drug discovery and development, machine learning techniques have been used for the development of novel drug candidates. The methods for designing drug targets and novel drug discovery now routinely combine machine learning and deep learning algorithms to enhance the efficiency, efficacy, and quality of developed outputs. The generation and incorporation of big data, through technologies such as high-throughput screening and high-throughput computational analysis of the databases used for both lead and target discovery, has increased the reliability of techniques that incorporate machine learning and deep learning. Virtual screening and the online information it encompasses have also been highlighted in developing lead synthesis pathways. In this review, machine learning and deep learning algorithms utilized in drug discovery and associated techniques will be discussed. The applications that produce promising results and methods will be reviewed. Introduction Advancements in computational science have accelerated drug discovery and development. Artificial intelligence (AI) is widely used in both industry and academia. Machine learning (ML), an essential component of AI, has been integrated into many fields, such as data generation and analytics. Algorithm-based techniques such as ML rest on a heavy foundation of mathematical and computational theory. ML models have been used in many promising technologies, such as deep learning (DL) assisted self-driving cars, advanced speech recognition, and support vector machine-based smarter search engines [1][2][3][4]. These computer-assisted computational techniques, first explored in the 1950s, have already been used in drug discovery, bioinformatics, cheminformatics, etc. Drug discovery was long based on a traditional approach focused on holistic treatment. In the last century, the world's medical communities started to use an allopathic approach to treatment and recovery. This change led to success in fighting diseases, but high drug costs ensued, becoming a healthcare burden. While quite diverse and specific to candidates, the cost of drug discovery and development has consistently and dramatically increased [5]. As illustrated in Figure 1, the generalized components of early drug discovery include target identification and characterization, lead discovery, and lead optimization. Many computer-based approaches have been used for the discovery and optimization of lead compounds, including molecular docking [6,7], pharmacophore modeling [8], decision forests [9], and comparative molecular field analysis [10]. ML and DL have become attractive approaches to drug discovery. The applications of ML and DL algorithms in drug discovery are not limited to a specific step but span the whole process. In this article, we review the ML and DL algorithms that have been widely used in drug discovery. Figure 1. The general steps in drug discovery. 
Machine learning (ML) and deep learning (DL) algorithms may participate in each of the four steps listed, e.g., by mining proteomic data in target discovery, discovering small molecules as candidates in lead discovery, developing quantitative structure-activity relationship models to optimize lead structures for improved bioactivity, and analyzing massive assay results. ML Algorithms Used in Drug Discovery ML algorithms have significantly advanced drug discovery. Pharmaceutical companies have greatly benefited from the utilization of various ML algorithms in drug discovery. ML algorithms have been used to develop various models for predicting chemical, biological, and physical characteristics of compounds in drug discovery [11][12][13][14][15][16][17][18][19]. ML algorithms can be incorporated in all steps of the drug discovery process. For example, ML algorithms have been used to find new uses of drugs, predict drug-protein interactions, discover drug efficacy, ensure safety biomarkers, and optimize the bioactivity of molecules [20][21][22][23][24]. ML algorithms that have been widely used in drug discovery include Random Forest (RF), Naive Bayesian (NB), and support vector machine (SVM), as well as other methods [25][26][27]. ML algorithms and techniques are not a monolithic, homogeneous subset of AI. There are two main types of ML algorithms: Supervised and unsupervised learning. Supervised learning learns from training samples with known labels to determine the labels of new samples. Unsupervised learning recognizes patterns in a set of samples, usually without labels for the samples. High-dimensional data are usually transformed into a lower dimension before unsupervised learning algorithms are applied to recognize patterns. Dimension reduction is useful not only because unsupervised learning is more efficient in a low-dimensional space but also because the recognized patterns can be more easily interpreted. Supervised and unsupervised learning can be combined in semi-supervised and reinforcement learning, where both functions can be utilized for various data sets [28]. Expansive volumes of data are critical for the development, evolution, and viability of successful ML algorithms in every step of the drug discovery process. The reliance on big, high-quality data and known, well-defined training sets is even more essential in precision medicine and therapies within drug discovery. Precision medicine requires a comprehensive characterization of all related pan-omic data (genomic, transcriptomic, proteomic, etc.) to assist in developing genuinely effective personalized medicines. The widespread use of high-throughput screening and sequencing, online multi-omic databases, and ML algorithms over the past two decades has created a flourishing environment for many aspects of the data generation, collection, and maintenance required for drug development. Advances in data analytics have successfully been applied to describe and interpret the generated data. This endeavor, supported by ML techniques and databases integrated through multiple software/web-tools (Tables 1-3), is now regularly used for all steps in drug discovery. The ability of new data analytics to synergize with classical approaches and prior hypotheses to produce novel hypotheses and models has proven useful in applications of repositioning, target discovery, small molecule discovery, synthesis, etc. [29][30][31]. 
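As a minimal illustration of these two learning modes, the toy sketch below (assuming scikit-learn, with synthetic data standing in for real compound descriptors) reduces a high-dimensional feature matrix with PCA and clusters it without labels, then separately trains a supervised classifier on labeled samples.

# Minimal sketch: supervised vs. unsupervised learning on synthetic "descriptor" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a compound descriptor matrix: 500 "compounds", 200 features,
# labeled active (1) / inactive (0).
X, y = make_classification(n_samples=500, n_features=200, n_informative=20, random_state=0)

# Unsupervised: reduce to a low-dimensional space, then look for structure without labels.
X_low = PCA(n_components=10, random_state=0).fit_transform(X)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_low)

# Supervised: learn from labeled training samples, then label held-out samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

Clustering in the reduced space mirrors the pattern-recognition role described above, while the classifier mirrors the labeled-training role; real workflows substitute curated descriptors and assay labels for the synthetic arrays.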
The information generated within the medical and multi-omic fields is multidimensional. The data are often noisy and heterogeneous in character and source. ML methods, such as generalized linear models and NB, can ease the analysis and interpretation of such multidimensional data. Other ML techniques and models commonly used in these areas of analysis include regression, clustering, regularization, neural networks (NNs), decision trees, dimensionality reduction, ensemble methods, rule-based methods, and instance-based methods [31,32]. Random Forest (RF) RF is a widely used algorithm designed for large datasets with many features: it is robust to outliers and can classify and partition datasets based on the features most relevant to the task at hand. It is commonly trained on large numbers of inputs and variables collected from multiple databases. It is beneficial in several respects, such as imputing missing data, handling outliers, and estimating the characteristics relevant for classification [25]. Mathematically, RF consists of an ensemble of several uncorrelated decision trees; each tree is responsible for one prediction, and the prediction that receives the most votes is considered the best fit (Figure 2a) [36]. Although false positives may happen in any statistical analysis, RF, along with SVM and NB, has been suggested to make the fewest errors compared with other algorithms. With multiple decision trees, individual errors are minimized because the ensemble assembles several predictions rather than relying on a single one. In drug discovery, RFs are mainly used for feature selection, classification, or regression. Cano et al. utilized RF methods to improve ligand-protein affinity prediction in virtual screening by selecting molecular descriptors, based on a training data set for enzymes such as kinases and nuclear hormone receptors. Some of the essential qualities RF brings to drug discovery are that it expedites the training process, uses fewer parameters, imputes missing data, and accommodates nonparametric data [37]. Rahman et al. utilized multivariate RF incorporating genomic sequencing information, which helped control error and predict drug responses based on genomic characterizations. Multivariate RFs specialize in limiting error by calculating several error estimates within the system. The computational framework takes input data combining genetic and epigenetic characterizations, allowing it to predict the mean and confidence interval of drug responses, a quality essential for analyzing any drug to be advanced to clinical trials [38]. Rahman et al. also endeavored to combine the modeling framework with functional RF to improve prediction based on the response profile, seeking to address the difficulty of finding appropriate compounds for individual tumors. RF was used to generate the regression tree's internal and leaf nodes, which acquired the dose-response data points. The leaf nodes of the algorithm are responsible for making predictions about the dose-response profile while simultaneously storing the functional data. The data recorded by the model comprise the genome sequences and their characteristics [39]. 
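To make the ensemble-of-trees idea concrete, the following hypothetical sketch (scikit-learn assumed, synthetic data in place of real descriptors and affinities) trains an RF regressor whose trees each contribute one prediction, and reads off feature importances of the kind used for descriptor selection in studies like Cano et al.'s.

# Sketch: RF regression for affinity prediction plus feature selection via importances.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: rows are ligand-protein pairs, columns are molecular descriptors,
# target is a (synthetic) binding affinity.
X, y = make_regression(n_samples=300, n_features=50, n_informative=8, noise=0.5, random_state=1)

forest = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=1).fit(X, y)

# Each tree votes with one prediction; the forest averages them (for regression).
print("out-of-bag R^2:", forest.oob_score_)

# Importances can drive descriptor selection, analogous to the usage described above.
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("most informative descriptor indices:", top)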
RF algorithms have also been implemented for classification and regression in quantitative structure-activity relationship (QSAR) modeling used in lead discovery processes [40,41]. Figure 2. (a) There are multiple features that the computational queries look for in both target and drug. When there is a compatibility match, it proceeds to the next step to match additional features. A series of datasets is inputted into the query, and each tree is responsible for computing a prediction. The prediction picked by most trees is used for the next step. The system of using many decision trees is intended to minimize errors mathematically. (b) SVM utilizes similarities between the classes, called support vectors, to distinguish between the classes based on the trained features. It formulates hyperplanes that separate two classes (or multiple classes, if needed). SVM incorporates multiple training sets depending on the classifiers and determines compounds' status (active or inactive). During the process, compounds are separated into three sections: non-selective compounds (active), selective compounds (active), and, in the margin, inactive compounds. Although non-selective compounds are active, they are not selective towards the protein of interest. In contrast, selective compounds are active and selective towards the protein of interest. Naive Bayesian (NB) NB algorithms are a subset of supervised learning methods that have become an essential tool in predictive modeling and classification. Standard NB algorithms work to classify features of datasets and, depending on the input characteristics, factor correlation, and dimensionality of the data, can be one of the most efficient techniques for the task [42][43][44]. The effectiveness of NB alongside decision tree algorithms has been explored for text mining; these techniques enhance the accuracy of retrieved data sets, which generally originate in large, muddled sources [45,46]. Classification of biomedical data is crucial in the drug discovery process, especially in the target discovery subset. NB algorithms have shown great promise as classification tools for biomedical data, which is often filled with non-related information, known as noise [47]. NB techniques could also serve important roles in predicting ligand-target interactions, which could be a massive step forward in lead discovery [48]. Recently, researchers have been able to incorporate NB techniques into diverse applications within the drug discovery process. In one study, Pang et al. used NB models and additional techniques as classifiers for active and inactive compounds with possible activity as antagonists of estrogen receptors in breast cancer [49]. 
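Before returning to the details of that study, here is a toy sketch of the NB-based active/inactive classification described above (scikit-learn assumed; the fingerprint bits and activity rule are random stand-ins, not real compound data), including the probabilistic outputs that allow compounds to be ranked.

# Sketch: Naive Bayesian classification of active vs. inactive compounds
# from binary structural fingerprints (synthetic stand-ins for ECFP-like bits).
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_compounds, n_bits = 400, 1024
X = rng.integers(0, 2, size=(n_compounds, n_bits))          # fingerprint bit vectors
y = (X[:, :16].sum(axis=1) > 8).astype(int)                 # toy "activity" rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
nb = BernoulliNB().fit(X_train, y_train)

# Probabilistic outputs allow ranking compounds by predicted likelihood of activity.
p_active = nb.predict_proba(X_test)[:, 1]
print("test accuracy:", nb.score(X_test, y_test))
print("top-5 ranked compound indices:", np.argsort(p_active)[::-1][:5])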
The researchers utilized the ability of NB algorithms to process vast quantities of information while exhibiting a unique tolerance to random noise. The technique, in combination with other tools such as extended-connectivity fingerprint-6, produced excellent outputs. In a recent study, Wei et al. utilized a combination of NB and support vector machine algorithms to predict compounds that could be active against targets of human immunodeficiency virus type-1 and the hepatitis C virus, generated from multiple QSAR models [50]. Their model utilized NB as a classifier paired with two different descriptor systems, one also being extended-connectivity fingerprint-6. The utilization of NB, combined with other systems and techniques, has proven useful across drug discovery processes. Support Vector Machine (SVM) SVMs are supervised machine learning algorithms used in drug discovery to separate classes of compounds, based on selected features, by deriving a hyperplane. They utilize the similarities between classes to formulate an infinite number of candidate hyperplanes. For linear data, an SVM trains by separating classes of compounds based on selected features and projecting them into a chemical feature space. An optimal hyperplane is attained by maximizing the margin between classes in N-dimensional space (N is the number of features); this hyperplane sets the decision boundary used to classify data points [51]. SVM is crucial to drug discovery because of its capability to distinguish between active and inactive compounds, rank compounds from a database (shown in Figure 2b), or train regression models. Regression models are vital in determining the relationship between drug and ligand, as they employ queries on datasets to make predictions [52][53][54][55]. When several active compounds are screened against a single protein of interest, SVM can be applied in various scenarios. SVM classification includes binary class prediction that can differentiate active from inactive molecules. For drug discovery, it can rank compounds from different databases based on their probability of being active in a computational screen. SVM can be extrapolated in different ways to attain results, with the main focus of distinguishing between active and inactive compounds. The process can be tuned by training the algorithm with various descriptors as feature selectors, such as 2D fingerprints and the target protein. A class label, negative or positive, is assigned depending on which side of the hyperplane the compound falls, thereby ranking compounds from most to least selective [55,56]. For non-linear data, kernel functions are utilized to optimize the results. Kernel functions plot the data in a higher-dimensional space where separation between classes becomes feasible. For drug-target interaction, SVM modeling is specifically designed to integrate information on ligands and proteins of interest as essential components [51]. Wang et al. investigated drug-target interactions and integrated information obtained from published research from various sources to enhance the prediction. They used a kernel function to incorporate information on drug pharmacological and therapeutic effects, drug chemical structures, and protein genomic information to characterize the drug-target interactions. 
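As a generic illustration of the hyperplane-and-kernel machinery described in this section (not Wang et al.'s method), the minimal sketch below assumes scikit-learn and synthetic descriptors: an RBF-kernel SVM is trained to separate active from inactive compounds, and the signed distance from the hyperplane is used to rank test compounds.

# Sketch: SVM classification of active/inactive compounds and ranking by
# signed distance from the separating hyperplane.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=100, n_informative=15, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# The RBF kernel handles non-linearly separable data by implicitly mapping it
# to a higher-dimensional space, as described above.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# decision_function gives each compound's position relative to the hyperplane;
# larger positive values suggest more confidently "active" compounds.
scores = svm.decision_function(X_test)
ranking = np.argsort(scores)[::-1]
print("accuracy:", svm.score(X_test, y_test))
print("top-5 compounds by hyperplane distance:", ranking[:5])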
Generally, results from the different sources were all promising, and the kernel function for the prediction of pharmacological and therapeutic effects displayed the most potential [57]. SVMs are also frequently used in predicting drugs that could have multiple bioactivities. For example, Kawaii et al. used SVM classifiers to construct a query in which drugs were set against hundreds of targets to establish the different biological pathways targeted by their bioactivities [58]. In another study, a similar process was used to determine the bioactivities of antihypertensive drugs. The information about drug activity was obtained from the MDDR (MDL Drug Data Report) database, and a multi-label SVM was employed to produce the query that shows the bioanalysis of drugs [59,60]. Drugs were discovered to be dual inhibitors against both angiotensin-converting enzyme I and neutral endopeptidases. Limitations ML algorithms have become an essential component of drug discovery. These methods increase efficiency and explore thousands of combinations that would have been impossible to test without this technology. As stated earlier, algorithms are trained with inputted data, but there are a few constraints to this technique. Although ML has been around for quite some time now, the biological pathways/targets being discovered are still novel. Information for a particular protein of interest might be limited, leaving little data to extrapolate from. The free energy perturbation method is a platform whereby biological information regarding the protein is generated through computational screening [61]. Data gathered from this method are utilized for training algorithms; however, not all of the information is collected in a wet lab; rather, computer-generated predictions are used, so the accuracy of the training data might be lower than anticipated. Even though the algorithms discussed in this review have a higher threshold for minimizing errors, some categorical errors from training sets remain [61]. A more concise way to understand this is from the statistical angle. With algorithmic prediction, there is always a concern about overfitting or underfitting. Overfitting occurs when the model picks up spurious or unusual features during training, yielding deceptively strong performance on the training data that does not generalize, and negatively impacting the model [17]. In contrast, underfitting models fail to capture the data sets' underlying trend and to generalize to new inputted data [62]. Both underfitting and overfitting produce inaccurate results. There are several ways to tackle overfitting and underfitting, such as increasing the sample size and cross-validation. Cross-validation is an often-used technique for estimating the accuracy of ML models by evaluating them on independent data sets. Another concern raised by cheminformaticians is the ample chemical space constructed through algorithms [52,63]. Chemical space is a relative set of descriptors, consisting of thousands of compounds within a frame whose boundaries are generated by ML algorithms [64,65]. The challenge with chemical space is the high-density clustering of compounds, which often leads to the avoidance of compounds with essential properties. Studies addressing these issues, including models that augment chemical space coverage to highlight molecules with properties different from others, are discussed later [19,66]. 
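The overfitting diagnosis sketched above can be made concrete as follows (scikit-learn assumed, synthetic data): an unconstrained decision tree achieves perfect training accuracy but a noticeably lower cross-validated score, while a constrained model shows a smaller gap.

# Sketch: using cross-validation to expose overfitting. A model that is perfect
# on its training set but much weaker across held-out folds is overfit.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=500, n_informative=10, random_state=3)

deep_tree = DecisionTreeClassifier(max_depth=None, random_state=3)   # prone to overfit
shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=3)   # more constrained

for name, model in [("deep", deep_tree), ("shallow", shallow_tree)]:
    train_acc = model.fit(X, y).score(X, y)
    cv_acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: train={train_acc:.2f}  5-fold CV={cv_acc:.2f}")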
Deep Learning (DL) Methods DL algorithms are considered one of the cutting-edge areas of development and study in almost all scientific and technological fields. The renaissance of artificial NNs into workable algorithms, from their theorized beginnings in the 1950s, is an essential pillar of DL and of the continued success brought by AI-based integration of standard techniques. DL algorithms give computational models the ability to learn representations of multidimensional data through abstraction [67]. DL has resolved many challenges faced by standard ML algorithms, including image recognition and speech recognition. In the drug discovery process, DL techniques have become exemplary methods for drug activity prediction, target discovery, and lead molecule discovery [68][69][70]. The basis of DL is often implemented in NN systems, which are used to create systems capable of complex data recognition, interpretation, and generation. The main subsets of artificial NNs used in current drug discovery are deep neural networks (DNNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs). The choice of a specific NN from the variations that exist within the subset depends on multiple factors. DNNs, a specific type of feedforward neural network, function with a single data-flow path from the input layer through the hidden layer(s) to an output layer (Figure 3a). The outputs generated are typically identified using trained supervised learning algorithms. DL algorithms function through neural networks, which can incorporate other ML techniques for training. Through supervised and reinforcement learning guided methods, a DNN can be trained to complete complex tasks. A generative DNN can create novel chemical compounds from existing libraries and training sets (Figure 3a), while a predictive DNN can predict the chemical attributes of the novel compounds [71,72]. QSAR models are currently being used to find the correlation between these compounds' chemical structures and their activity. QSAR analysis is one of the most advanced forms of DL-based AI in current drug discovery and development. It has allowed researchers to take 2D chemical structures and determine physicochemical descriptors related to a molecule's activity. 3D-QSAR has allowed further inquiry into how geometric structure impacts ligand-target interactions [33,73,74]. QSAR has been actively used in the pharmaceutical industry to predict the on/off-target activities of developed lead compounds on specific targets. These algorithmic approaches to discovery and development are by no means foolproof or fully capable. There are always some sources of error and imprecision across the multiplicity of studies conducted using these AI algorithms. NNs have been found to face a few deficiencies compared with other ML algorithms in their application to QSAR studies. The first is the presence of excess descriptors, which cause redundancy in the NN and eventual clogging of outputs. This redundancy can significantly reduce the efficiency of the NN while also creating non-ideal outputs. Unknown descriptors also pose an issue because they may affect the generated output. These issues have been alleviated using more specific feature selection algorithms to obtain a smaller number of higher-quality descriptors; however, this will continue to be a challenge for NN-based QSAR. 
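As an illustrative sketch only (PyTorch assumed; the data and architecture are toy choices, not any specific published model), the following builds a small predictive feedforward DNN of the input, hidden layers, output form described above and trains it on synthetic descriptor vectors.

# Sketch: a small predictive feedforward DNN for a QSAR-style activity
# regression from descriptor vectors.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 128)                      # 256 compounds, 128 descriptors
y = X[:, :4].sum(dim=1, keepdim=True)          # toy "activity" target

model = nn.Sequential(                         # single forward path: in -> HL1 -> HL2 -> out
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                       # supervised training on labeled examples
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())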
The second issue with these NN-based assays is implementing ideal network parameters without overfitting [74]. Remedies have been proposed and implemented, but without the necessary adjustments this persists as a recurring issue [75]. Figure 3. (a) A DNN consists of an input layer followed by several hidden layers and an output layer. In this case, the input layer utilizes feature vectors generated by a convolutional network. The progression of the NN follows a single path through hidden layer 1 (HL1) to HLn, indicating the feedforward nature of the NN. The generated outputs are often processed using supervised learning techniques for the identification and collection of sensible interactions. (b) An RNN begins with a seed, S, which is inputted into the system. Through algorithmic processing, the seed is turned into a reference vector, V1, which is used by the HL to generate a vector output, V2. V2 is subsequently optimized through input training sets and creates the output, O. The generation of these outputs eventually leads to the creation of a gatherable data set. In the meantime, the HLs feed forward to provide information from previous steps. One example is chemical structure generation using SMILE string characters as seeds; the desired gathered outputs would then be a string of SMILE characters constituting the desired structure. The dataset created in the figure is gathered and analyzed into the resultant molecules. Once the initial work of target discovery is complete and a better understanding is developed of the target-molecule interaction, chemical synthesis and characterization become a priority in the pipeline. An important note in this process is the use of the descriptive simplified molecular-input line-entry system (SMILES) nomenclature in much of the algorithmic work on de novo drug design and discovery. RNNs are a type of NN that utilize a system of self-learning through generational processing of the inputs and the development of hidden layers. The long short-term memory subset of RNNs has become a reliable, standardized method for generating novel chemical structures. RNNs are unique in their ability to use neurons connected within the same hidden layer to form a functioning cycle of processing inputs and outputs, in contrast to DNNs and other feedforward neural networks (Figure 3b), which have no connections within the same layer and only push outputs forward. 
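A hypothetical, untrained sketch of such a generative character-level RNN is given below (PyTorch assumed; the character vocabulary and sampling loop are illustrative stand-ins, not a real SMILES model): a seed character is fed in, and each sampled output character is fed back as the next input.

# Sketch: a character-level LSTM of the kind used for generative SMILES models.
# The vocabulary and sampling here are toy placeholders, not real chemistry data.
import torch
import torch.nn as nn

vocab = list("CcNnOo()=#123[]+-@Hl")           # toy SMILES-like character set
stoi = {ch: i for i, ch in enumerate(vocab)}

class CharRNN(nn.Module):
    def __init__(self, n_chars, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_chars)

    def forward(self, idx, state=None):
        x, state = self.lstm(self.embed(idx), state)
        return self.head(x), state

model = CharRNN(len(vocab))

# Sampling: feed a seed character, then repeatedly sample the next character
# from the network's output distribution, feeding each output back in.
idx = torch.tensor([[stoi["C"]]])
state, out_chars = None, ["C"]
with torch.no_grad():
    for _ in range(20):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=0)
        idx = torch.multinomial(probs, 1).view(1, 1)
        out_chars.append(vocab[idx.item()])
print("sampled string (untrained, so random):", "".join(out_chars))

In a trained model the sampled strings would be biased toward valid, drug-like SMILES from the training corpus, which is the behavior the studies cited below exploit.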
These generative RNNs have shown promising results in generating sensible, structurally correct, and feasible novel SMILE structures that were not included in the original SMILE training sets [76][77][78][79]. A recent study by Segler et al. used generative RNN models to develop possible molecular structures that could have activity against Staphylococcus aureus (S. aureus) and Plasmodium falciparum (P. falciparum). Their models were given small sets of molecular structures with known activity against these target organisms; from these inputs, the model generated 14% of the 6051 potential molecule candidates for S. aureus that had been developed by medicinal chemists. The model also generated 28% of the existing compounds developed for P. falciparum [80]. Traditionally, the generation and implementation of chemical synthesis routes have been the sole responsibility of chemists. However, with the emergence of AI, this role is evolving to include more and more computation-based synthesis, also known as computer-aided synthesis planning (CASP) [81][82][83]. Monte Carlo tree search (MCTS) driven by NN techniques has been used in current studies to generate CASP workflows. The MCTS technique is ideal for this purpose because the simulation can perform random continuous step searches without branching until optimal conditions and solutions are met [82,83]. In a groundbreaking study by Segler and Waller [84], an MCTS method was built using three NNs alongside 12.4 million transformation rules, retrieved through AI-based data mining from all the chemical synthesis literature available at the time, to generate a sensible workflow for CASP. The first NN, an expansion node, searches retrosynthetically for new transformations to create the molecule; it also predicts the feasibility of applying each transformation from the 12.4 million transformation rules. This allows the expansion node to select the best (i.e., most feasible and highest-yielding) transformations from the literature. The second NN, a rollout node, filters the inputs to include only the most frequently reported transformation rules, to enable the best possibilities of successful transformations. The update node then incorporates the new pathway into the search tree. This algorithm was able to solve 80% of retrosynthesis problems in just 5 s, and >90% of problems in 60 s [82][83][84]. Various studies have been conducted to optimize AI-based chemical synthesis and reaction routes [85][86][87]. Through the further implementation of AI-based chemical synthesis and characterization, it will be possible to move drug discovery further from the bench to in silico work and to increase the time- and cost-efficiency of discovery and development. CNNs are a subset of DNNs that take inputs, assign weights to specific parts of the input, and then build the ability to differentiate the data. While traditional DNNs are limited in their ability to function correctly on higher-dimensional datasets, CNNs offer a compelling solution to this issue through their ability to preserve input dimensionality. The training required for a CNN model is significantly less than what DNNs and RNNs would need to function with reasonable accuracy and efficacy. These advantages have allowed CNNs to become a prominent learning algorithm for image recognition, surpassing other standard ML algorithms. 
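To illustrate how convolutions preserve input structure, here is a toy sketch (PyTorch assumed; the vocabulary size, sequence length, and architecture are arbitrary choices, not a published model) of a small 1D CNN operating on one-hot encoded SMILES-like sequences to emit a single property score.

# Sketch: a 1D convolutional network over one-hot character sequences; the
# convolutions slide along the sequence, preserving its local structure.
import torch
import torch.nn as nn

n_chars, seq_len = 24, 60                       # toy vocabulary size and sequence length
x = torch.zeros(8, n_chars, seq_len)            # batch of 8 one-hot sequences
x[torch.arange(8).repeat_interleave(seq_len),
  torch.randint(n_chars, (8 * seq_len,)),
  torch.arange(seq_len).repeat(8)] = 1.0

cnn = nn.Sequential(
    nn.Conv1d(n_chars, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),                    # pool over sequence positions
    nn.Flatten(),
    nn.Linear(32, 1),                           # predicted property / activity score
)
print("output shape:", cnn(x).shape)            # -> torch.Size([8, 1])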
In the process of drug discovery, CNNs have become efficient tools for target discovery, lead discovery and characterization, in silico target-lead interaction screening, and protein-ligand scoring [68,[88][89][90]. Combinations of these DL techniques, such as CNNs, have also been very successful in identifying gene mutations and disease targets [91,92]. The incorporation of CNNs into drug development is not merely limited to target discovery; they have also been widely used in later-stage development. One such use of CNNs is to assist in the generation of motility models of cancer cells responding to treatment [93]. In a recent study, Feng, Zhang, and Shi demonstrated the use of deep learning based drug-drug interaction (DDI) predictors [94], aiming to address a wet-lab stage of drug discovery that is often costly and time-consuming. The researchers developed a new method utilizing graph convolutional networks and DNN models. In their design, the graph convolutional network served as a structure feature extractor for drugs found in DDI networks, learning low-dimensional representations (vectors) of the features from the DDI networks. The information is then passed to the DNN model, which served as the actual predictor; the model's ability to take the feature vectors and link them with the corresponding feature vectors of possible drug combinations allowed it to produce the interaction prediction. Encouragingly, the predictions using their method outclassed popularly used state-of-the-art methods [94]. Examples of Drug Discovery (Paper Summaries and Relevance to Topic) ML is already being used to develop novel molecules that could serve as future antibiotic candidates. In a recent, groundbreaking study, Stokes et al. demonstrated the utility and capability of ML techniques in the drug discovery process [95]. They specifically capitalized on the use of DNNs to identify novel molecules with broad-spectrum antibacterial activity. The discovered candidates were also structurally distinct from any known antibiotics. The researchers utilized a training set of 2335 molecules for a DNN model to predict growth inhibition of Escherichia coli, followed by running the model on more than 107 million molecules from several chemical libraries. This gave the researchers the ability to identify potential lead compound candidates that might have similar bioactivity. Through scoring generated by the model, the researchers identified a list of sensible candidates meeting a predetermined score threshold and various other eliminative criteria. The researchers' efforts proved fruitful: they identified a c-Jun N-terminal kinase inhibitor, halicin, that is distinct from known antibiotics. This antibacterial candidate was also discovered to be a potent growth inhibitor of Escherichia coli and showed efficacy against Clostridioides difficile and Acinetobacter baumannii infections in murine models [95]. In a study conducted by Fields et al., ML algorithms, including NN-based techniques and SVM models, were used to discover novel antimicrobial peptides, also known as bacteriocins, from bacteria; these could ultimately serve as compelling antibiotic candidates [96]. Discoveries such as that of the bacteriocins are the outcome of machine-learning algorithms' ability to build and function as complex processing systems. 
In the study, a positive and negative training set of 346 bacteriocins was used to train the algorithm. These input bacteriocins were represented as complex vector sums. The machine-learning algorithm then took the inputs and generated new vector-structure outputs that preserved the key features of the original inputs. These outputs were translated into 676 bacteriocins that were not identical to the input bacteriocins. From the output bacteriocins, 28,895 peptides were generated using a sliding-window algorithm; these peptides spanned 20-mers and were screened against biophysical parameters. Fields et al. then selected 16 of the highest-affinity peptides from the biophysical filtration for in vitro testing. Their findings indicated that the peptides had significant antimicrobial activity against Escherichia coli and Pseudomonas aeruginosa [96]. The utility of ML-based mining has proved extremely advantageous with the advent of high-throughput data generation and collection. These algorithms have been used extensively alongside the vast data generated by high-throughput sequencing to enhance the target discovery process [15,97]. The innovation of algorithm-assisted data collection and manipulation has already been implemented in emerging research; recently, it has been used to find novel molecular therapeutic targets for aggressive melanoma. Researchers used unsupervised learning techniques through GeneCluster to identify groups of cell lines: one a primary melanoma group and the other an aggressive melanoma group. Through further analysis using supervised learning techniques, the researchers were able to identify invasion-specific genes related to aggressive melanoma [98]. One of the many challenges in cancer treatment is producing response profiles tailored to individual patients. Sakellaropoulos et al. built a network-based framework, training a DNN on a database containing 1001 cancer cell lines from the Genomics of Drug Sensitivity in Cancer to predict drug responses based on gene expression. The results were evaluated in several clinical cohorts. DNNs were observed to outperform several other in silico screening approaches owing to their capability to embrace biological interactions and create models that capture biological complexity and accurately predict clinical response from cancer cell line baselines. Their framework incorporated RF and elastic net (Enet) algorithms to benchmark the DNN model's results. This framework was initially tested on only five patients, so little coverage was obtained through the model; they therefore expanded their study to a larger sample size. They utilized response data for two drugs, cisplatin and paclitaxel, and analyzed them with gene expression profiles and patients' responses to those two drugs gathered from different clinical trials. The study was done on a small scale, implementing DL network training sets and ML algorithms with a limited amount of knowledge. It is believed that ML could be a powerful tool to assist within the medicinal field as more data and information are retrieved on patient response profiles [99]. The diseases discussed above have been known for a long time, but the emergent need for a treatment for Coronavirus disease 2019 (COVID-19) has stirred up the research world. 
The pandemic outbreak has caused detrimental effects around the world, but the COVID-19 virus (SARS-CoV-2) is a novel strain of the same species of virus that caused the 2003 severe acute respiratory syndrome outbreak (SARS-CoV-1); thus, several studies are incorporating earlier information into supervised ML to quickly find a remedy for this virus [100]. Researchers worldwide are exhausting all available resources, and ML has helped narrow down drug candidates and minimize clinical trial failure. Kowalewski and Ray developed ML models to help identify effective drugs against 65 human proteins (targets) known to interact with SARS-CoV-2 proteins. As the virus is known to target the respiratory tract, including the nasal epithelial cells, upper airway, and lungs, they reasoned that inhaled therapeutics could directly target the damaged cells. They assembled 14 million chemicals from ZINC databases and utilized ML models to predict vapor pressure and mammalian toxicity, ranking the chemicals and finding drugs that share the same chemical space. Their main goal was to establish short-term and long-term pipelines for future purposes. They utilized SVM and RF to create models that could predict drugs and their efficacy. Although most researchers focus on a single protein responsible for replication and host entry, this might only allow a short-term fix. For the long term, Kowalewski and Ray proposed looking into multiple drugs that could potentially target various proteins across diverse biological pathways [101]. Conclusions ML-based techniques seek to revitalize the development of drugs. These methods are applied separately in target discovery, lead compound discovery, synthesis, protein-ligand interactions, etc. ML applications are paving the way for algorithm-enhanced data query, analysis, and generation. One such example is ML incorporated into target discovery, which relies heavily on the refinement and search of existing omics and medical data. Through AI integration using ML techniques, viable targets can be found by data clustering, regression, and classification over vast omics databases and sources. Lead compound discovery, e.g., using QSAR, is now frequently used to develop sensible molecular candidates based on training inputs. Lead compound synthesis has also been expedited by NN-based retrosynthesis algorithms alongside best-chance search trees, which, with the input of vast amounts of accumulated data and rules, can generate synthesis pathways with greater than 90% accuracy in 60 s. Applications of ML in drug development have been in use for some time now. These applications have proven to be steps above previous methods; the development of ML and DL techniques is not all brand new. They have been carefully crafted and developed through decades of research. This curation of the function and utility of ML algorithms and techniques has enabled their continued success and development in drug discovery. Owing to more precise algorithms, more powerful supercomputers, and substantial private and public investment in the field, these applications are becoming more intelligent, cost-effective, and time-efficient while boosting efficacy. Conflicts of Interest: The authors declare no conflict of interest; the funders had no role in the study design, nor in collection, analysis, or interpretation of the data. The funders had no role in the writing of the manuscript or in the decision to publish the results.
9,240
2020-11-01T00:00:00.000
[ "Computer Science", "Biology" ]
The MKID Science Data Pipeline We present The MKID Pipeline, a general-use science data pipeline for the reduction and analysis of ultraviolet, optical, and infrared (UVOIR) Microwave Kinetic Inductance Detector (MKID) data sets. This paper provides an introduction to the nature of MKID data sets, an overview of the calibration steps included in the pipeline, and an introduction to the implementation of the software. INTRODUCTION Microwave Kinetic Inductance Detectors (MKIDs) are photon detectors comprised of an array of RF-multiplexed inductor-capacitor resonators capable of measuring both the arrival time and wavelength (R ∼ 4-40) of individual photons without read noise or dark current (Day et al. 2003; Szypryt et al. 2017). To date, MKID-based instruments have placed new constraints on pulsar timing (Strader et al. 2016; Collura et al. 2017) and the orbital decay of compact binaries (Szypryt et al. 2014). Additionally, their ability to discriminate photons at the 10⁻⁵ s level is crucial to improving contrast limits for exoplanet direct imaging: it enables real-time starlight suppression via an extreme adaptive optics feedback loop (Fruitwala 2021) and separation of faint companions from diffracted starlight via the statistical analysis of photon arrival times with stochastic speckle discrimination (Meeker et al. 2018; Walter et al. 2019; Steiger et al. 2021). Each pixel in an MKID detector is a lithographed lumped-element superconducting inductor-capacitor resonator circuit. When a photon is absorbed by the inductor, its energy breaks Cooper pairs, causing a change of inductance that can be measured as a change in the resonator's phase. Room-temperature readout electronics monitor each resonator's phase at 1 MHz (Fruitwala et al. 2020), yielding microsecond timing precision. Since the amplitude of this phase change is proportional to the number of Cooper pairs broken, the energy (wavelength) of the incident photon can also be determined to within 5-10%. See Mazin et al. (2012) and Szypryt et al. (2017) for more information. MKIDs produce a raw data stream that differs from that of typical semiconductor-based astronomical detectors and requires significant post-processing before it is effectively accessible to the broader astronomical community. To this end we have created the MKID Pipeline, a Python package providing an open-source, extensible data reduction pipeline for MKID data. This pipeline takes raw MKID data as input and processes it into either a traditional form (i.e., FITS cubes) to be used with existing astronomical analysis packages, or a unique MKID data product suitable for advanced analysis tailored to the detector's unique abilities. The MKID Pipeline is based on the development and use of the only three optical/near-infrared astronomical MKID instruments to date: ARCONS (Mazin et al. 2013), DARKNESS (Meeker et al. 2018), and the MKID Exoplanet Camera (MEC, Walter et al. 2020). MEC is a new user instrument located behind the Subaru Coronagraphic Extreme Adaptive Optics system (SCExAO) at the Subaru Telescope on Maunakea (Jovanovic et al. 2015). Though largely targeted at MEC and our current laboratory development, the pipeline is intended to be easily extensible to future instruments. 
In this paper, we first briefly describe the contents of a typical MKID observing data set (§2). We then discuss how data is processed in §3, beginning with a description of the contents of raw MKID data before diving into the specific calibration algorithms in depth. Finally, in §4 we end with a discussion of how these steps are implemented in software and how a user would perform a basic data reduction. MKID OBSERVING DATA SETS MKID detectors take data by recording the time, location, and phase response of each detected photon. For this reason, all time binning is performed in post-processing. An MKID observation or 'exposure' therefore refers not to specific exposures determined during a night of observing, but to time ranges where the object of interest is on the detector at an intended position. The resulting total observational data set consists of some number of science observations, associated observatory and instrumental metadata, and the necessary calibration data. Science observations consist of a single time range, target, sky position, and associated calibration data sets. Due to the current high level of detector defects (e.g. cold/dead pixels), it is common to take dithered data suitable for the reconstruction of a sky mosaic (Hook & Fruchter 2000). In MEC, a tip/tilt mirror is used for this purpose. A dither then consists of a series of science observations and corresponding tip/tilt mirror positions that are combined in post-processing to generate a single output image (see §3.3.1). Calibration data consist of a series of uniform monochromatic laser exposures spaced relatively evenly across the wavelength coverage of the detector. These exposures are used for wavelength calibration (§3.2.3) and can also be used for flat-fielding (§3.2.6), though sky flats may be taken and used instead. Dark observations (intervals obtained with a closed shutter or on blank sky) may be included to remove instrumental or astrophysical backgrounds. Finally, support for observations of an astrometric reference is provided to calibrate the final output products to reflect real on-sky coordinates (§3.2.7), though this data is not routinely required. Both science and calibration data have associated observatory and instrumental metadata (e.g. observatory and telescope status information, detector temperatures). The instrument control software records all of this data periodically in a machine-readable format for later use by the pipeline. A subset of this data must be provided either via these logs or specified by the user when defining data to ensure proper reduction. DATA PROCESSING Raw MKID data consists of per-resonant-frequency (an analog to pixel) time series of photon-induced phase shifts. These are associated with individual pixels, converted to tabulated photon event data for each observation, and calibrated via the pipeline diagrammed in Figure 3. 
In brief, the telescope and instrument logs (along with user overrides) are first used to create an associated metadata time series for each observation, to properly carry out later steps and determine the eventual FITS and output header keys. Cosmic rays are then identified within the photon list. A linearity calibration may be performed, which calculates a weight for each photon to statistically correct for missing photons caused by a detector-imposed dead time inherent to the MKID readout. This dead time prevents the recording of a photon that arrives too close to the tail of the preceding photon and causes nonlinear response at high count rates (≳5000 photons pixel⁻¹, exceeding current instrument limitations). A series of monochromatic exposures is next used to determine the relationship between phase shift and wavelength for each pixel. Pixels that exhibit too strong (hot), too weak (cold), or no (dead) response to incoming photons are then masked and ignored in further analysis. Inter-pixel variations are next corrected by using a uniform polychromatic exposure or a set of monochromatic exposures to determine a spectrally dependent flat weight for each pixel. Finally, an astrometric reference can be used to determine the pixel to world coordinate system (WCS) mapping for the instrument, yielding physical output units for both the spatial and spectral dimensions of the output. The resulting calibrated data is then used to create output products such as spectro-temporal FITS cubes, calibrated tables of photons, and movies. This section describes the algorithmic details of each calibration step outlined above. For details on the implementation of the pipeline itself, see §4. Data Format MKID detectors are read out by frequency multiplexing sets of pixels that share a microwave feedline. Photon arrival locations are therefore discriminated by frequency rather than detector position. This means the resulting raw MKID data is a per-resonant-frequency time series of photon-induced phase shifts. Due to the potential for data rates up to 40 MB s⁻¹ kpix⁻¹, the data is recorded in a packed binary format (Fruitwala et al. 2020). This, coupled with the environmental sensitivity of MKIDs, necessitates the occasional determination of an optical beam position to pixel frequency mapping. At the start of pipeline processing this mapping, or "beammap" (which also contains information about malformed, inoperative pixels), is used to ingest the packed binary data and produce a tabulated photon list for further processing. Metadata Attachment During observing, the instrument captures a record of telescope and instrument status information in addition to the photon data from the detector. After photon table construction, this data is parsed for records within the observing interval as well as the record immediately prior, forming a metadata time series for each. These series, supplemented with any user-specified values, are attached to the photon table. Cosmic Ray Rejection Cosmic rays incident on an MKID detector excite phonons in the detector substrate, causing the majority of pixels to register photon events near-simultaneously for a brief duration. The cosmic ray rejection step identifies intervals where these false photons are recorded, for use in later analysis. This is done by splitting observations into ∼10 µs time bins and using one of two techniques to compute a count rate above which a cosmic ray event is flagged. 
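A simplified sketch of this thresholding idea, corresponding loosely to the first (Poisson) technique described below, is given here; it assumes NumPy/SciPy and toy photon arrival times, and is not the pipeline's actual implementation.

# Sketch of the Poisson-threshold idea for cosmic-ray flagging (a simplified,
# hypothetical stand-in for the pipeline's actual implementation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy photon arrival times (seconds) over a 10 s exposure, plus an injected
# burst near t = 4 s mimicking a cosmic-ray event.
times = np.sort(np.concatenate([rng.uniform(0, 10, 50_000),
                                rng.normal(4.0, 5e-6, 400)]))

bin_width = 10e-6                                  # ~10 us bins, as described above
counts, edges = np.histogram(times, bins=np.arange(0, 10, bin_width))

# Counts per bin should be Poisson; flag bins whose counts are wildly improbable.
mu = counts.mean()
threshold = stats.poisson.ppf(1 - 1e-9, mu)        # percent point function (inverse CDF)
flagged = np.nonzero(counts > threshold)[0]
print(f"mean counts/bin = {mu:.3f}, threshold = {threshold:.0f}, "
      f"flagged bin start times = {edges[flagged] if flagged.size else 'none'}")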
[Figure 2: A photon time stream in which red arrows denote the locations of identified cosmic rays. Excluding all photon data obtained in a 10 µs window around each event would eliminate a total of 0.00195 s, < 0.01% of the exposure. Any missed cosmic rays would contribute no more than a single photon per pixel for an astrophysical source.] The first approach assumes that count rates should follow Poisson statistics and employs scipy.stats to generate a count rate threshold (Virtanen et al. 2020). First, a cumulative density function (CDF) is determined, defined by the number of standard deviations away from the mean that a given count rate needs to be for that time bin to be classified as containing a cosmic ray. A percent point function is then evaluated on that CDF at the average count rate to generate the threshold value. The second method calculates the standard deviation of the count rates using the total binned time stream, excluding data that falls outside of three standard deviations from the mean. The threshold is then defined as a user-specified number of those standard deviations above the mean value. In both cases, bins that exceed the computed threshold are flagged as cosmic ray events, and their time intervals, total and average counts, and peak count rates are recorded in the photon table's header. Due to the microsecond timing resolution of MKIDs, the total time lost to cosmic rays in a typical data set is less than 0.01% of the total observation time. In contrast to a CCD detector, missed events would only add a single count to each pixel. For this reason, cosmic ray rejection is presently implemented in a way that does not alter the original photon time stream, and removal is not merited unless a particular analysis is sensitive to false counts at the level of tens of photons. Figure 2 shows an example MKID photon time stream with cosmic rays identified. Wavelength Calibration The wavelength calibration calculates the relationship between the phase response of each pixel and the wavelength of each incident photon via phase pulse-height histograms generated from a series of monochromatic laser exposures. These exposures are typically generated by using a series of lasers spanning the wavelength sensitivity range of the particular instrument, coupled with an integrating sphere to ensure uniform illumination of the array. The phase histograms are fit using one or more of a series of models. Currently supported models are a Gaussian signal plus a Gaussian background, and a Gaussian signal plus an exponential background. If more than one model is specified, then all are attempted and the best-fitting one used. When provided, a dark observation is used to subtract a background count rate from the phase histograms to yield a better fit. Once the phase histograms are fit, the centers of each histogram are determined and fit as a function of laser wavelength with a linear or quadratic function to determine a final phase-wavelength calibration for each pixel (Fig. 3; the small Gaussian bump at low phases in that figure is likely due to an IR leak around 2.7 µm in the filter stack of the instrument, MEC, used to take the data).
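A toy sketch of this per-pixel fitting procedure follows (NumPy/SciPy assumed; the laser wavelengths, noise model, and the Gaussian-only histogram model are illustrative simplifications of the pipeline's models).

# Sketch of the per-pixel wavelength-calibration fit: Gaussian fits to
# monochromatic phase histograms, then a linear fit of the Gaussian centers
# against laser wavelength. Synthetic data; not the pipeline's actual code.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(phi, a, mu, sigma):
    return a * np.exp(-0.5 * ((phi - mu) / sigma) ** 2)

rng = np.random.default_rng(1)
laser_wavelengths = np.array([808.0, 920.0, 980.0, 1120.0, 1310.0])  # nm (toy values)
true_slope = -0.07                       # degrees of phase shift per nm (toy value)

centers = []
for wl in laser_wavelengths:
    phases = rng.normal(true_slope * wl, 2.0, 20_000)     # one laser exposure
    hist, edges = np.histogram(phases, bins=100)
    mids = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(gaussian, mids, hist,
                        p0=[hist.max(), mids[np.argmax(hist)], 2.0])
    centers.append(popt[1])                               # fitted Gaussian center

# Linear phase-to-wavelength solution for this pixel (a quadratic is analogous).
coeffs = np.polyfit(centers, laser_wavelengths, deg=1)
print("wavelength(phase) =", np.poly1d(coeffs))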
Wavelength Calibration

The wavelength calibration calculates the relationship between the phase response of each pixel and the wavelength of each incident photon via phase pulse-height histograms generated from a series of monochromatic laser exposures. These exposures are typically generated by using a series of lasers spanning the wavelength sensitivity range of the particular instrument, coupled with an integrating sphere to ensure uniform illumination on the array.

The phase histograms are fit using one or more of a series of models. Currently supported models are a Gaussian signal plus a Gaussian background, and a Gaussian signal plus an exponential background. If more than one model is specified, then all are attempted and the best fit used. When provided, a dark observation is used to subtract a background count rate from the phase histograms to yield a better fit.

Once the phase histograms are fit, the centers of each histogram are determined and fit as a function of laser wavelength with a linear or quadratic function to determine a final phase-wavelength calibration for each pixel (Fig. 3). The resulting fits constitute a wavelength calibration data product that consists of a per-pixel mapping of phase to wavelength, a set of associated calibration quality flags, and general solution metadata. A sample resolution map at 1.1 µm is shown in Figure 4. Individual observations are then calibrated using the appropriate (e.g., user-specified, temporally proximate) solution for a given observation by loading each pixel's phases and feeding them through the associated mapping. The resulting wavelengths, associated flags, and wavelength calibration metadata are then stored in the observation.

Pixel Calibration

The pixel calibration identifies 'hot', 'cold', and 'dead' pixels to be removed from further analysis. Pixels that register counts above a threshold for a specified number of intervals are flagged hot; those below a threshold, cold. Dead pixels are first determined from the detector's beammap, and the array image is then passed through a filter which iteratively replaces the dead pixels with the mean value of pixels in a surrounding box until none remain. This is done before the determination of hot and cold pixels so as to not skew the algorithms. Three algorithms are provided for determining the hot and cold thresholds and associated interval for each pixel.

Threshold: This method compares the ratio of the image to a moving-box median that excludes both the central pixel and any defective pixels. Ratios greater than some tolerance above or below the peak-to-median ratio are flagged hot or cold, respectively.

Median: The detector array's median count value is used as the global threshold. The tolerance interval is determined by applying a standard deviation moving-box filter to the counts image.

Laplacian: A Laplace filter (scipy.ndimage.filters.laplace) is applied to the image and the result adopted as the count threshold. The standard deviation of the filtered image is used for the tolerance interval.

Linearity Calibration

Each pixel has a finite dead time, imposed in firmware, that precludes detection of photons arriving within a small time interval following the preceding photon. The exact interval value depends on the quasi-particle recombination time of the superconducting film and the LC time constant of the resonator. For MEC, this dead time is set in firmware to be ∼10 µs. As a result, MKID detectors exhibit a nonlinear response that requires correction at high count rates (see Fig. 15 of van Eyken et al. (2015)). This correction is equal to (1 − Nτ/T)⁻¹, where N is the number of detected photons in time T for a pixel with dead time τ. The time T is set by the user and should be small so as to effectively determine the instantaneous count rate for each photon.

The need to compute and operate this calibration on per-pixel inter-photon arrival times can result in expensive computation, especially as single exposures may easily contain > 10⁹ photons. As the effect is less than one part in 1000 for typical count rates, the use of this step is generally discouraged.
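A sketch of the per-photon dead-time weight described above, for a single pixel's sorted arrival times, follows. The window parameter (T), the helper name, and the estimate of the instantaneous rate via a symmetric window are assumptions made for illustration; only the weight formula (1 − Nτ/T)⁻¹ comes from the text.

```python
import numpy as np

TAU = 10e-6  # firmware-imposed dead time, ~10 us for MEC

def linearity_weights(arrival_times, window=1e-3, tau=TAU):
    """Weight (1 - N*tau/T)^-1 per photon, N = counts within +/- T/2."""
    t = np.asarray(arrival_times)  # assumed sorted, in seconds
    n = np.searchsorted(t, t + window / 2) - np.searchsorted(t, t - window / 2)
    return 1.0 / (1.0 - n * tau / window)

# At ~2000 photons/s the correction is at the percent level.
times = np.sort(np.random.default_rng(0).uniform(0, 1, 2000))
print(linearity_weights(times).mean())  # close to 1
```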
Flat-field Calibration

Flat-field calibration has two modes: laser and white light. In both modes, a spectro-temporal cube is generated and used to determine the per-pixel wavelength response weights necessary to achieve a uniform response across the detector array. To calculate this weight, the cube is normalized by the integrated average flux at each wavelength, and then a user-specified number of the highest and lowest flux temporal bins are excluded to control for time-dependent contamination of the flat, e.g., radio frequency interference. The average of the remaining temporal bins is then fit as a polynomial function of wavelength and the fit saved as the flat-field calibration data product for later application. Data is flat-fielded by evaluating the polynomial at the wavelength of each photon and incorporating the resulting spectral weight into the photon table.

White Light Mode: Uses an observation of a uniform continuum source (e.g., twilight, dome) to generate the spectral cube. In this mode, the spectral sampling is determined by the nominal wavelength resolution of the associated wavelength calibration.

Laser Mode: Generates the spectral cube using a series of monochromatic laser exposures such as the ones used for the wavelength calibration (see the Wavelength Calibration step). This can be done either by positing that the laser frames are truly monochromatic (i.e., not imposing any wavelength cut on the exposures), or by using the wavelength calibration solution to use only photons within a small window around each laser wavelength. An example of the flat-field calibration using the laser mode applied to a real data set is shown in Figure 6.

Astrometric Calibration

The astrometric calibration determines the World Coordinate System (WCS) transformation parameters to convert an image from pixel (x, y) to on-sky (RA, Dec.) coordinates. First, a point spread function (PSF) fit is performed to determine the pixel location of each source in each image of the observation. Here an 'image' is defined by any single exposure where the pixel and sky locations of the sources are expected to remain constant (e.g., the telescope pointing does not change, the tip/tilt mirror is in the same position, etc.).

Each fit PSF location is then assigned an RA and Dec. through the use of an interactive tool where the user selects the approximate pixel location of the PSF for each source coordinate. The fit position of the nearest PSF to the selected coordinates is then assigned to the corresponding sky coordinate to generate a dictionary of pixel-sky coordinate pairs. When complete, the transformation between pixel and sky coordinates is determined by solving for the WCS parameters as follows.

First, the tip/tilt mirror to pixel mapping is determined by fitting a linear model to the PSF centers (px, py) and corresponding mirror positions (cx, cy). Here, the slopes µx,y give the number of pixels moved for a given tip/tilt mirror position change in either x or y, and ax,y is the pixel location corresponding to tip/tilt mirror position (0, 0).

Next, the x and y platescales (ηx, ηy) are found using the known separation and pixel displacement of the sources. The platescale is calculated for each image and the mean value saved.

Finally, an affine transform is applied to the pixel coordinate point consisting of the following steps: 1. Rotation by an angle Φ to account for the detector's rotation with respect to the telescope beam; 2. Translation by an amount (µx cx, µy cy), where cx and cy are the tip/tilt mirror positions; 3. Rotation by the telescope position angle (Θ). The (RA, Dec.) telescope offset is then added to the transformed pixel coordinate to complete the mapping. This results in two equations for each image (n_im) and each coordinate pair (n_s), giving a total of n_im · n_s · 2 equations. Each equation is solved for the last unknown WCS variable, the detector rotation Φ, using scipy.optimize.fsolve, and the mean value saved. Values of µx, µy, ηx, ηy, and Φ are all saved within the photon table metadata.
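The full pixel-to-sky mapping can be summarized with a short sketch. It follows the steps just listed, but the argument names and the point at which the platescale is applied are assumptions made for illustration, not the pipeline's implementation.

```python
import numpy as np

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def pixel_to_sky(p, conex, mu, platescale, phi, theta, sky_offset):
    """p: (x, y) pixel position; conex: tip/tilt mirror position (cx, cy);
    mu: slopes (mu_x, mu_y); platescale: (eta_x, eta_y) in deg/pixel;
    phi: detector rotation; theta: telescope position angle;
    sky_offset: telescope (RA, Dec) offset in degrees."""
    v = rot(phi) @ np.asarray(p, float)          # 1. detector rotation
    v = v + np.asarray(mu) * np.asarray(conex)   # 2. tip/tilt translation
    v = rot(theta) @ v                           # 3. position angle rotation
    return np.asarray(sky_offset) + np.asarray(platescale) * v
```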
Data Products

The calibrated photon tables output by the calibration stage of the pipeline consist of rows of individual photons with columns of time, resonator ID, wavelength, and weight. The resonator ID is a unique five- to six-digit identifying number given to each pixel to determine its location on the array in conjunction with the beammap. The weights are the multiplicative combination of the linearity and flat-field calibration steps. These photon tables may be used directly for analyses that rely on photon arrival time information, such as stochastic speckle discrimination (see Fitzgerald & Graham 2006; Gladysz & Christou 2008; Steiger et al. 2021).

The pipeline is also able to produce traditional astronomical outputs in the form of spectro-temporal cubes from individual observations or dithered mosaics, and movies, as described below. Spectral and temporal FITS cubes with arbitrary wavelength and time bin widths may also be generated from individual exposures.

Spectro-temporal Mosaics

A common observing strategy with MKID instruments is to dither using a tip/tilt mirror to fill in regions of dead pixels and increase the field of view. A mosaic from these dithered observations may be formed into a spectro-temporal FITS cube by combining each frame onto a common on-sky grid using the DrizzlePac implementation (Gonzaga et al. 2012) of the Drizzle algorithm (Fruchter & Hook 2002). Each frame is mapped onto a sub-sampled output image to generate a single combined image, a spectral cube, a temporal cube, or a spectro-temporal cube with arbitrary wavelength and temporal axes. This allows for the generation of contiguous outputs even with pixel yields of ∼75% on active feedlines (Walter et al. 2020); see Figure 7. As all presently supported MKID instruments operate without an image derotator, the sky rotation is generally removed from each frame, resulting in an output where every frame is North-aligned. An angular differential imaging mode is offered to facilitate interfacing with the Vortex Image Processing package for high-contrast direct imaging (VIP; Gomez Gonzalez et al. 2017), in which each frame in the sequence is aligned so that the first frame is North-aligned but the parallactic angle rotation between frames is preserved.

Movies

Movies may be output in GIF or MPEG-4 format and come in two types. The first shows subsequent frames with the desired temporal resolution and run time and is well suited to show rapidly changing features, such as diffracted speckle patterns that vary on millisecond timescales (Goebel et al. 2018). The second format integrates the series of frames and is helpful to illustrate how increasing exposure time affects the final output image.
MKIDPIPELINE: THE MKID PIPELINE PACKAGE

The MKID Pipeline is implemented as the Python 3 package mkidpipeline, which includes a corresponding conda environment definition file. The package provides a command-line program, mkidpipe, to process observational data and is configured via three YAML files: pipe.yaml, for general and step-specific settings; data.yaml, which defines the data; and out.yaml, which specifies output products. Instructions for basic pipeline setup and execution of a sample dataset are provided in the package README. Complex data processing is expected to require direct use of pipeline methods in a user script. The following subsections describe the pipeline implementation. Additional details may be found in the source code.

mkidpipeline is composed of the modules pipeline, photontable, definitions, config, and samples, along with the sub-packages steps, utils, data, and legacy. Example data and default configuration files are stored in the data and config directories, respectively.

Concept

The pipeline steps as outlined in §3 are implemented as modules in steps, with the requirement that each define a FlagSet at FLAGS, a StepConfig (see §4.2), and an apply() method. Steps may also implement fetch() when there is a need to compute a persistent calibration data product (CDP), e.g., a wavelength or flat-field calibration solution file. If implemented, fetch() will be provided a path that is guaranteed to be unique for the input data and step configuration used to generate the CDP. This allows multiple users to use these files from a shared location without duplication of effort.

Initialization and Configuration

Each step module with settings is required to implement a subclass of config.BaseStepConfig named StepConfig. In its simplest form, this merely consists of a class-member listing of setting names, default values, descriptions, and a YAML tag, though support is provided for additional verification of parameters that may have complex inter-dependencies or depend on settings from other steps.

The pipeline places a configuration object for programmatic and interactive use at config.config after initial configuration (e.g., by loading a pipe.yaml). Access to a fully populated, isolated configuration object is available via the PipelineConfigFactory. This allows individual steps to not worry about whether the pipeline has been configured via a file and ensures that required step defaults are present. It also means that any accidental mutations of the configuration do not propagate to other steps or processes. The configuration object supports parameter inheritance; however, default values for individual steps can result in unexpected behaviour, as the existence of a child default will take precedence over an explicitly set parent setting. When imported, the pipeline module loads steps from the steps sub-package, registering any defined configuration classes with the pipeline YAML loader. These are then available to build a config.PipeConfig of top-level and step-specific settings via pipeline.generate_default_config() or by loading a config file with config.configure_pipeline().

In addition to configuration options, the pipeline maintains a set of named flags that may be associated with individual pixels. Flag support is achieved by requiring steps to list any flags they would like to set as a tuple of strings named FLAGS. These are parsed when the steps are loaded and used to build a FlagSet object at config.PIPELINE_FLAGS that is capable of converting between flag names and bitmasks. The FlagSet is implemented in such a manner as to ensure forward compatibility with pipeline data as new flags are added.
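To make the module contract concrete, the following skeleton shows the shape of a hypothetical step. FLAGS, the StepConfig subclass, and the apply()/fetch() entry points come from the description above, while the tag and key names inside StepConfig are invented for illustration and may differ from the real base-class conventions.

```python
# steps/examplestep.py -- illustrative skeleton of a pipeline step module.
from mkidpipeline import config

# Flags this step may set, parsed at load time into the pipeline FlagSet.
FLAGS = ('examplestep.saturated', 'examplestep.unstable')

class StepConfig(config.BaseStepConfig):
    """Setting names, defaults, and descriptions (keys are illustrative)."""
    yaml_tag = u'!examplestep_cfg'
    REQUIRED_KEYS = (('threshold', 5.0, 'flagging threshold in sigma'),)

def fetch(dataset):
    """Compute and cache a persistent calibration data product (CDP)."""
    raise NotImplementedError

def apply(observation):
    """Apply the step (or its CDP) to an observation's photon table."""
    raise NotImplementedError
```

The flag-to-bitmask bookkeeping can likewise be sketched in a few lines; the class below is a stand-in for the real FlagSet, with invented flag names.

```python
class FlagSetSketch:
    def __init__(self, names):
        self.bits = {n: 1 << i for i, n in enumerate(names)}  # bit i <-> name

    def bitmask(self, flags):
        mask = 0
        for f in flags:
            mask |= self.bits[f]
        return mask

    def names(self, mask):
        return {n for n, b in self.bits.items() if mask & b}

fs = FlagSetSketch(['pixelcal.hot', 'pixelcal.cold', 'wavecal.failed'])
assert fs.names(fs.bitmask(['pixelcal.hot'])) == {'pixelcal.hot'}
```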
mkidpipeline.samples provides sample data sets and outputs for both programmatic reference and use by the pipeline to generate default data.yaml and out.yaml configuration files during initialization. The resulting files provide comprehensive samples with sensible defaults that may be used to test the pipeline. The raw data is not included due to its extremely large size.

Data Specification

definitions provides classes to manage the description and use of calibration and science data. Data definitions may be created either via class instantiation or via YAML, where support is provided for linking unnested data descriptions by name. For example, an observation may specify a wavecal to use via the name of a top-level wavecal (i.e., one not defined explicitly within a different observation) within the same data.yaml. Though possible, it is not generally advised to nest definitions. The MKIDObservingDataset is used to represent collections of data definitions and defines properties to access key groupings of data: <stepname>able (e.g., wavecalable) are definitions that can have the step applied, and <thing>s (e.g., wavecals) are definitions of that thing.

All observational data are sub-classes or collections of MKIDTimerange objects. This object is defined by a name, a UTC start time (as a Unix timestamp), a stop time or duration, an optional nested MKIDTimerange for a dark exposure, and an optional set of header key overrides. It provides support for metadata retrieval from instrument logs, access to the associated detector beammap and HDF5 path, and convenience methods for accessing the table of photon data (cf. §4.6).

Scientific observations are instances of MKIDObservation, which requires the specification of a wavecal, flatcal, and wcscal. Dithered observations are represented by MKIDDither, which has similar calibration requirements to MKIDObservation. The dither, however, takes a single data specification which may be either a list of MKIDObservations, a timestamp within a dither log, or the fully qualified path to a dither log. In the latter two cases, the list of MKIDObservations is built from the dither, the specified calibrations, and any extra header information.

All calibration data sets (including MKIDWavecal, MKIDFlatcal, and MKIDWCSCal) include the CalibMixin mix-in. This provides support for accessing the input time ranges as well as the creation of unique hash strings to identify calibrations made with specific data and settings. Wavelength calibration data sets are represented by an MKIDWavecal and take a list of MKIDTimeranges named by laser wavelength (e.g., '1000 nm') as data. Flat-field calibrations (MKIDFlatcal) take either a list of MKIDObservations or the name of an MKIDWavecal as data input. If an MKIDWavecal name is provided, then a wavecal_duration and wavecal_offset must be given. These specify the duration and starting offset, relative to the wavecal's photon tables, used to create new wavelength-calibrated tables for the flat-field calibration.
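Programmatic data definitions might look like the sketch below. The class names match the description above, but the keyword arguments and timestamps are assumptions made for illustration; the definitions module documents the actual signatures.

```python
from mkidpipeline import definitions as d

# Illustrative only: argument names are assumed, timestamps invented.
laser_1000 = d.MKIDTimerange(name='1000 nm', start=1602048000, duration=60)
wavecal = d.MKIDWavecal(name='wavecal0', data=(laser_1000,))

# Science observations link their calibrations by top-level name.
target = d.MKIDObservation(name='mytarget', start=1602050000, duration=600,
                           wavecal='wavecal0', flatcal='flatcal0',
                           wcscal='wcscal0')
```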
Astrometric calibration data is represented by an MKIDWCSCal and takes either a platescale, an MKIDDither, or an MKIDObservation as data. A pixel_ref and conex_ref are also required, defining a tip/tilt mirror home position and a corresponding pixel location, if applicable. If a dither or observation is used, source_locs must list the sky coordinates of the targets.

Output Specification

Individual outputs are defined by a named MKIDOutput. This class is defined by a name, a data string specifying an MKIDDither or MKIDObservation, and a kind which specifies the type of output (e.g., movie, drizzle). Optional keys include minimum and maximum wavelength bounds (min_wave, max_wave), exclude_flags, a duration, a filename which specifies the name of the output file, units (photons or photons/s), use_weights which weights photons by their pipeline weights, adi_mode which preserves parallactic rotation between drizzled frames (cf. §3.3.1), a timestep which will yield a temporal cube if non-zero, a wavestep which will yield a spectral cube if non-zero, and fields that determine which calibration steps will be applied to the output (e.g., wavecal). If a movie is requested, a movie_runtime is also required. MKIDOutput provides the pipeline with the properties wants_<outputtype> and output_settings to help determine what output types are needed and what settings need to be used with outputs.generate() (cf. §4.5).

The MKIDOutputCollection manages the outputs and defines relevant properties to be used by outputs.generate(). These include to_<stepname> (e.g., to_wavecal), which gather all of the data definitions needing a particular step given the current configuration, data, and outputs requested. It also provides properties similar to MKIDObservingDataset that filter a potentially large data set down to the subset needed for a particular set of outputs.
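A single out.yaml entry might therefore resemble the following. The key names mirror the attributes listed above, while the values and exact YAML layout are illustrative; the sample files generated at initialization are authoritative.

```yaml
- name: target_scube
  data: mytarget              # an MKIDObservation or MKIDDither by name
  kind: drizzle               # output type, e.g. movie or drizzle
  min_wave: 950 nm
  max_wave: 1375 nm
  wavestep: 85 nm             # non-zero -> spectral cube
  timestep: 0                 # non-zero -> temporal cube
  units: photons/s
  use_weights: true
  exclude_flags: [pixelcal.hot, pixelcal.cold]
  filename: target_scube.fits
```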
Execution

The command-line program mkidpipe provides arguments for help, initialization, input verification, and pipeline execution. On initialization, it creates a commented set of pipeline YAML files in the working directory populated with all available settings and a set of default data and output definitions. Re-invoking mkidpipe will validate the files and begin the data reduction process.

On execution, mkidpipe configures the pipeline via config.configure_pipeline() and then loads the data and output YAMLs by instantiating an MKIDOutputCollection. The data set and outputs are then validated via outputs.validation_summary and any issues presented to the user for correction. The program then proceeds to call first fetch() and then apply() for each applicable step required for each output. This can be seen diagrammatically in Figure 4.1. Finally, the entire MKIDOutputCollection is fed to outputs.generate(). This function executes photontable.Photontable.getfits() for a spectral or temporal FITS cube, movies.fetch() for a GIF or MPEG-4 output, or drizzler.form() for a combined spectro-temporal mosaic FITS cube, as needed. Existing outputs are not, by default, overwritten.

Core Modules and Libraries

Much of this functionality is mediated by the Photontable class described below. The pipeline also depends on AstroPy (Astropy Collaboration et al. 2018), PyTables, and the Python 2/3 compatible library mkidcore. mkidcore is used for tasks such as logging, flagging, parsing instrument readout information, and managing instrument-specific settings. This package ensures compatibility may be maintained across a number of instrument readout systems without editing the pipeline.

The photontable module implements Photontable, which handles all interaction with the underlying photon data, loading data from and manipulating the underlying HDF5 file representation. Key functionality is provided to (un)flag pixels, interact with observing metadata, select subsets of photons by wavelength range, time range, and pixel, and form FITS images and cubes (with associated WCS information, if available). Functionality is generally dependent upon what pipeline processing has been completed.

Interactive Use

Users are able to import the mkidpipeline package and create data and output definitions programmatically, in a manner similar to that done for pipeline initialization in mkidpipeline.samples. Step operations and numerous utility functions are then available to be used interactively on the data from a terminal.

SUMMARY

The MKID Pipeline is an open-source, extensible pipeline for the reduction and calibration of MKID data. It takes binary per-pixel time series of photon-induced phase shifts as its input and can perform cosmic ray rejection, linearity calibration, wavelength calibration, flat-fielding, bad pixel masking, and astrometric calibration. This results in calibrated spectro-temporal FITS cubes which can be integrated with traditional astronomical tool chains for scientific analysis. Additionally, unique MKID-specific data products, such as time-tagged photon lists, can be easily accessed and manipulated for the use and development of new post-processing techniques that utilize photon arrival time statistics.

The pipeline is designed with automation in mind to allow users to run basic reductions from the command line, with unique reductions requiring only the editing of a few configuration files. It also allows future developers to add new algorithms and calibration steps in a modular framework, serving as a base for future MKID instruments and mixed-instrument reductions.

Figure 3. Single-pixel count rate histograms for each laser wavelength as well as the calibration solution fit (bottom right). The small Gaussian bump at low phases is likely due to an IR leak around 2.7 µm in the filter stack of the instrument (MEC) used to take this data.

Figure 4. Resolution image at 1.1 µm for the detector in MEC as of the time of this publication. The median energy resolution (R) across the array at this wavelength is 3.93, excluding dead pixels. This particular detector has three defective feed lines (each containing 2000 pixels), which results in the large strip of dead (R = 0) pixels seen to the left of the image.

Figure 5. Subset of an MKID array with hot, cold, and dead pixels labeled. The threshold method was used in the determination of the pixel flags with default settings.

Figure 6. Top: Percent variability in pixel response before and after applying the flat-field calibration. This is calculated by subtracting and then dividing the median counts registered on the detector from each pixel. The structure seen in this MEC data is dominated by vignetting from the optical system. Bottom: Histograms of the percent variability, with the uncorrected pixel response shown in blue and the corrected pixel response shown in orange.

Figure 7.
Left: MEC image of the HIP 109427 system with each dither position color-coded by its order in the sequence. The image of HIP 109427 (behind a coronagraph), the satellite spots, and the stellar companion are shown in grey scale to better show the frame boundaries. Right: Exposure coverage footprint for the same data set. Here bright regions have more effective exposure time than darker regions. Current dithering scripts for MEC enforce a rectangular dithering pattern, leading to a non-uniform footprint, but future work will optimize this pattern for maximal uniform coverage.

… is supported by a grant from the Heising-Simons Foundation. N.Z. was supported throughout this work by a NASA Space Technology Research Fellowship. K.K.D. is supported by the NSF Astronomy and Astrophysics Postdoctoral Fellowship program under award #1801983. The authors would like to thank Clarissa Rizzo, Joshua Breckenridge, and Xiaofei Zhang, who helped with testing at various stages of pipeline development and improved the quality of this work.

User inputs are in purple. Calibration data products themselves are shown as blue cylinder faces. Rectangles with a wavy bottom edge represent files, where the purple are user-input configuration files and the yellow are the files representing calibrated output data products. All calibration steps are optional with the exception of the bolded 'Metadata Attachment' and 'Wavelength Calibration' steps, without which product generation cannot occur. The dashed stages of the pipeline may be completely omitted. The bracketed numbers in the 'Calibration Pipeline' boxes denote pipeline stages that must be completed to generate the respective CDPs. Flat-fielding can be accomplished via reference to wavelength calibration data, denoted by the dashed arrow.
Control of microbiological contamination and content of cations in wastewater of grain processing enterprises in Uzbekistan

The microbiological contamination of wastewater leaving the grain washing equipment of the flour mill of JSC "Galla Alteg" has been studied, and the results of determining ions using the ion chromatographic method are also presented. The relationship between the hydrogeochemical characteristics of the grain growing environment and the chemical composition of the wastewater has been studied. It has been established that the ratio of the total concentration of cations in the wastewater is specific. This provides information about the type of adsorbent that can be used in the future to purify this type of wastewater.

Introduction

Humidification and washing of grain are processes that prepare grain for grinding and improve the degree of its food use. During moistening and subsequent tempering, physical and biological changes occur in the grain, as a result of which the separation of the shells from the grain is facilitated with minor losses of endosperm; during washing, the surface of the grain is cleaned, heavy and light impurities and shriveled grains are separated, and microorganisms are removed. To moisten and wash grain at flour mills, the following are used: machines in which grain is moistened with cold or warm water in order to change its physical properties during hydrothermal treatment; machines for moistening grain with steam before peeling or flattening when processing various crops into cereals; and machines that separate impurities that differ from grain in hydrodynamic properties [1]. The industry produces two types of humidifying machines: water-jet machines, which add water in a dripping state, and water-spraying machines, which add water as a spray, as well as combined washing machines with a vertical squeezing column [2-5]. The use of water-jet machines in the flour milling industry makes it possible to accurately dose water in proportion to the amount of grain. However, uniform wetting of the grain surface is not achieved, and therefore devices are required that allow additional mixing of the moistened grain mixture. More uniform wetting of the grain surface is achieved in machines in which water is added to the grain in a sprayed state [6-8]. Water consumption in water-jet humidifying machines ranges from 2 to 8 liters per ton of grain, depending on the degree of moistening, and in water-spraying machines from 25 to 50 liters per ton of grain [9-11].

Materials and methods

In combined washing machines, water serves as a medium for separating impurities that are difficult to separate using the dry method of grain cleaning. Hydroseparation is based on the difference in the falling speed of grain and impurities in water. It is advisable to feed grain into the washing bath in the zone of formation of upward flows of water, i.e., against the direction of rotation of the grain augers. When grain enters the downward flow zone, i.e.,
in the direction of rotation of the augers, a large amount of grain enters the destoning augers. To filter washing water in order to extract grain waste, separators are used, which are installed in the grain cleaning department of a flour mill directly above the press. The separator is fed with waste from several washing machines or humidifying-husking machines. A screw press is also used to squeeze water out of washing waste after processing it in the separator. In the press, waste from the outlet pipe enters through a rubber sleeve into the receiving housing and onto the auger. The pre-squeezed water passes through a sieve into the pan and is discharged into the sewer [12]. Washing waste, pressed to the required moisture content, is discharged through a pipe and sent through gravity pipes for drying. To dispose of the waste, it is necessary to dry it. At flour mills equipped with complete equipment, drying is carried out in non-standardized installations created on the basis of commercially produced steam screw dryers. When designing food enterprises, in particular grain processing enterprises, the cost of water spent on production and household needs is one of the main factors determining the economic efficiency of the enterprise [13]. An increase in tariffs for housing and communal services affects the retail sales price of food products. The consumption of cold and hot water by an enterprise is determined by the following parameters: daily consumption, annual consumption, and consumption per hour of maximum load. The calculation of these parameters is based on consumption rates, which are given per 1 ton of grain, and on the results of technological calculations of the cost of hydrothermal treatment. Grain processing enterprises have established water consumption standards, which include all additional water costs for operating personnel, consumers, cleaning of premises, and other needs. This norm has been included in the current document without changes [5-8]. However, no justification for the standards is given; moreover, over such a long period of time, not only prices but the very attitude towards water consumption have changed significantly. Therefore, the calculation of the amount of consumption and its structure using the example of a specific enterprise is of undoubted theoretical and practical interest.
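For illustration, the three demand parameters can be derived from a per-ton norm in a few lines; all numbers below are placeholder assumptions, not data for a particular enterprise.

```python
NORM_L_PER_T = 50        # assumed consumption norm, liters per ton of grain
CAPACITY_T_DAY = 300     # assumed mill throughput, tons per day
WORKING_DAYS = 330       # assumed working days per year
PEAK_FACTOR = 1.5        # assumed hourly peak-load coefficient

daily_m3 = NORM_L_PER_T * CAPACITY_T_DAY / 1000     # daily consumption, m3
annual_m3 = daily_m3 * WORKING_DAYS                 # annual consumption, m3
peak_m3_h = daily_m3 / 24 * PEAK_FACTOR             # max hourly load, m3/h
print(daily_m3, annual_m3, round(peak_m3_h, 2))     # 15.0 4950.0 0.94
```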
Results

The food industry is directly related to environmental problems, because recently there has been much discussion of the production of environmentally friendly products. Intensive processing and large volumes of processed products can have potential environmental impacts. As for the food industry, the focus is on environmental pollution with organic rather than toxic substances. Inadequate pollution control or ineffective pollution prevention measures can result in contamination of public infrastructure and adverse impacts on local ecosystems. The function of production loss control is to improve production yield and production efficiency while reducing waste and solving environmental pollution problems. Drinking water is of great importance for all life on the planet. In the food industry, a large amount of water is also used for technological purposes, for example, for pre-cleaning of raw materials, washing, decolorization, pasteurization, cleaning of process equipment, and cooling of finished products. In particular, obtaining high-quality, environmentally friendly flour with minimal labor and energy costs at grain processing and flour production enterprises is the main task of mill enterprises. The production of flour and cereals is a complex technological process involving various technical means of mechanization. Nevertheless, the issues of preparing grain before milling, in particular washing grain with water, have not been fully resolved at small-scale enterprises. Currently, given the complexity of technological processes and the high degree of contamination of raw materials, the water consumption for washing grain mass averages 2-3 cubic meters per ton of grain, and most mills have therefore switched to the dry method of cleaning the grain mass. But this does not allow the surface of the grain to be completely cleaned and affects its technological properties. An undoubted advantage of washing grain with water is the cleaning of the outer layers of the grain from dust and microorganisms, as well as the separation of impurities that differ from the grain in specific gravity. Microorganisms and heavy metal compounds are the main pollutants of mill effluents from grain washing. The maximum permissible concentration of arsenic, antimony, and indium in water is 0.05 mg/l, and of gallium 0.1 mg/l. Traditional technological schemes for treating wastewater from semiconductor production usually provide for the separation of arsenic compounds and do not take into account emissions of gallium, indium, and antimony compounds, which are comparable to arsenic in toxicity. Waste technological solutions generated during the production of semiconductors, containing arsenic and gallium compounds in concentrations up to several tens of g/l, are neutralized using water treatment technology. The most widely used is the lime-phosphate version of the reagent method, which leads to the formation of an excess volume of sludge, high mineralization of the treated wastewater, and significant consumption of reagents, drawbacks that burden subsequent processing. Technical processes developed due to the lack of sufficient financial resources at enterprises, for example those relying on existing equipment for chemical treatment and recycling of industrial waste, imposed stringent requirements for minimizing capital and operating costs during their implementation. Most wastewater is dangerous to human life, and there is a simple explanation for this: pathogenic microorganisms.
They can cause many gastrointestinal diseases, including such dangerous diseases as dysentery, cholera, typhoid fever, and others. Therefore, to determine the level of danger of wastewater, analysis of its qualitative and quantitative pollution of one type or another is used. In the course of our research at the Institute of Microbiology of the Academy of Sciences of the Republic of Uzbekistan, microbiological analyses of wastewater coming out of the grain washing equipment of the Galla Alteg JSC flour mill were carried out. For the microbiological analysis of water samples, methods generally accepted in soil and water microbiology were used.

To study the numbers of the main physiological groups in the water, samples were taken from benthic and plankton water layers. The microorganisms in the water under study were cultivated and examined on the following media: ammonifying bacteria on GPA nutrient medium, phosphorus-decomposing bacteria on Pikovsky solid nutrient medium, oligonitrophils and free-living nitrogen-fixing bacteria on Ashby nutrient medium, micromycetes on Czapek solid nutrient medium, and actinomycetes on starch-ammonia nutrient medium. A suspension was prepared from the water samples taken for microbiological analysis. To do this, 1 ml was taken from a water sample using a pipette and placed in 9 ml of water in a sterilized test tube. This process was continued serially until a dilution of 1:100,000 was reached. Then 1 ml of liquid from a test tube was inoculated onto special solid selective nutrient media in a Petri dish in three replicates: ammonifiers on meat-peptone medium, phosphorus-degrading bacteria on Pikovsky medium, oligonitrophils and free nitrogen-fixing bacteria on Ashby medium, micromycetes on Czapek medium, and actinomycetes on starch-ammonia nutrient medium, cultivated and studied using the "dilution" principle (Table 1). As a result of the microbiological analysis, the number of ammonifying bacteria was 5.3×10⁶ CFU/ml in 1 ml of water; species of Bacillus and Micrococcus bacteria were found on the GPA nutrient medium. It was established that the total number of phosphorus-decomposing bacteria was 6×10³ CFU/ml. The number of oligonitrophilic microorganisms growing in a nitrogen-free environment was 5.4×10⁴ CFU/ml, and the number of free-living nitrogen-fixing bacteria was 9×10¹ CFU/ml. The total number of micromycetes was 2×10² CFU/ml. Actinomycetes and yeast bacteria were not detected. Thus, in this microbiologically analyzed water sample, almost all the main physiological groups of microorganisms (except for actinomycetes) were observed.
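The plate counts behind these figures follow the standard dilution arithmetic, sketched below; the colony count used in the example is invented to show how a 10⁻⁵ dilution reproduces the reported ammonifier density.

```python
def cells_per_ml(mean_colonies, dilution_exponent, plated_ml=1.0):
    """Cells/ml = mean colony count x 10^dilution / volume plated (ml)."""
    return mean_colonies * 10 ** dilution_exponent / plated_ml

# e.g. an average of 53 colonies on plates from the 1:100,000 dilution:
print(cells_per_ml(53, 5))  # 5.3e6 cells/ml, as reported for ammonifiers
```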
Further studies determined the content of cations and anions using the ion chromatographic method. Controlling the content of ionic forms of toxic and biogenic components in natural water is important for environmental protection. Environmental monitoring is necessary because anthropogenic loads are increasing and the exploitation of known deposits is intensifying, which leads to a deterioration in the quality of natural drinking water. It is important to establish general patterns of distribution of microcomponents depending on the content of matrix ions, as well as the dynamics of changes in the concentrations of toxic compounds and nutrients in underground drinking water from various deposits. The relevance of solving such problems is due, on the one hand, to environmental problems of water resources, industrial wastewater treatment and reuse, and on the other hand, to issues of compliance of drinking water brands with the composition declared by the manufacturer. Conditions conducive to the formation of the chemical composition of wastewater used in the hydrothermal treatment at grain processing enterprises are distributed according to the latitudinal and vertical zoning of grain crops. Such processes are probabilistically determined and limited to a certain number of geochemical situations, and therefore they are predictable. Identifying the specific chemical composition of wastewater after its use in a technological process, and establishing the features of the natural functioning of mineral springs based on characteristic indicators, is an important task when studying the possibility of returning the water to the technological process (Figure 1). In these studies, the content of cations was determined using the ion chromatographic method. The work was performed on an ion chromatograph with a conductometric detector, using Shodex IC YS-50 4.6 × 125 mm columns, with a mobile phase of 0.23 g of HNO3 per liter brought to the mark with water (filtered and degassed), a flow rate of 1.0 ml/min, the column at room temperature, and a sample volume of 20 µl. The optimal ratio of ion concentrations, being one of the criteria for the physiological usefulness of the water used for washing grain, requires that the return flow be purified before hydrothermal treatment is applied. The following water samples were analyzed: wastewater coming out of the grain washing equipment of the Galla Alteg JSC flour mill. The sample was collected in a plastic container. Samples were not filtered. Chromatography was carried out immediately (on the day of collection or on the day of receipt). Before analyzing the samples, calibration was carried out using a combined solution of six anion standards (Table 2). The software identifies and quantifies each analyte by integrating peak areas.
Conclusions

In conclusion, this study focused on investigating the microbiological contamination of wastewater discharged from the grain washing equipment at the flour mill of JSC "Galla Alteg". Additionally, the chemical composition of the wastewater was analyzed using the ion chromatographic method. The relationship between the hydrogeochemical characteristics of the grain growing environment and the wastewater's chemical composition was explored. The findings revealed a specific ratio of total cation concentrations in the wastewater, indicating the presence of distinctive chemical components. This observation holds significant potential for future wastewater treatment processes, as it provides valuable information regarding the selection of appropriate adsorbents for purification purposes. Understanding the microbiological contamination and chemical composition of the wastewater is crucial for ensuring effective treatment and minimizing environmental impacts. By identifying the specific cation concentrations, suitable adsorbents can be chosen to target and remove the contaminants efficiently. The results of this study contribute to the development of strategies for wastewater management and treatment in the grain industry. Implementing appropriate purification techniques based on the identified chemical components will aid in mitigating the environmental footprint associated with flour mill operations. Further research and experimentation should be conducted to optimize the purification process and evaluate the effectiveness of various adsorbents in removing the specific contaminants identified in the wastewater. By continuously improving wastewater treatment methods, the flour mill industry can enhance its sustainability and environmental stewardship while maintaining high product quality standards.

Table 1. Number of microorganisms of the main physiological groups in water samples, per 1 ml of water.

Table 2. Results of the ion chromatographic determination of cations in the analyzed water sample. Thus, in studies using the ion chromatographic method, almost all types of cations (except lithium) were observed in the analyzed water sample.
Constructive method for solvability of Fredholm equation of the first kind

The solvability and construction of the general solution of the Fredholm integral equation of the first kind are among the insufficiently explored problems in mathematics. There are various approaches to this problem; we note the following methods for solving the ill-posed problem: the regularization method, the method of successive approximations, and the method of undetermined coefficients. The purpose of this work is to create a new method for the solvability and construction of a solution of the integral equation of the first kind. As follows from the foregoing, the study of the solvability and construction of a solution of the Fredholm integral equation of the first kind is topical. A new method for studying the solvability and constructing a solution of the Fredholm integral equation of the first kind is proposed, and solvability conditions and a method for constructing an approximate solution are obtained.

The solvability and construction of the general solution of the Fredholm integral equation of the first kind belong to the insufficiently explored problems in mathematics. As follows from [22], the operator K with kernel from L2(S0) is bounded (‖K‖ ≤ P) and completely continuous, transferring every weakly convergent sequence into a strongly convergent one. The inverse operator is not bounded [17], and the equation Ku = f is not solvable for all f ∈ L2. This leads to the fact that a small error in f can lead to an arbitrarily large error in the solution of equation (1.1).

The famous theoretical results on the solvability of equation (1.1) refer to the case when K(t, τ) = K(τ, t), i.e., equation (1.1) with a symmetric kernel. One of the main results on the solvability of equation (1.1) is Picard's theorem [18]. However, to apply this theorem it is necessary to prove the completeness of the eigenfunctions of the symmetric kernel.

Thus, the solvability and construction of a solution of the integral equation (1.1) is a little-studied, complex ill-posed problem. There are various approaches to this problem. We note the following methods for solving the ill-posed problem.

• The regularization method [19], based on reducing the original problem to a correct (well-posed) problem. For regularization it is necessary to impose a priori requirements on the original data of the problem. In the works [16, 20], methods for solving the correct problem after regularization are proposed. Unfortunately, the additional requirements imposed on the original data are not always satisfied, and the methods for solving the correct problem are time-consuming.

• The method of successive approximations [15] for solving equation (1.1). The method is applicable when K(t, τ) is a symmetric positive kernel in L2, and it requires determination of the least characteristic number.

• The method of undetermined coefficients [21], in which solutions of equation (1.1) are sought as a series. However, in general, the determination of the coefficients is extremely difficult.

The solvability and construction of a solution of equation (1.1) is topical, as follows from the discussion above. The purpose of this work is to create a new method for the construction and solvability of a solution of the integral equation of the first kind.
Solvability of Fredholm integral equation of the first kind

Problem statement. We consider the integral equation of the form (2.1): Ku ≡ ∫_{I1} K(t, τ) u(τ) dτ = f(t), t ∈ I = [t0, t1], where K(t, τ) = ‖Kij(t, τ)‖, i = 1, …, n, j = 1, …, m, is a known matrix of order n × m whose elements Kij(t, τ) are measurable functions belonging to the class L2 on the set S1 = {(t, τ) ∈ R² : t ∈ I, τ ∈ I1}, u(τ) ∈ L2(I1, R^m) is the sought function, and f(t) ∈ L2(I, R^n) is a given function.

Problem 2.5. Find an approximate solution of the integral equation (2.1).

As follows from the problem statement, the solvability and construction of the solution of the matrix Fredholm integral equation of the first kind are considered, as well as the construction of an approximate solution. The results hold for the matrix Fredholm integral equation of the first kind with both non-symmetric and symmetric kernels.

We consider the solutions of Problems 2.1 and 2.2 for the integral equation (2.1). The solutions of Problems 2.1 and 2.2 can be reduced to the study of the extremal problem: minimize the functional (2.2), J(u) = ∫_I |(Ku)(t) − f(t)|² dt, under the condition (2.3), u(·) ∈ L2(I1, R^m).

Theorem 2.6. Let the kernel K(t, τ) be measurable and belong to the class L2 in the rectangle S1. Then the functional (2.2) under condition (2.3) is strongly convex.

Proof. From (2.2), the increment of the functional for u, u + h ∈ L2(I1, R^m) can be computed directly, and from (2.9) it follows that J′(u) is defined by formula (2.4); this implies the inequality (2.5). We show that the functional (2.2) under condition (2.3) is convex. In fact, for every u, w ∈ L2(I1, R^m) the inequality (2.6) is valid, which means that the functional (2.2) is convex. As follows from (2.4), J″(u) is defined by formula (2.7). From (2.7) and (2.8) it follows that the functional J(u) is strongly convex in L2(I1, R^m). The theorem is proved.

Theorem 2.7. Let, for the extremal problem (2.2), (2.3), the sequence {un(τ)} ⊂ L2(I1, R^m) be constructed by the gradient algorithm of [5]. Then: (i) the numerical sequence {J(un)} decreases monotonically and the gradients J′(un) tend to zero; (ii) the sequence {un} weakly converges to the set U* = {u* ∈ L2(I1, R^m) : J(u*) = J* = inf J(u)}; (iii) the following rate of convergence is valid: 0 ≤ J(un) − J* ≤ m0/n, n = 1, 2, …, where J* = J(u*) (estimate (2.10)); (iv) if the inequality (2.8) is satisfied, then the sequence {un} ⊂ L2(I1, R^m) strongly converges to the point u* ∈ U*; (v) in order that the Fredholm integral equation of the first kind (2.1) have a solution, it is necessary and sufficient that J(u*) = 0, u* ∈ U*; in this case the function u*(τ) is a solution of (2.1); (vi) if the value J(u*) > 0, then the integral equation (2.1) has no solution for the given f(t).

Proof. From the construction of the iterations and the inclusion {un} ⊂ M(u0), it follows that the numerical sequence {J(un)} decreases monotonically and J′(un) → 0 as n → ∞; the first statement of the theorem is proved. Since the functional J(u) is convex on L2, the set M(u0) is convex, with diameter D. Since M(u0) is a bounded convex closed set in L2, it is weakly bicompact. The convex continuously differentiable functional J(u) is weakly lower semicontinuous. Consequently, on the set M(u0) the lower bound of the functional J(u) is reached at a point u* ∈ U*, and the sequence {un} ⊂ M(u0) is minimizing; thus, the second statement of the theorem is proved. The third statement of the theorem follows from the stated inclusions, and from the corresponding inequalities follows the estimate (2.10), where m0 = 2D²l; the fourth statement of the theorem is proved likewise. As follows from (2.2), the integral equation (2.1) has a solution if and only if the value J(u*) = 0, where u* ∈ U*; otherwise u* is not a solution of the integral equation (2.1). The theorem is proved.
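For reference, the presumed forms of the problem, the gradient, and the minimizing sequence are collected below. These are reconstructed from the surrounding definitions, since the displayed equations did not survive in the source; the step-size bound uses the strong-convexity constant l of Theorem 2.6.

```latex
\begin{gather}
  Ku \equiv \int_{I_1} K(t,\tau)\,u(\tau)\,d\tau = f(t), \quad t \in I,
  \tag{2.1}\\
  J(u) = \int_{I} \bigl| (Ku)(t) - f(t) \bigr|^{2}\,dt \;\to\; \min,
  \qquad u \in L_2(I_1, R^m), \tag{2.2--2.3}\\
  J'(u)(\tau) = 2\int_{I} K^{T}(t,\tau)\,\bigl[(Ku)(t) - f(t)\bigr]\,dt,
  \tag{2.4}\\
  u_{n+1} = u_n - \alpha_n J'(u_n), \qquad 0 < \alpha_n < \tfrac{2}{l}.
\end{gather}
```

A discretized numerical sketch of this scheme follows; the quadrature, the step-size rule, and the synthetic kernel are illustrative assumptions, not part of the method as stated.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(40, 60))              # discretized kernel K(t_i, tau_j)
f = K @ np.sin(np.linspace(0, np.pi, 60))  # consistent right-hand side

u = np.zeros(60)
alpha = 0.5 / np.linalg.norm(K, 2) ** 2    # step inside (0, 2/l)
for _ in range(5000):
    u -= alpha * 2 * K.T @ (K @ u - f)     # u_{n+1} = u_n - alpha * J'(u_n)
print(np.linalg.norm(K @ u - f) ** 2)      # J(u_n) -> J* = 0: (2.1) solvable
```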
We consider the case when the sought function u(τ) belongs to a given set U ⊂ L2(I1, R^m). Solutions of Problems 2.3 and 2.4 can be obtained by solving the optimization problem: minimize the functional (2.12) under the condition (2.13).

Theorem 2.8. Let the kernel K(t, τ) be measurable and belong to L2 in the rectangle S1 = {(t, τ) ∈ R² : t ∈ I, τ ∈ I1}. Then: (i) the functional (2.12) under condition (2.13) is continuously Fréchet differentiable, and the gradient of the functional admits an explicit representation analogous to (2.4).

Let a complete system in L2 be given, such as 1, t, t², …, or any complete orthonormal system {ϕk(t)}, k = 1, 2, …, t ∈ I = [t0, t1]. Since the condition of Fubini's theorem on changing the order of integration holds, equation (2.1) reduces to the system of moment equalities (see (3.1)), where pN(τ) ∈ L2(I1, R^m) is an arbitrary function. The proof for finite N can be found in [3].

Conclusion

In this work a new method for studying the solvability and constructing a solution of the Fredholm integral equation of the first kind is proposed. Necessary and sufficient conditions for the existence of a solution for a given right-hand side are obtained in two cases: when the sought function belongs to the space L2, and when it belongs to a given set in L2. Solvability conditions and a method for constructing an approximate solution of the Fredholm integral equation of the first kind are obtained. With this method, in comparison with the well-known methods, an approximate solution of the Fredholm integral equation of the first kind can be obtained. Several theorems about the solvability of the equation are proved. Further continuation of research in this direction and the development of applications on the basis of the method are planned.
An Energy-Efficient Multiwire Error Control Scheme for Reliable On-Chip Interconnects Using Hamming Product Codes

We propose an energy-efficient error control scheme for on-chip interconnects capable of correcting a combination of multiple random and burst errors. The iterative decoding method, using interleaved two-dimensional Hamming product codes and a simplified type-II hybrid ARQ, achieves several orders of magnitude improvement in residual flit-error rate for multiwire errors and up to 45% improvement in throughput in high-noise environments. For a given system reliability requirement, the proposed error control scheme yields up to 50% energy improvement over other error correction schemes. The low overhead of our approach makes it suitable for implementation in on-chip interconnect switches.

Unfortunately, interconnect links have tight speed, area, and energy constraints, making the use of complex but powerful codes unsuitable. Product codes [17, 18], which can be easily implemented by concatenating simple component codes (e.g., Hamming codes), protect against both random and burst errors with relatively small complexity. In this paper, we propose using two-dimensional Hamming product codes to address multiwire errors in on-chip interconnects. An iterative decoding method is also combined with a type-II hybrid ARQ scheme [19] to maintain energy efficiency. The remainder of the paper is organized as follows. Section 2 introduces related work on error control for on-chip interconnects. Section 3 describes the proposed error correction scheme. Implementation of the proposed scheme is shown in Section 4. Performance evaluation is discussed in Section 5. Further discussion and conclusions are presented in Sections 6 and 7, respectively.
ERROR CONTROL SCHEMES FOR ON-CHIP INTERCONNECT

There are three classes of interconnect errors: permanent, intermittent, and transient [2]. In this work, we focus on multiple transient errors, which can be multiple random single errors or burst adjacent errors. On-chip communication typically uses one of three schemes for error recovery. (a) Automatic repeat request (ARQ) [7, 8, 16], wherein if the receiver detects errors, it requests that the transmitter resend the information. (b) In forward error correction (FEC) [9-12, 15], the receiver corrects errors without any retransmission requests. (c) In hybrid schemes (HARQ) [13, 14], the receiver corrects the errors it can handle and requests retransmission when the errors exceed its error-correction capability. The various error control schemes have different strengths. In ARQ, the error detection codes are easy to construct at a minor energy cost; however, retransmission reduces throughput (especially in a persistent-noise environment), making it unsuitable for high-performance applications. FEC can guarantee a certain throughput, but powerful error correction codes are more complex and consume more energy, a critical constraint for on-chip interconnects. Further, when errors exceed the code's error-correction capability, FEC cannot correct the errors and decoder failure occurs. HARQ combines FEC and ARQ to balance reliability, throughput, and energy consumption. Previous work in this area has often focused on one- or two-bit error scenarios. In [7], cyclic redundancy check (CRC) codes are used to detect errors, and retransmission is used once errors are detected. Instead of CRC, Hamming codes are used to detect two single errors [7, 8] or to correct a single error [7, 9, 10]. In [10, 11], a duplicate-add-parity (DAP) code, in which the information is duplicated and an extra parity check bit added, is used to correct a single error. In [12], symbol error correcting codes are used to correct two-bit burst errors. In [13, 14], single-error-correcting double-error-detecting (SEC-DED) codes (e.g., extended Hamming) are used to perform HARQ. In order to improve error resilience against burst errors, interleaving can be used to split a wide bus into smaller groups and encode the groups separately [15, 16]. The outputs of these small groups can be further interleaved to reduce the probability of multiple errors occurring within the same group. In the case of multiple random and burst errors, however, previous approaches lose their effectiveness.

Product codes [17, 18], which can be easily realized by concatenating simple component codes, have good protection capability against both random and burst errors. In this paper, we propose using product codes to address multiwire interconnect errors. Figure 1 shows the concept of a two-dimensional product code. Given two binary linear block codes C1(n1, k1) and C2(n2, k2), where the ni's and ki's represent the widths of the codeword and the information, respectively, the product code Cpc can be expressed as Cpc(n1 × n2, k1 × k2), with a minimum Hamming distance equal to the product of the component code distances [20], which greatly increases the error correction capability. The simplest two-dimensional product codes are single-parity check (SPC) product codes, guaranteed to correct only one error by inverting the intersection bit in the erroneous row and column [20]. Multidimensional SPC product codes can be constructed to improve the error correction capability, but a more complex decoding process is required [21].
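The single-error-correcting behaviour of an SPC product code can be demonstrated in a few lines; the helper names below are illustrative, not part of the proposed hardware.

```python
import numpy as np

def spc_encode(data):
    """Append a parity column and parity row (even parity) to a bit matrix."""
    d = np.asarray(data) % 2
    with_col = np.hstack([d, d.sum(1, keepdims=True) % 2])
    return np.vstack([with_col, with_col.sum(0, keepdims=True) % 2])

def spc_correct(code):
    """Correct at most one flipped bit using row/column parity failures."""
    rows = np.flatnonzero(code.sum(1) % 2)   # rows with odd parity
    cols = np.flatnonzero(code.sum(0) % 2)   # columns with odd parity
    if rows.size == 1 and cols.size == 1:    # single error localized
        code[rows[0], cols[0]] ^= 1          # invert the intersection bit
    return code

msg = np.array([[1, 0, 1], [0, 1, 1]])
cw = spc_encode(msg)
cw[1, 2] ^= 1                                # inject one error
assert np.array_equal(spc_correct(cw), spc_encode(msg))
```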
PROPOSED ERROR CONTROL SCHEME

In our approach, we use two-dimensional product codes in which an SEC-DED (e.g., extended Hamming) code (dmin = 4) is used for row encoding and a SEC (e.g., conventional Hamming) code (dmin = 3) is used for column encoding. The proposed product code then has a total Hamming distance of dmin = 12, able to correct ⌊(dmin − 1)/2⌋ = 5 random errors. Direct implementation of product codes results in low code rates (because of the large number of redundancy bits) and increased link energy consumption. In order to improve the code rate and achieve energy efficiency, we use a modified type-II HARQ, in which redundancy bits are incrementally transmitted when necessary, combined with an iterative decoding method to process the Hamming product codes. To the best of our knowledge, our group is the first to combine two-dimensional Hamming product codes using type-II HARQ with iterative decoding for on-chip interconnects. Specifically, a message is encoded and transmitted with its row parity check bits. The receiver uses the row parity check bits to correct any single random error and burst errors that are distributed in different rows. If the receiver detects uncorrectable errors (e.g., two errors in a row), it instructs the transmitter to send the column check bits, which are formed based on the original message. The proposed encoding process is shown in Figure 2. When the column parity check bits are received, they are used with the original message and the saved row parity check bits to perform iterative decoding of the codes. The effective code rate of the proposed error correction scheme can be expressed as

Effective code rate = Information width / (Information width + Row parity bits + P_retransmission × Column parity bits),

where P_retransmission is the probability of retransmission. Because retransmission occurs infrequently for on-chip interconnects, the effective code rate is increased using our proposed error correction scheme.
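A quick numerical sketch of this expression follows, using illustrative code parameters rather than the exact configuration evaluated in the paper.

```python
def effective_rate(k_bits, row_parity_bits, col_parity_bits, p_retx):
    """Effective code rate with incremental (type-II HARQ) redundancy."""
    return k_bits / (k_bits + row_parity_bits + p_retx * col_parity_bits)

k = 8 * 4          # k1 x k2 information bits (illustrative)
row_par = 4 * 4    # (n1 - k1) x k2 row parity bits
col_par = 12 * 4   # n1 x (n2 - k2) column parity bits
for p in (0.0, 0.01, 1.0):
    print(p, round(effective_rate(k, row_par, col_par, p), 3))
# 0.0 0.667, 0.01 0.66, 1.0 0.333: rare retransmission keeps the rate high
```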
The simplest strategy for product code decoding is the two-step row-column (or column-row) decoding algorithm [20]. In this algorithm, the received matrix is first decoded row-by-row using a row decoder; the resulting row-decoded matrix is then decoded column-by-column using a column decoder. Unfortunately, this decoding method fails to correct a rectangular four-error pattern, such as $d_{1,1}$, $d_{1,3}$, $d_{3,1}$, and $d_{3,3}$ shown in Figure 3. More powerful soft-input soft-output (SISO) decoding processes can be implemented to decode product codes using the Chase algorithm [18], at the cost of increased latency and power. Moreover, generating soft-input information for on-chip interconnects introduces extra complexity overhead.

Compared to two-step row-column decoding, our method properly addresses the rectangular four-error pattern by recording the behavior of the row and column decoders in row and column status vectors. Instead of only passing coded data between the row and column decoders, the row and column status vectors are passed between stages and used to help make decoding decisions. The realization of the row and column status vectors can be described by two rules, shown in Figure 3. As shown, two status bits record the decoding behavior of each row, and one status bit is used for each column. An example of the status vector implementation is also shown in Figure 3. The row and column status vectors are first initialized to all zeroes. The details of the algorithm are as follows.

Step 1. Row decoding of the received encoded matrix. If an error is correctable, the error bit indicated by the syndrome is flipped. The corresponding row status vector position is set according to the mapping in Figure 3.

Step 2. Column decoding of the updated matrix. First, the individual syndromes of each column are calculated in parallel. If a column syndrome is nonzero, there are two possible scenarios, depending on the error position indicated by that syndrome ($e_{syn}$) and the row status vector. (a) If the row state corresponding to $e_{syn}$ is not "00" (e.g., if an error occurs in $d_{3,3}$ and the third row's state is "10"), $e_{syn}$ is flipped. (b) If the corresponding row state value is "00", $e_{syn}$ may be incorrect (e.g., if there is a double error in the column, as shown in Figure 3). If three or more syndromes in the column decoders have the same value, then $e_{syn}$ is flipped. If only one or two syndromes are the same, no correction is performed and the corresponding column status bits are set to "1" for use in the next step.

Step 3. Row decoding of the matrix after the changes from Step 2. The syndrome for each row is recalculated. If there are still two errors in one row, the column status vector is used to indicate which columns need to be corrected. If only one error remains in the row, the row syndrome is used to perform the correction.

Figure 4 illustrates how the algorithm decodes a rectangular four-error pattern. In Step 2, because only two column syndromes are the same and the corresponding row state is zero, the column syndromes are not used, and "1"s are recorded in those column states. In Step 3, each row still detects two errors, so the column status vector indicates which positions need to be fixed. In this way, the system is able to correct rectangular error patterns.

[Figure 4: Decoding of a rectangular four-error pattern using the proposed iterative decoding scheme. A column syndrome is set aside when two column syndromes are the same and either (1) more than a single "10" appears in the row status vector or (2) the row where the error occurs has state "00".]

A comprehensive simulation of all possible error patterns consisting of five random errors or fewer was performed, verifying that the proposed iterative decoding method operates correctly.

IMPLEMENTATION OF THE PROPOSED SCHEME

In this section, we present the realization of the proposed error correction scheme and the iterative decoding of Hamming product codes.

Block diagram of the proposed error correction scheme

Figure 5 shows a block diagram of the transceiver design used in our proposed error correction scheme. In the transmitter, the input message is encoded using a two-dimensional Hamming product code, which is realized by serially concatenating row and column encoders. The original message is first transmitted with its row parity check bits; the column parity check bits are stored in a transmitter buffer. The column parity check bits are kept in the transmitter buffer until an acknowledgement/negative acknowledgement (ACK/NACK) signal indicating the status of the previous transmission is received. If a NACK signal is received, the stored column parity check bits are sent to the receiver. Triple modular redundancy is implemented to protect the ACK/NACK signal against errors. The transmitter's operation can be described using the flow chart of Figure 6(a).
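Before turning to the receiver hardware, the three-step algorithm above can be captured behaviorally. The sketch below is our illustration: it uses small component codes, an extended Hamming(8,4) per row and a Hamming(7,4) per column, rather than the paper's H(22,16)/H(7,4), and it assumes a particular status-bit mapping ("00" clean, "10" corrected, "11" double error detected) since Figure 3 is not reproduced here. It resolves the rectangular four-error pattern exactly as described:

```python
import numpy as np

# Hamming(7,4): column i of H is the binary pattern of i+1, so a nonzero
# syndrome (read as an integer) directly names the erroneous position + 1.
H = np.array([[((i + 1) >> b) & 1 for i in range(7)] for b in range(3)])
DATA_POS, PAR_POS = [2, 4, 5, 6], [0, 1, 3]   # i+1 not / is a power of two

def syndrome(v7):
    s = H.dot(v7) % 2
    return int(s[0] + 2 * s[1] + 4 * s[2])

def h74_encode(d4):
    cw = np.zeros(7, dtype=int); cw[DATA_POS] = d4
    cw[PAR_POS] = H.dot(cw) % 2               # parity columns of H are unit vectors
    return cw

def row_decode(r8):
    """Extended Hamming(8,4) SEC-DED decode -> (row, status)."""
    s, p = syndrome(r8[:7]), int(r8.sum() % 2)
    if p == 0 and s == 0: return r8, "00"     # clean
    if p == 1:                                # odd weight: single error, correctable
        r = r8.copy(); r[7 if s == 0 else s - 1] ^= 1
        return r, "10"
    return r8, "11"                           # even parity, s != 0: double error

def product_encode(msg16):
    M = np.zeros((7, 8), dtype=int)
    for r, pos in enumerate(DATA_POS):        # rows: extended Hamming(8,4)
        cw = h74_encode(msg16[4 * r:4 * r + 4])
        M[pos] = np.append(cw, cw.sum() % 2)
    M[PAR_POS, :] = H.dot(M) % 2              # columns: Hamming(7,4) checks
    return M

def iterative_decode(R):
    M, row_st = R.copy(), {p: "00" for p in range(7)}
    for p in DATA_POS:                        # Step 1: row decoding
        M[p], row_st[p] = row_decode(M[p])
    syn = [syndrome(M[:, c]) for c in range(8)]  # Step 2: column decoding
    col_st = np.zeros(8, dtype=int)
    for c, s in enumerate(syn):
        if s == 0: continue
        if row_st[s - 1] != "00" or syn.count(s) >= 3:
            M[s - 1, c] ^= 1                  # trust the column syndrome
        else:
            col_st[c] = 1                     # ambiguous: defer to Step 3
    for p in DATA_POS:                        # Step 3: row decoding again
        row, st = row_decode(M[p])
        if st == "11": M[p, col_st == 1] ^= 1 # use column status for double errors
        else: M[p] = row
    return M

msg = np.random.default_rng(3).integers(0, 2, 16)
R = product_encode(msg)
for r, c in [(2, 1), (2, 6), (5, 1), (5, 6)]: # rectangular four-error pattern
    R[r, c] ^= 1
D = iterative_decode(R)
assert all((D[p][DATA_POS] == msg[4 * r:4 * r + 4]).all()
           for r, p in enumerate(DATA_POS))
```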
Figure 5(b) shows a block diagram of the receiver. The proposed three-stage iterative decoding is separated into two parts: the first-stage row decoding is always performed, while the last two stages are performed only when necessary. The coded information is first deinterleaved and presented to the row decoder, which corrects single errors occurring in different rows. Once a transmission is completed, the receiver sends an ACK/NACK. In order to simplify the hardware implementation, only one retransmission is allowed. When the receiver detects that the errors are uncorrectable, it saves the row-decoded message with the row parity check bits in a buffer and uses the NACK signal to instruct the transmitter to send the column parity check bits, which are formed based on the original message. When these column parity check bits are received, they are combined with the previously received row parity check bits to perform the last two stages of iterative decoding. The receiver's operation can be described using the flow chart of Figure 6(b).

Transmitter design

Figure 7(a) shows the hardware implementation of the transmitter. Assume that a K-bit input information word is used to construct a $C_{pc}(n_1 \times n_2, k_1 \times k_2)$ product code. The input message is separated into $k_2$ groups of $k_1$ bits (each group constitutes a single row). In order to minimize the encoding latency, multiple row encoders are implemented. Each group is encoded using a row encoder, generating $n_1$ output bits. All $k_2 \times n_1$ outputs of the row encoders are fed into a row-column interleaver, implemented by direct hardwired connection with the mapping relation illustrated in Figure 7(b); the parity check bits produced by the $n_1$ column encoders are saved in a transmitter buffer. MUXes select either the outputs of the row encoders or the saved column parity check bits, using control signals generated according to the ACK/NACK signal. The required bus width for transmitting the encoded information is the maximum of the first transmission of $k_2 \times n_1$ bits and the retransmission of $(n_2 - k_2) \times n_1$ column parity check bits. In the proposed method, the $(n_2 - k_2) \times n_1$ retransmission bits are always fewer than the first $k_2 \times n_1$ transmission bits; zero padding is used in the retransmission, as shown in Figure 7(a).

Block diagram of the receiver

Figure 8(a) shows the block diagram of the proposed receiver. The coded information is first deinterleaved and then presented to the row decoder. The syndromes of each row are calculated for the received data. Because of our use of SEC-DED codes, a single error can be corrected or two errors can be detected in each row. When all errors are correctable, the receiver sends back an ACK signal to the transmitter and saves the decoded message in a decoding buffer. When the receiver detects two errors in any row, it saves the erroneous message in the decoding buffer and sends a NACK signal to instruct the transmitter to transmit the column parity check bits, which are formed based on the original message. When the column parity check bits are received, they are used with the saved row redundancy information and the original information to perform iterative decoding of the product codes. Figure 8(b) shows the implementation of the row decoder.

Iterative decoding

In the proposed receiver design, if a retransmission is required, the received column parity check bits are used together with the saved row parity check bits and message to perform the full iterative decoding. Because at most one bit is "1" in each error vector, error vectors are considered to be the same only if they have a "1" in the same position. Thus, the comparison is converted into judging whether more than two "1"s exist in the same position. This can be implemented by a ones counter, shown in Figure 10(b).
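Behaviorally, the syndrome comparison reduces to thresholded population counts. A small Python model follows (our illustration; the circuit realization is described next and shown in Figure 11, and the merge rule below is one assumed realization, not taken from the paper):

```python
def ones_flags(bits):
    """Threshold outputs of a 4-bit ones counter: (#ones >= 1, >= 2, >= 3)."""
    n = sum(bits)
    return (n >= 1, n >= 2, n >= 3)

def merge(a, b):
    """Combine the flags of two sub-blocks into flags for the whole word."""
    ge1 = a[0] or b[0]
    ge2 = a[1] or b[1] or (a[0] and b[0])
    ge3 = a[2] or b[2] or (a[1] and b[0]) or (a[0] and b[1])
    return (ge1, ge2, ge3)

# Eight match bits (one per column syndrome comparison) split into two groups:
flags = merge(ones_flags([1, 0, 1, 0]), ones_flags([0, 0, 1, 0]))
print(flags)  # (True, True, True): at least three syndromes agree -> flip e_syn
```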
The combinational ones counter can be implemented in two levels of logic. The first level consists of multiple four-bit ones counters, each of which generates the number of ones in its four-bit input. The outputs of the four-bit ones counters are then combined using multilevel merge circuits. Figure 11(a) shows the implementation of the four-bit ones counter, realized by modifying the cellular threshold circuits in [22]. For every four-bit input, the ones counter circuit generates three outputs indicating whether the number of ones is greater than or equal to 1, 2, and 3. The number of ones is equal to the number of error vectors having the same value. Two four-bit ones counters can be combined using a merge circuit, shown in Figure 11(b). The merge circuit combines two four-bit ones counters and still generates three outputs indicating whether the number of ones is greater than or equal to 1, 2, and 3. Multiple levels of merge circuits can be used in a tree structure.

EVALUATION OF THE PROPOSED ERROR CONTROL SCHEME

The performance of the proposed coding scheme was evaluated in terms of complexity, reliability, throughput, power, and energy consumption, for a 64-bit input message arranged into a 4 × 16 two-dimensional matrix. Each row was encoded using an extended H(22,16) Hamming code, obtained by shortening an H(32,26) code. Each column was encoded using an H(7,4) Hamming code. An 88-bit bus, equal to the output width of the row encoders, was used to transmit the encoded information. The proposed error correction scheme was developed and verified in Verilog HDL. To measure power consumption, the Verilog HDL code was mapped to a gate-level schematic, which was simulated in Cadence Spectre using the predictive technology model (PTM) CMOS 45 nm technology [23]. The wire dimensions were estimated for 45 nm technology using the simple 1/S scaling rule [24]; the values are shown in Table 1 [25]. Registers were placed between the encoder and the links, and between the links and the decoder, to improve the system clock frequency by allowing the encoder, interconnect links, and decoder to operate in a pipelined manner. Simulation results were compared to a forward error correction (FEC) scheme using Hamming code H(71,64), an ARQ scheme using the standardized CRC-5 with generator polynomial $x^5 + x^2 + 1$, and an HARQ scheme using extended Hamming code H(72,64). In order to improve throughput, a go-back-N retransmission policy [26] was applied to the ARQ and HARQ schemes. Table 2 shows the comparative results for the different error control schemes.

Delay and complexity

Table 2 shows the delay of the different error correction schemes. The Hamming encoder was implemented as a simple XOR tree. Instead of using linear feedback shift registers to generate the check bits of the CRC codes, a parallel implementation method [27] was employed to reduce the large latency of CRC codes at a minor cost in complexity. The decoder delay, typically much larger than the encoder delay, is reported in Table 2. As expected, the decoder delay of the ARQ scheme using CRC-5 is the smallest among the compared schemes, because only the syndrome is calculated and no error correction is needed. In the proposed method, the iterative decoding process was implemented using a three-stage pipelined architecture; the worst delay occurs in the column decoding stage. Compared to Hamming H(71,64) and HARQ, the delay of our proposed error control scheme increases by about 15% and 10%, respectively, primarily because of the error pattern comparison overhead.
Table 2 also shows the complexity of the different error control schemes in terms of equivalent two-input NAND gate count. In the go-back-N retransmission policy, N flits are retransmitted if a NACK signal is received; thus, a transmitter buffer is needed to store these N flits in the ARQ and HARQ schemes. The number N depends on the round-trip transmission delay; in our simulation, N = 4. Besides the transmitter buffer, a receiver buffer was needed in our method to store the row-decoded message for iterative decoding, when necessary. The results show that the FEC scheme using H(71,64) has the lowest equivalent gate count because its encoder and syndrome calculation circuits were implemented as simple XOR trees and no buffers were needed. In the proposed method, the part of the receiver buffer that stores the original message is shared with the routing buffer used for routing and flow control [28], which greatly reduces the required buffer size. The equivalent gate count of our proposed method increases about 2.5 times compared to that of the HARQ scheme, because of the overhead associated with the iterative decoder.

Reliability

On-chip communication errors can be attributed to voltage perturbations induced by noise from many sources. A simple model proposed in [29] assumes that an error occurs with a certain probability on a single wire when a transition occurs. The error probability of a single wire can be modeled by (see [29])

$\varepsilon = Q\!\left(\dfrac{V_{swing}}{2\sigma_N}\right)$,     (4)

where $V_{swing}$ is the link swing voltage, $\sigma_N$ is the standard deviation of the noise voltage, which is assumed to follow a normal distribution, and $Q(\cdot)$ is the Gaussian tail function. The model in (4) assumes that the probability of error on each wire is independent. As technology scales, the probability of one event causing spatial burst errors increases. The model can be extended to account for burst errors by assuming that a fault affects its neighboring wires with a certain probability $P_n$. This probability can be obtained by simulating data transmissions across an interconnect link and counting the number of burst errors caused by coupling noise [15].

The residual flit-error rate, which is the probability of decoding failure or decoding error, was used to evaluate the reliability of the different error control schemes. Hamming codes can correct only one error at a time; if more than one error occurs in a codeword, uncorrected errors result. In the ARQ and HARQ schemes, an encoded message is accepted by the receiver only if it either contains no errors or contains an undetectable error pattern. The residual flit-error rate of the ARQ and HARQ schemes can therefore be expressed as (see [19])

$P_{residual} = \dfrac{P_e}{1 - P_d}$,

where $P_e$ is the probability of an undetectable error pattern (and, in an HARQ scheme, includes the decoding error probability) and $P_d$ is the probability that an error is detected but is not correctable. In the experiments, a comprehensive simulation with different error patterns was performed to determine the $P_e$ and $P_d$ values.

[Figure 11: (a) Four-bit ones counter, producing flags for the number of ones ≥ 1, ≥ 2, and ≥ 3; (b) merge circuit combining two four-bit ones counters.]
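Under the stated assumptions, the single-wire model of [29] reduces to a Gaussian tail probability, and the residual-rate expression can be evaluated directly. A sketch follows; the $P_e$ and $P_d$ inputs are placeholders for the simulated values:

```python
from math import erfc, sqrt

def wire_error_prob(v_swing, sigma_n):
    """Per-transition wire error probability: Q(V_swing / (2 * sigma_N))."""
    x = v_swing / (2.0 * sigma_n)
    return 0.5 * erfc(x / sqrt(2.0))

def residual_flit_error_rate(p_undetected, p_detected_uncorrectable):
    """ARQ/HARQ residual rate: a flit is accepted when error-free or when it
    carries an undetectable pattern, so P_res = P_e / (1 - P_d)."""
    return p_undetected / (1.0 - p_detected_uncorrectable)

print(wire_error_prob(v_swing=1.0, sigma_n=0.12))   # ~1.5e-5 per transition
print(residual_flit_error_rate(p_undetected=1e-12,
                               p_detected_uncorrectable=1e-3))
```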
Figure 12 shows the residual flit-error rate of the different error control schemes as a function of the noise voltage deviation. Two noise scenarios were considered: multiple independent errors, and a combination of multiple random and burst errors. A supply voltage of 1 V was assumed in model (4). Figure 12(a) shows the residual flit-error rate with only multiple independent errors. The simulation results show that the proposed coding scheme achieves a reduction of several orders of magnitude in residual flit-error rate compared to the H(71,64) and ARQ CRC-5 schemes, because the undetectable error probability $P_e$ of the proposed method is much smaller than that of H(71,64) and CRC-5: the proposed method can detect any two random errors and most combinations of three random errors. The HARQ H(72,64) scheme achieves performance close to the proposed method when only multiple independent errors are considered, because H(72,64) can also detect two random errors. Figure 12(b) shows the residual flit-error rate when a combination of multiple random and burst errors is considered, with up to five-bit errors included. The results show that the residual flit-error rates of H(71,64) and HARQ increase significantly because the burst error correction and detection capability of Hamming and extended Hamming codes is poor. The residual flit-error rates of the ARQ CRC-5 scheme and the proposed method remain almost unchanged compared to the case of multiple random errors, because both can detect or correct up to five-bit burst errors.

Figure 13 shows the residual flit-error rate of the different error control schemes when burst errors of more than five bits are considered. The results show that the residual flit-error rates of both CRC-5 and the proposed method increase greatly because such larger bursts exceed their burst error detection or correction capability. Nevertheless, the proposed method still achieves a two-order-of-magnitude reduction in flit-error rate compared to the HARQ H(72,64) scheme.
Throughput

Another main concern in on-chip communication is throughput. In our simulations, the go-back-N retransmission policy was applied to improve the throughput of the ARQ and HARQ schemes. In contrast to stop-and-wait retransmission, the transmitter in go-back-N does not wait for an ACK signal after sending a flit. When a NACK is received, the transmitter resends N flits, including the erroneous flit and the succeeding flits that were transmitted during the round-trip delay. Go-back-N achieves efficient bus usage at the cost of hardware complexity. The average number of clock cycles to successfully transmit a flit in go-back-N can be expressed as (see [19])

$T_{avg} = 1 + \dfrac{N(1 - P_c)}{P_c}$,

where $P_c$ is the probability that a received flit contains no error and N is the round-trip delay, which was four cycles in our simulation setup. In our method, retransmission is limited to a single additional transfer; thus, the maximum number of clock cycles to successfully transmit a flit is two. (A numerical sketch of these retransmission cost models is given at the end of this subsection.)

The throughput of the different error control schemes is compared in Figure 14, normalized to the throughput when no errors occur. Figure 14(a) shows the throughput when only multiple independent errors are included; Figure 14(b) shows the throughput when a combination of multiple random and burst errors is considered. As shown, every scheme achieves nearly the same throughput in low-noise environments (small $\sigma_N$). As $\sigma_N$ increases, the throughput decreases in all cases except for the H(71,64) system, which achieves the same throughput independent of the noise condition, at the cost of reliability. The ARQ scheme achieves the lowest throughput because retransmission is its only means of correcting errors, and as the noise environment worsens, the retransmission overhead grows. Compared to the HARQ H(72,64) scheme, the proposed method achieves better throughput because more errors can be corrected during the first transmission. The throughput of the ARQ and HARQ schemes decreases by about a further 8% when a combination of multiple random and burst errors is considered, whereas the effect of burst errors on the throughput of the proposed method is small. Overall, the proposed method achieves 45% and 10% throughput improvements over the ARQ and HARQ schemes, respectively, under high-noise conditions when multiple independent and burst errors are considered.
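As referenced above, here is a minimal sketch of the retransmission cost models, comparing normalized throughput for go-back-N against the proposed single-retransmission scheme (the $P_c$ values are illustrative, and equating the proposed scheme's retransmission probability with $1 - P_c$ is an approximation):

```python
def gbn_cycles_per_flit(p_c, n_rt=4):
    """Average cycles per delivered flit under go-back-N:
    one cycle per slot plus N wasted cycles per failed attempt."""
    return 1.0 + n_rt * (1.0 - p_c) / p_c

def proposed_cycles_per_flit(p_retx):
    """At most one extra transfer (the column parity bits) per flit."""
    return 1.0 + p_retx

for p_c in (0.999, 0.99, 0.9):
    t_gbn = 1.0 / gbn_cycles_per_flit(p_c)
    t_prop = 1.0 / proposed_cycles_per_flit(1.0 - p_c)
    print(f"P_c={p_c}: go-back-N {t_gbn:.3f}, proposed {t_prop:.3f}")
```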
Power and energy consumption

Figure 15 shows the codec power consumption of the different error control schemes. The codec power was measured using the 45 nm predictive technology model [23] at a supply voltage of 1 V and a clock frequency of 1 GHz. The codec power consumption is the sum of the encoder and decoder power. The results show that the Hamming code H(71,64) consumes the smallest codec power because of its simple XOR tree implementation and the absence of transmitter buffers. The ARQ CRC-5 scheme consumes the smallest decoder power because only the syndrome is calculated and no error correction is needed. In comparison to HARQ, the proposed coding scheme consumes about twice the codec power due to the overhead of the iterative decoder.

The link power $PW_L$ is related to the interconnect capacitance $C_L$, the wire transition probability $\alpha$, the link width $W_L$, the link swing voltage $V_{swing}$, and the clock frequency $f_{clk}$. $PW_L$ can be expressed as

$PW_L = \alpha \, W_L \, C_L \, V_{swing}^2 \, f_{clk}$,

where $\alpha$ is assumed to be 0.5 in the simulation and $W_L$ depends on the error control scheme. The link swing voltage $V_{swing}$ is set by the reliability requirement according to (4). For a given reliability requirement, error control schemes with low error correction capability need a higher link swing voltage than schemes with higher error correction capability [7, 10, 14, 29, 30]. Figure 16 shows the link swing voltage of the different error control schemes as a function of the noise voltage deviation for a given reliability requirement. In the simulations, the residual flit-error probability is required to be less than $10^{-20}$ [10]. The proposed method has the highest error correction capability of the compared schemes; to achieve this reliability requirement, its link swing voltages are about 80% and 63% of those of the ARQ CRC-5 scheme and the HARQ H(72,64) scheme, respectively. The selection of the appropriate link swing voltage can be a design-time decision. Alternatively, a voltage converter can be used to dynamically select the proper link swing voltage based on a link quality monitor [8]; the energy cost of the voltage converter can be neglected because of its high conversion efficiency [30].

In a network-on-chip (NoC) architecture, the link length is the distance between two switches, which is determined by the tile size. In mesh- or torus-shaped NoC designs, the links between two switches are generally wires a few millimetres long [28, 31-33]. For example, in Intel's 80-tile NoC architecture [31], each tile has an area of 2 mm × 1.5 mm and a 2 mm link length is used. In [32], each tile has an area of 3 mm × 3 mm and a 3 mm link length is used. In our experiments, three link lengths, 1 mm, 2 mm, and 3 mm, were examined, with the corresponding link resistance and capacitance calculated using the method in [23]: 86 Ω and 218 fF for the 1 mm link, 171 Ω and 436 fF for the 2 mm link, and 256 Ω and 653 fF for the 3 mm link. The clock frequency is 1 GHz and the power consumption is measured using Cadence Spectre.
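Plugging the dynamic-power expression and the link parameters above into a short script illustrates the comparison. The bus widths follow from the codes themselves (69, 72, and 88 wires for CRC-5, H(72,64), and the proposed scheme), while the swing voltages are hypothetical placeholders chosen to match the 80%/63% ratios stated above:

```python
LINK_CAP = {1: 218e-15, 2: 436e-15, 3: 653e-15}  # capacitance per wire (F), from [23]

def link_power(length_mm, v_swing, w_l, f_clk=1e9, alpha=0.5):
    """PW_L = alpha * W_L * C_L * V_swing^2 * f_clk (dynamic switching power)."""
    return alpha * w_l * LINK_CAP[length_mm] * v_swing**2 * f_clk

# Hypothetical swing voltages for the same reliability target (sigma_N = 0.18):
for scheme, w_l, v in [("proposed", 88, 0.75),
                       ("ARQ CRC-5", 69, 0.94),
                       ("HARQ H(72,64)", 72, 1.19)]:
    print(scheme, f"{1e3 * link_power(2, v, w_l):.2f} mW")  # 2 mm link
```

With these placeholder voltages, the proposed scheme's 2 mm link power comes out near 81% and 49% of the ARQ and HARQ values, consistent with the trends reported below.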
Figure 17 compares the link power of the different error control schemes for a given reliability requirement. The simulation was performed for a low-noise environment ($\sigma_N$ = 0.08) and a high-noise environment ($\sigma_N$ = 0.18), with the link swing voltages $V_{swing}$ of the different schemes obtained by simulation for each environment. For the same reliability requirement, the proposed method requires the lowest link swing voltage and thus consumes the least link power of the compared schemes at each link length and noise environment, given the same residual flit-error rate requirement (≤ $10^{-20}$). More link power is consumed in higher-noise environments because higher link swing voltages are needed to achieve the same reliability, as shown in Figure 16. Link power is much larger than the codec power of Figure 15 and can dominate the total power consumption of error control schemes [34]. As the link length increases, link power consumption increases because of the increased link resistance and capacitance. The link power of the proposed method is about 83% and 48% of the link power of the ARQ CRC-5 scheme and the HARQ H(72,64) scheme, respectively.

The average energy to successfully transmit one flit, $E_{avg}$, is used as the metric for energy consumption. $E_{avg}$ can be expressed as

$E_{avg} = (E_{e1} + E_{l1} + E_{d1}) + P_d \, (E_{e2} + E_{l2} + E_{d2})$,

where $E_{e1}$, $E_{l1}$, and $E_{d1}$ are the energy consumption of the encoder, links, and decoder in the first transmission, and $E_{e2}$, $E_{l2}$, and $E_{d2}$ are the extra energy consumed by the encoder, links, decoder, and buffers when a retransmission is required. $P_d$ is the probability that an error is detected but is not correctable; $P_d$ is zero for FEC schemes. Compared to the HARQ scheme and the proposed method, the ARQ CRC-5 scheme has a larger $P_d$, which increases its retransmission energy consumption. For a given reliability requirement, the proposed method consumes the largest codec energy but the least link energy. We evaluated whether this link energy reduction is beneficial in terms of the average energy consumption $E_{avg}$.

Figure 18 compares $E_{avg}$ for the different error control schemes at the same reliability requirement. The simulation was performed in different noise environments for link lengths of 1 mm, 2 mm, and 3 mm. The results show that the ARQ CRC-5 scheme achieves the lowest average energy consumption in the low-noise environment ($\sigma_N$ = 0.08) because of its relatively small codec and link energy consumption. As the noise voltage deviation increases, the average energy consumption of the ARQ CRC-5 scheme increases faster than that of the proposed method because the ARQ CRC-5 scheme has larger link energy consumption. In the high-noise environment ($\sigma_N$ = 0.18), the proposed method yields the lowest energy consumption because the smaller link energy counterbalances the larger codec energy. In that environment, the proposed method consumes 5% and 43% less energy than the ARQ CRC-5 scheme and the HARQ H(72,64) scheme, respectively, for a link length of 1 mm. As the link length increases, the proposed method benefits more from its smallest link energy consumption: at a 3 mm link length, its energy consumption is about 13% and 50% less than that of the ARQ and HARQ schemes, respectively.
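The average-energy metric is then a one-liner; the per-stage energies below are hypothetical placeholders used only to exercise the formula:

```python
def avg_flit_energy(e_first, e_retx, p_d):
    """E_avg = (E_e1 + E_l1 + E_d1) + P_d * (E_e2 + E_l2 + E_d2)."""
    return sum(e_first) + p_d * sum(e_retx)

# (encoder, link, decoder) energies in pJ -- hypothetical values:
print(avg_flit_energy(e_first=(0.9, 6.0, 2.1), e_retx=(0.3, 4.5, 2.8), p_d=5e-3))
```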
DISCUSSION

In the proposed method, the encoded message is separated into two transmissions. The reliability of the proposed method therefore depends on both the error detection capability of the first transmission and the error correction capability of the iterative decoding method. In the first transmission, error patterns with single errors in different rows are corrected, and error patterns with two errors in a row are detected. The proposed method is capable of detecting 75% of all random independent five-error patterns, and 100% of error patterns consisting of two burst errors of up to three bits each (e.g., one three-bit burst error plus a single-bit random error), in the first transmission. The iterative decoding algorithm can correct up to five-bit errors once the row and column parity check bits are received. Our method can also correct permanent errors that are distributed in different rows. More complex codes could be used to further increase the reliability of on-chip communication; however, our primary concern is energy efficiency, and combining error correction with retransmission is a good approach to balancing energy and reliability. The codec area overhead is relatively small compared to the millions of transistors integrated in a system-on-chip (SoC) [35].

CONCLUSION

In this paper, we presented an error control scheme combining Hamming product codes with a simplified type-II hybrid ARQ for on-chip interconnects. The efficient combination of powerful product codes with retransmission shows a good balance between reliability and energy efficiency in error scenarios where a combination of multiple random and burst errors is considered. Moreover, an efficient iterative decoding method for Hamming product codes is proposed. The proposed decoding algorithm is easily realized in a three-stage pipelined architecture by modifying the conventional row-column decoding algorithm, with a small increase in delay and complexity.

The performance of the proposed method was evaluated in terms of reliability, throughput, and energy consumption. A reduction of several orders of magnitude in residual flit-error rate can be achieved using the proposed method when multiple errors are considered. Compared to an ARQ scheme using CRC codes and an HARQ scheme using extended Hamming codes, the proposed method achieves about 45% and 10% improvement in throughput, respectively, in high-noise environments. The high reliability of the proposed method permits a reduction in the link swing voltage and, consequently, in communication energy; the decreased link energy counterbalances the codec energy overhead. For a given reliability requirement, the proposed error control scheme achieves up to 50% reduction in energy consumption compared to the other error control schemes in high-noise environments.
[Figure and table captions: Figure 5: Block diagram of the proposed transceiver; the three-stage iterative decoding is split into a first-stage row decoding that is always performed and two final stages performed only when a retransmission occurs. Figure 6: Flow chart of the proposed transceiver operation. Figure 7(b): Example of the interleaver mapping relation for 64-bit input information encoded using C_pc(7 × 22, 4 × 16). Figure 8: Implementation of the proposed receiver and row decoder. Figure 10: Implementation of the column decoder in the proposed iterative decoding method. Figure 13: Residual flit-error rate for up to seven-bit burst errors. Figure 14: Throughput comparison of different error control schemes. Figure 16: Link swing voltage for different error control schemes. Figure 17: Link power comparison of different error control schemes for different noise environments. Table 1: Parameters used for the link model. Table 2: Bus widths, delay, and equivalent gate count of different error control schemes.]
Effectiveness of GNSS Spoofing Countermeasure Based on Receiver CNR Measurements

A perceived emerging threat to GNSS receivers is posed by a spoofing transmitter that emulates authentic signals but with code phase and Doppler randomized over a small range. Such spoofing signals can result in large navigation solution errors that are passed on to the unsuspecting user with potentially dire consequences. In this paper, a simple and readily implementable processing rule based on CNR estimates of the correlation peaks of the despread GNSS signals is developed expressly for reducing the effectiveness of such a spoofer threat. A comprehensive statistical analysis is then given to evaluate the effectiveness of the proposed technique in various LOS and NLOS environments. It is demonstrated that the proposed receiver processing is highly effective in both line-of-sight and multipath propagation conditions.

Introduction

GNSS satellites are approximately 20,000 km away and transmit several watts of signal power, such that at ground level the output power of a 3-dB-gain linearly polarized antenna is nominally −130 dBm [1]. As such, a modest jammer can easily disrupt GNSS signals by raising the noise floor, making the acquisition of GNSS signals difficult. A high processing gain based on a long integration time is one possible countermeasure against a noise jammer. Nevertheless, if the GNSS receiver undergoes random motion and is subjected to multipath fading, as in a typical urban environment, the channel decorrelates quickly, and attaining the large processing gains needed to overcome the jamming is not feasible. However, the noise jammer is at least detectable, as the spectral power in the affected GNSS receiver band will be abnormally high. Hence, the jammer can deny service, but the user is aware of being jammed, limiting the jammer's damage potential. The jammer is also relatively easy to locate with radio direction finding, and potentially to disable, as its spectrum is significantly larger than the ambient noise [2, 3].

A more insidious threat is the standoff spoofer, which broadcasts a set of replicas of the authentic SV signals currently visible to the mobile GNSS receiver [2]. The unaware receiver computes its navigation solution based on these counterfeit signals, which are passed on to the user as reliable, with potentially damaging consequences. GNSS-based location estimates that are inaccurate but assumed accurate are potentially more damaging to the user than in the jamming case, where at least the user knows that the service is temporarily unavailable. As the receiver processing gain used for suppressing a jammer is not applicable to the spoofer signal, the spoofer transmit power can be orders of magnitude less than that of a noise jammer. This makes the spoofer signal much more difficult to locate and disable.

There are essentially two categories of spoofer threats envisioned. The first is the self-intentional spoofer, which provides the user a means of compromising its own GNSS position. An example is a fishing vessel wishing to enter prohibited areas undetected by a GNSS-based monitoring system; a collocated spoofer could provide counterfeit signals to fabricate a navigation solution that falls outside the prohibited area [4, 5]. Another example is that of an offender required to wear a mandatory GNSS tracker to ensure compliance with travel restrictions [2].
The second type of spoofer is the standoff spoofer (SS), which could be used in urban areas for malicious purposes ranging from sporadic disruptive hacking to sophisticated organized terrorist activities. The SS is illustrated in Figure 1, where it covers a target area shaped as a sector of an annulus. Multiple SS devices could potentially be used to collectively cover a given area, such as an urban downtown core. Based on this, the perceived spoofer threat is a network of terrestrial SSs that can cause widespread disruption of GNSS-based location services in dense urban areas.

The SS is of interest in this paper specifically for the scenario of a terrestrial transmitter source that broadcasts replicas of the GNSS signals visible in the target area illustrated in Figure 1. Disruption of GNSS services in the target area is achieved by randomly modulating the code phase over a small region of the overall Code-Delay Space (CDS) that is commensurate with the target area. Therefore, at least two correlation peaks will be observed in the CDS. An unsuspecting receiver detects the larger of the correlation peaks, which can belong to the spoofer signal. The code phase and Doppler associated with the spoofer signal are then passed on to the tracking segment, and consequently a false navigation solution is generated. Note that, while the target area depicted in Figure 1 has hard boundaries, such boundaries are generally blurry and not well defined: the effectiveness of the SS drops off outside the depicted annulus sector between radii $R_1$ and $R_2$. In a typical scenario, $R_1$ and $R_2$ are envisioned to be of the order of 500 m and 2 km, respectively, such that each SS covers an area of several square kilometres. A modest network of SS devices can then adequately cover a downtown core area. However, for the sake of simplicity, only a single isolated SS is considered in this paper.
The SS is assumed to remain synchronized with the currently visible GNSS signals and to synthesize a set of GNSS signals corresponding to the target area. The objective of the SS is not to synthesize a specific counterfeit location for a specific GNSS receiver within the target area; this is not possible, as the location of the GNSS receiver is not known to the SS. Furthermore, the objective of the SS is disruption over the general target area rather than affecting specific receivers. As such, the SS transmission signal synthesis does not have to be overly sophisticated: it matches the Doppler offset of the replicated SV signals and adjusts the code phase such that it is commensurate with the intended target region.

Note that an urban area is a primarily non-line-of-sight (NLOS) multipath channel. Therefore, the Doppler spectrum perceived by the GNSS receiver will be spread by an amount commensurate with the magnitude of the receiver velocity, but will not be sensitive to direction. Hence, other than the deterministic Doppler offset from the SV to a stationary ground-based receiver, no further modulation of the Doppler is required by the SS to ensure a plausible counterfeit signal. The typical handheld consumer GNSS receiver coherently integrates the signal for about 10 to 20 ms. Based on this, the correlation peak in the CDS will have a Doppler spread of about 100 Hz, which is commensurate with the Doppler spread of typical urban traffic (<50 km/hr) [6]. Even if the GNSS receiver is equipped with other inertial means such that the receiver velocity vector is known, this cannot be used to discriminate the SS signal, as multipath Doppler spreading occurs for both the SS and the authentic signals.

The code phase of the SS transmissions matches the nominal code phase of the authentic GNSS signals in the target area. Note that the target area is limited to one or two kilometres; hence, the code phase differs by only several chips from one extreme of the target area to the other. For example, in a 90-degree sector with $R_1$ = 500 m and $R_2$ = 1500 m, the average spread is only about four chips. The SS-generated code phase corresponds to a random location within the target area, generated by slowly and randomly modulating the code phase over a small domain commensurate with the dimensions of the target area. Note that a sophisticated GNSS receiver could potentially discriminate against the SS signal based on a code phase corresponding to an outlier navigation solution. However, as the target region is not very large, the counterfeit SS navigation solutions will be plausible and cannot easily be dismissed as outliers. Furthermore, the typical consumer-grade GNSS unit does not possess processing to track multiple candidate navigation solutions, let alone discriminate plausible outliers. Also, receiver autonomous integrity monitoring (RAIM) and fault detection and exclusion (FDE) are not effective in detecting such navigationally consistent spoofing signals [4]. Finally, it should be mentioned that GNSS receivers tethered to a wireless data service provider will typically provide the user with an aided GNSS (AGNSS) service, significantly reducing the CDS corresponding to a physical area of several square kilometres [7]. Hence, there is diminishing gain for a spoofer attempting to affect an area larger than this.
As stated earlier, current consumer-grade receivers are equipped with RAIM and FDE, which are not effective in mitigating navigationally consistent spoofing attacks. A more sophisticated countermeasure to an SS with random code delay modulation is to carefully track all combinations of possible navigation solutions and then dismiss the solutions that are less likely based on tracking records spanning several tens of seconds up to the current time. This solution likelihood can be augmented with ancillary sensors and other prior knowledge or belief maps [8]. However, the consumer-grade GNSS receivers considered herein are assumed not to possess this level of sophistication. Rather, the objective is to develop a computationally efficient processing method that can be added to relatively unsophisticated consumer-grade GNSS receivers and that will be effective in discriminating against the SS. Such processing is based on carrier-to-noise ratio (CNR) measurements of the received GNSS signals. CNR measurement is an integral part of all GNSS receivers, as the navigation algorithm relies heavily on weighting the observables based on the instantaneous CNR. A simple discriminant is that if the CNR is implausibly high, then an SS is suspected. Such processing is easily implemented with essentially minor firmware changes to the receiver or an in-line filter component [2]. However, there remains the question of how to optimally set the threshold used for the CNR comparison. The optimum threshold is easily determined and justified for LOS propagation with a known antenna gain and orientation. However, for a handheld unit operating in an urban canyon with a compromised multiband antenna that is randomly oriented and potentially shadowed, setting the optimum threshold is no longer deterministic nor trivial. Optimization is necessarily based on a statistical analysis, which is the focus of this paper.

The rest of the paper is organized as follows. In Section 2, the system definition and simplifying assumptions are given. A difficulty encountered in the statistical assessment of SS effectiveness is the plethora of disparate parameters and plausible scenarios; for this paper, a constrained set of idealized parameters and assumptions is necessary to obtain fundamental insights. In Section 3, the effectiveness of the SS and the receiver countermeasures is considered for a variety of LOS and NLOS scenarios. Section 3.5 relates these findings to the plausible physical coverage range of the SS. Finally, Section 4 states the major conclusions.

System Description and Assumptions

The performance of spoofer detection based on a threshold applied to the CNR, in conjunction with a simple decision rule, is analyzed for various propagation conditions. To do this in a comprehensive manner that is not obscured by details, it is necessary to use simplifying assumptions and constraints. While these may erode generality, the benefit is a set of insights that remain applicable to less idealized and more realistic scenarios.
It is assumed that the GNSS receiver performs a reduced search over the CDS based on traditional despreading correlation processing for each candidate GNSS signal that is potentially visible to the receiver. Assuming that both the authentic and SS signals are present at the receiver for a given despread GNSS signal, the outcome is a set of two correlation peaks corresponding to the spoofer and the authentic signal. The complex amplitudes of the authentic and spoofer correlation peaks are represented as

$z_a = \sqrt{2\rho_{a0}}\, h_a + w_a$, $\quad z_s = \sqrt{2\rho_{s0}}\, h_s + w_s$,     (1)

where $\rho_{a0}$ and $\rho_{s0}$ are the average CNRs of the authentic and SS signals, respectively. The complex channel gains are denoted by $h_a$ and $h_s$ with

$E[|h_a|^2] = E[|h_s|^2] = 1$,

where E denotes the expected value operation. Also, $w_a$ and $w_s$ represent normalized white Gaussian noise samples with unit variance in each of the in-phase and quadrature components; the noise variance is normalized to simplify the expressions that follow. It is assumed that there are nominally two correlation peaks in the CDS hypothesis space, corresponding to the spoofer and the authentic signal for a specific GNSS signal, with sample-based CNRs denoted as $\rho_s$ and $\rho_a$, respectively, namely,

$\rho_a = |z_a|^2$, $\quad \rho_s = |z_s|^2$.     (2)

There are many variations in how the receiver implements the correlation search over the CDS; however, this assumption about the correlator structure simplifies the system description and subsequent analysis. Furthermore, the possibility of the authentic signal resulting in two distinct correlation peaks due to resolvable multipath or poor receiver design is not considered. The GNSS receiver cannot determine which correlation peak corresponds to the desired authentic signal. However, recognizing that there are two possible choices from which it suspects spoofer activity, it can impose the following simple heuristic rule for selecting the authentic signal: choose the larger of the two peaks as the authentic peak if $(\rho_s < \rho_T) \cap (\rho_a < \rho_T)$; otherwise, choose the smaller peak. Here $\rho_T$ is a threshold CNR to which $\rho_s$ and $\rho_a$ are compared, and which is the subject of some adaptive optimization process.

Based on this formulation, the probability of a selection error can be evaluated. An error occurs every time the spoofer correlation peak is selected instead of the authentic peak, with the Doppler and code delay coordinates passed on to the navigation solution processor. As such, there are two types of errors:

Type I error: $(\rho_s > \rho_a) \cap (\rho_a < \rho_T) \cap (\rho_s < \rho_T)$,
Type II error: $(\rho_s < \rho_a) \cap (\rho_a > \rho_T)$.     (3)

A graphical aid is introduced in Figure 2, which provides a method of calculating the probability of receiver error as the sum of the probabilities of the two error types. This probability is denoted $P_e$ and is a measure of the effectiveness of the spoofer: the higher $P_e$ is over a given target area, the more effective the spoofer, so $P_e$ is a suitable metric for quantifying the effectiveness of the SS. $P_e$ depends on the probability density functions (PDFs) of the CNRs of the authentic and spoofing correlation peaks.
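The decision rule and the two error types lend themselves to a direct Monte Carlo check. The sketch below is our illustration, following the signal model of (1)-(2) with CNRs in linear units; it estimates $P_e$ for LOS or Rayleigh-faded peaks under the threshold rule:

```python
import numpy as np

def p_error_mc(rho_a0, rho_s0, rho_t, los=True, n=200_000, seed=1):
    """Monte Carlo estimate of P_e for the heuristic threshold selection rule."""
    rng = np.random.default_rng(seed)
    def peak_power(rho0):
        if los:
            h = np.exp(1j * rng.uniform(0, 2 * np.pi, n))              # |h| = 1
        else:
            h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
        w = rng.normal(size=n) + 1j * rng.normal(size=n)               # unit-variance I/Q noise
        return np.abs(np.sqrt(2 * rho0) * h + w) ** 2
    rho_a, rho_s = peak_power(rho_a0), peak_power(rho_s0)
    both_below = (rho_a < rho_t) & (rho_s < rho_t)
    # Pick the larger peak when both are below the threshold, else the smaller;
    # an error occurs whenever the picked peak is the spoofer's.
    picked_spoofer = np.where(both_below, rho_s > rho_a, rho_s < rho_a)
    return picked_spoofer.mean()

print(p_error_mc(rho_a0=10, rho_s0=20, rho_t=60, los=False))      # SMRx, NLOS
print(p_error_mc(rho_a0=10, rho_s0=20, rho_t=np.inf, los=False))  # no mitigation
```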
Assuming that the authentic and spoofer CNR samples, $\{\rho_a, \rho_s\}$, are statistically independent random variables, the joint PDF can be expressed as the product of the marginals,

$f(\rho_a, \rho_s) = f_a(\rho_a)\, f_s(\rho_s)$.     (4)

This assumption is based on the authentic SV signal and the terrestrial SS signal arriving from different bearings; hence, in a dense urban area, the fast fading and nominal path loss are independent. As the bearings are sufficiently different, the longer-term fading or shadowing is not correlated [6]. Hence, the assumption of independence implied by (4) is made herein. However, there are instances where shadowing does become correlated, especially if the bearings of the authentic and SS signals are similar.

Based on the graphic shown in Figure 2, $P_e$ is given by

$P_e(\rho_T) = \int_0^{\rho_T} f_a(\rho)\left[F_s(\rho_T) - F_s(\rho)\right] d\rho + \int_{\rho_T}^{\infty} f_a(\rho)\, F_s(\rho)\, d\rho$,     (5)

where the simplified notation omits the parameters $\rho_{s0}$ and $\rho_{a0}$, which are initially assumed to be known, and $F_a$ and $F_s$ denote the cumulative distribution functions (CDFs) of $\rho_a$ and $\rho_s$. Using $F_a(0) = F_s(0) = 0$, (5) becomes

$P_e(\rho_T) = F_a(\rho_T)\, F_s(\rho_T) - \int_0^{\rho_T} f_a(\rho)\, F_s(\rho)\, d\rho + \int_{\rho_T}^{\infty} f_a(\rho)\, F_s(\rho)\, d\rho$.     (6)

The minimum value of $P_e$ can be determined by setting $(\partial/\partial\rho_T) P_e = 0$, such that the condition

$f_s(\rho_T)\, F_a(\rho_T) + f_a(\rho_T)\, F_s(\rho_T) - 2 f_a(\rho_T)\, F_s(\rho_T) = 0$     (7)

emerges and reduces to

$f_s(\rho_T)\, F_a(\rho_T) = f_a(\rho_T)\, F_s(\rho_T)$,     (8)

which is then solved for the optimum value of $\rho_T$. Equation (8) is mathematically equivalent to

$\dfrac{f_s(\rho_T)}{F_s(\rho_T)} = \dfrac{f_a(\rho_T)}{F_a(\rho_T)}$.     (9)

A useful observation is that if the PDFs of the authentic and spoofer signals are scaled versions of each other, that is, $F_a(\rho_T) = F_s(\rho_T/c)$, then (9) holds only at $\rho_T = 0$ and $\rho_T = \infty$, since a CDF is a monotonically increasing function. This means that a finite threshold other than $\rho_T = 0$ and $\rho_T = \infty$ does not exist. In other words, for the common case in which $f_a(\rho_a)$ is a monomodal function, it is easily shown that $f_a(\rho_a)/F_a(\rho_a)$ is a monotonically decreasing function. Hence, if $f_a(\rho)$ is approximately a translation of $f_s(\rho)$, then the intersection points of $f_s(\rho_T)/F_s(\rho_T)$ and $f_a(\rho_T)/F_a(\rho_T)$ can only be at $\rho_T = 0$ and $\rho_T = \infty$. This observation will be used in the next section. Note that a threshold of $\rho_T = \infty$ is equivalent to having no threshold rather than applying an unrealistically large threshold.

Performance of Antispoofing for LOS and NLOS Conditions

In this section, $P_e$ is determined for LOS and NLOS scenarios. This is generally done by first solving for the optimum threshold $\rho_T$ and then determining $P_e$.

LOS with Additive Noise.
As defined in (1), the in-phase and quadrature components of the demodulated signal are normalized such that the additive noise has unit variance in each Gaussian component. With this, the LOS authentic signal has a mean square magnitude of $2\rho_{a0}$; likewise, the LOS spoofer signal has a mean square magnitude of $2\rho_{s0}$. Hence, the PDFs of the squared magnitudes of the correlation peaks corresponding to the authentic and spoofer signals are given by

$f_a(\rho_a; \rho_{a0}) = \chi^2_2(\rho_a; 2\rho_{a0}, 1)$, $\quad f_s(\rho_s; \rho_{s0}) = \chi^2_2(\rho_s; 2\rho_{s0}, 1)$,     (10)

where $\chi^2_N(x; \mu, \sigma^2)$ is the noncentral chi-square PDF of the variable x with N degrees of freedom (DOF), noncentrality parameter μ, and per-DOF Gaussian variance σ² [9]. $P_e$ is plotted in Figure 3 as a function of $\rho_T$ for specific cases where $\rho_{a0} > \rho_{s0}$ and $\rho_{a0} < \rho_{s0}$. As stated earlier, when $\rho_{a0} > \rho_{s0}$ the optimum threshold is $\rho_T = \infty$, while for $\rho_{a0} < \rho_{s0}$ the optimum threshold is $\rho_T = 0$. This is tantamount to selecting the larger of the two peaks if the average power of the authentic signal is larger than that of the spoofer, and otherwise selecting the smaller of the two peaks. This trivial conclusion is a manifestation of the assumption that $\rho_{a0}$ and $\rho_{s0}$ are known, which is not generally the case. Note that as $f_a(\rho)$ is approximately a translation of $f_s(\rho)$, the intersection points of $f_s(\rho_T)/F_s(\rho_T)$ and $f_a(\rho_T)/F_a(\rho_T)$ can only be at $\rho_T = 0$ and $\rho_T = \infty$, as observed before.

Figure 4 shows a plot of $P_e$ for a receiver with no spoofer mitigation, herein denoted Rx, compared to the $P_e$ for a receiver with spoofer mitigation, herein denoted SMRx, with $\rho_T = \infty$ for $\rho_{a0} > \rho_{s0}$ and $\rho_T = 0$ for $\rho_{a0} < \rho_{s0}$. A GNSS receiver with no spoofer mitigation is equivalent to setting $\rho_T = \infty$. As such, there is no difference in the performance of the GNSS receivers with and without spoofer mitigation when $\rho_{a0} > \rho_{s0}$. However, for the case of $\rho_{a0} < \rho_{s0}$, the effectiveness of the spoofer mitigation is clearly evident in the reduction of $P_e$.

NLOS with Additive Noise.

In this section, it is assumed that $\rho_{a0}$ and $\rho_{s0}$ are again deterministic and known to the receiver. The PDFs of the magnitudes of the correlation peaks corresponding to the authentic and spoofer signals are then given by

$f_a(\rho_a; \rho_{a0}) = \chi^2_2(\rho_a; \rho_{a0} + 1)$, $\quad f_s(\rho_s; \rho_{s0}) = \chi^2_2(\rho_s; \rho_{s0} + 1)$,     (11)

where $\chi^2_2(x; \sigma^2)$ is the central chi-square PDF of the variable x with 2 DOF, with a per-DOF variance of $\rho_{a0} + 1$ for the authentic signal and $\rho_{s0} + 1$ for the spoofing signal.

Figure 5 shows a plot of $P_e$ for a receiver with no spoofer mitigation (Rx) compared to a receiver with spoofer mitigation (SMRx), with $\rho_T = \infty$ for $\rho_{a0} > \rho_{s0}$ and $\rho_T = 0$ for $\rho_{a0} < \rho_{s0}$. Comparing Figure 5 with Figure 4, it is evident that the spoofer mitigation is more effective in a LOS scenario than in an NLOS scenario: when the spoofer and authentic signals are more random, as in the NLOS case, distinguishing them based on the sample CNR is more difficult and hence subject to a higher $P_e$. Figure 6 shows $P_e$ as a function of $\rho_T$ for various $\rho_{s0}$. The effectiveness of the spoofer countermeasure is again evident in the region where $\rho_{a0} < \rho_{s0}$. The same behavior as before occurs: the optimum $\rho_T$ for spoofer power less than authentic power is $\rho_T = \infty$, while for spoofer power greater than authentic power it is $\rho_T = 0$, which is again a manifestation of the assumed known average powers.
Diversity NLOS with Additive Noise.

Assuming a ring or a sphere of scatterers to model a typical urban environment, the signals arriving at antenna positions separated by approximately half a carrier wavelength are statistically uncorrelated. Consequently, M statistically independent samples of the receiver correlator output can be made available by accumulating M successive samples of the correlator outputs as the receiver moves. The CNR of each correlation sample is $\rho_{a0}$ and $\rho_{s0}$ for the authentic and spoofing signals, respectively, which are again assumed to be deterministic and known to the receiver.

A plot of $P_e$ based on M = 3 independent samples is shown in Figure 7. Similar to the no-diversity case with M = 1, the optimum $\rho_T$ for spoofer power less than the authentic power is $\rho_T = \infty$, while for spoofer power greater than the authentic power it is $\rho_T = 0$. Again, this is reasonable, as the spoofer and authentic signals are identically distributed except for the deterministic and known average powers: clearly, if it is known that $\rho_{a0} > \rho_{s0}$, then the larger peak corresponds to the authentic signal more often than the smaller peak.

Measurement Uncertainty and Unknown Spoofer Average Power.

In the previous sections, the outcome was a trivial optimization of $\rho_T$, with $\rho_T = 0$ if $\rho_{a0} < \rho_{s0}$ and $\rho_T = \infty$ if $\rho_{a0} > \rho_{s0}$, which resulted from the assumption that $\{\rho_{s0}, \rho_{a0}\}$ is known to the receiver. In this section, the more realistic multipath propagation case is considered, in which the average spoofer CNR is completely unknown. This is reasonable, as the spoofer could have arbitrary transmit power and range from the receiver. However, it will be assumed that $\rho_{a0}$ is known approximately to the receiver. This is reasonable, as the average power of a GNSS SV signal is approximately known in a multipath environment, with the exception of factors such as shadowing and building penetration losses. Antenna orientation is typically not a factor, as the multipath is distributed across a large angular sector. As $\rho_{s0}$ is unknown, it is reasonable to assume a uniform PDF for $\rho_s$ such that $f_s(\rho_s) = c_s$, where $c_s$ is a constant. Consequently, $P_e$ can be found from (6) as

$P_e(\rho_T) = c_s \left[ \rho_T F_a(\rho_T) - \int_0^{\rho_T} \rho\, f_a(\rho)\, d\rho + \int_{\rho_T}^{\infty} \rho\, f_a(\rho)\, d\rho \right]$.     (12)

Now the optimum $\rho_T$ can be found from $\partial P_e(\rho_T)/\partial\rho_T = 0$, which simplifies to

$F_a(\rho_T) - \rho_T f_a(\rho_T) = 0$.     (13)

Equation (13) can be solved to find the optimum $\rho_T$. Figure 8 shows $F_a(\rho_T) - \rho_T f_a(\rho_T)$ for M = 1, . . ., 4, based on a Rayleigh fading channel and $\rho_{a0}$ = 10 dB. As can be seen from this figure, $\rho_T = \infty$ is optimum for M = 1. This means that a finite threshold does not exist for M = 1, and as such the proposed spoofing countermeasure does not reduce the spoofer effectiveness, since $\rho_T = \infty$ is equivalent to a receiver with no spoofing countermeasure. However, as the diversity order increases, an optimum $\rho_T$ other than 0 or ∞ can be found from (13). As will be shown in the next section, the optimum value of $\rho_T$ reduces $P_e$ and thereby reduces the spoofer's effective range.
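Equation (13) is easy to solve numerically. For the NLOS Rayleigh case with M diversity branches, $\rho_a$ is central chi-square with 2M DOF, equivalently a Gamma(M, 2($\rho_{a0}$ + 1)) variable, and a bracketing root-finder recovers the finite optimum that exists for M ≥ 2. A sketch under these assumptions:

```python
from scipy.stats import gamma
from scipy.optimize import brentq

def optimum_threshold(rho_a0, M):
    """Solve F_a(rho_T) - rho_T * f_a(rho_T) = 0 for NLOS Rayleigh, M branches.
    For M = 1 the expression is positive everywhere: no finite optimum exists."""
    dist = gamma(a=M, scale=2.0 * (rho_a0 + 1.0))  # chi-square with 2M DOF, scaled
    g = lambda t: dist.cdf(t) - t * dist.pdf(t)
    return brentq(g, 1e-9, 100.0 * M * (rho_a0 + 1.0))

for M in (2, 3, 4):
    print(M, optimum_threshold(rho_a0=10.0, M=M))  # rho_a0 = 10 in linear units
```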
Relating Observations of Spoofer Effectiveness to Physical Range.

Having evaluated $P_e$ for various scenarios, it is of interest to determine the spoofer effectiveness as a function of physical range. The potential target area of the spoofer, as illustrated in Figure 1, is conceptually the physical region in which $P_e$ is large enough to impact the navigation solution. In this section, an approximation of the physical range of spoofer effectiveness is determined based on the empirical path-loss model of order n,

$\rho_{s0}(d) = \rho_{s0}^{(R_1)} \left( \dfrac{R_1}{d} \right)^{n}$,     (14)

where $R_1$ is a reference range, d is the spoofer-receiver range, n is the path-loss exponent, and $\rho_{s0}^{(R_1)}$ is the average received spoofer CNR at $d = R_1$.

For a LOS scenario with measurement errors, the PDFs of the SS CNR and SV CNR estimates are noncentral chi-square with 2M DOF, with M denoting the number of independent diversity branches used to estimate the CNR. $P_e$ can therefore be found by computing $\rho_T$ using (13) and substituting it into (6).

Figure 9 shows $P_e$ for the spoofer-mitigated receiver (SMRx) as well as a conventional Rx for various spoofer-receiver separations and M. As can be seen from this figure, an SMRx significantly reduces the effectiveness of the spoofer by reducing $P_e$. Also observed is that the higher the diversity order M, the more effective the spoofer mitigation. For a Rayleigh fading channel, the PDFs of the spoofer and authentic CNRs are central chi-square with 2M DOF; $P_e$ can be found by numerically computing $\rho_T$ from (13) and substituting it into (6).

Figure 10 shows $P_e$ for an SMRx as well as a conventional Rx with no spoofing countermeasures. Note that the performance of the SMRx is significantly better than that of a conventional Rx, with more diversity branches resulting in better performance. In addition, Figures 11 and 12 compare the $P_e$ of the SMRx and Rx under a generalized Rician channel with various K-factors, such that [10]

$f_a(\rho_a; \rho_{a0}) = \chi^2_{2M}\!\left(\rho_a;\; \dfrac{K_a}{K_a + 1}\, 2\rho_{a0},\; \dfrac{1}{K_a + 1}\,\rho_{a0} + 1\right)$,     (15)

and similarly for $f_s$ with $K_s$ and $\rho_{s0}$, where $K_a$ and $K_s$ are the Rician K-factors of the SV and SS channels, respectively. Similar to the LOS and Rayleigh channels, a noticeable improvement from spoofer mitigation is realizable. In order to quantify the reduction in the spoofer's effective range, a heuristic metric, the spoofer range reduction factor (SRRF), is introduced as the fractional reduction in the spoofer's effective range achieved by the SMRx relative to the conventional Rx,

$\mathrm{SRRF} = 1 - \dfrac{d_{SMRx}}{d_{Rx}}$,     (16)

where $d_{SMRx}$ and $d_{Rx}$ denote the ranges over which the spoofer remains effective against the respective receivers. The SRRF is computed for various channel scenarios and diversity branches, and the results are summarized in Table 1.

Conclusions

It was shown that a relatively unsophisticated standoff spoofer can effectively disrupt a large physical area. However, processing based on estimating the CNR of the spoofer and authentic received signals and applying a straightforward threshold rule can significantly reduce the effectiveness of the standoff spoofer. This was shown for LOS, NLOS, and Rician multipath conditions. If the average spoofer and authentic signal powers are known, then the setting of $\rho_T$ is trivial. However, if $\rho_{s0}$ is completely unknown, then $\rho_T$ has a finite optimum that is a function of $\rho_{a0}$ and the type of propagation environment detected by the receiver. An expression for computing the optimum $\rho_T$ was deduced and applied to various channels. The results demonstrated the effectiveness of the proposed spoofer mitigation technique. A heuristic metric of spoofer effectiveness (SRRF) was proposed, and it was shown that the spoofer's effective range is reduced by up to 75% for LOS, 45% for NLOS Rayleigh with M = 2, 60% for NLOS Rayleigh with M = 5, and 70% for a Rician channel with $K_a = K_s = 1$ and M = 2, aptly demonstrating the effectiveness of the proposed countermeasure approach.

[Figure 1: Standoff Spoofer (SS) illuminating a target area which is a sector of an annulus extending from R_1 to R_2.]
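Combining the path-loss model (14) with the $P_e$ machinery gives the effective-range picture directly. The sketch below is our illustration, in the spirit of Figures 9-12: the reference CNR, exponent, and the $P_e$-based range criterion (spoofer "effective" while $P_e$ > 0.5) are assumptions, and the closed-form $P_e$ uses the Rayleigh M = 1 conventional-receiver case, where both peaks are exponential:

```python
import numpy as np

def spoofer_cnr_lin(d, cnr_ref_db=30.0, r1=500.0, n=3):
    """Average received spoofer CNR (linear) at range d, order-n path loss (14)."""
    return 10.0 ** ((cnr_ref_db - 10.0 * n * np.log10(d / r1)) / 10.0)

def p_e_conventional(d, rho_a0=10.0):
    """Closed-form P_e of a conventional Rx (pick the larger peak) in Rayleigh
    fading with M = 1: P(rho_s > rho_a) = theta_s / (theta_s + theta_a)."""
    theta_a = 2.0 * (rho_a0 + 1.0)
    theta_s = 2.0 * (spoofer_cnr_lin(d) + 1.0)
    return theta_s / (theta_s + theta_a)

def effective_range(p_error, level=0.5, d_grid=np.linspace(500.0, 5000.0, 451)):
    """Largest range at which the spoofer still achieves P_e above `level`."""
    hits = d_grid[np.array([p_error(d) for d in d_grid]) > level]
    return hits.max() if hits.size else 0.0

d_rx = effective_range(p_e_conventional)
# SRRF = 1 - d_SMRx / d_Rx; d_SMRx would come from the mitigated receiver's
# P_e(d) curve (e.g., the Monte Carlo estimator above with the optimum rho_T).
print(f"conventional Rx effective range: {d_rx:.0f} m")
```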
Figure 1: Standoff Spoofer (SS) illuminating a target area which is a sector of an annulus extending from R1 to R2.
Figure 2: Graphical integration regions for the two error types.
Figure 3: P_e as a function of ρ_T.
Figure 4: P_e as a function of ρ_s0 for a conventional receiver (Rx) and a spoofer-mitigated receiver (SMRx).
Figure 5: Comparison of the conventional and the spoofer mitigation receiver based on 2 DOF in a NLOS Rayleigh fading channel.
Figure 6: P_e as a function of ρ_T and ρ_s0, for ρ_a0 = 10 and NLOS Rayleigh conditions based on 2 DOF.
Figure 8: F_a(ρ_T) − ρ_T f_a(ρ_T) as a function of ρ_T for various numbers of diversity branches based on a NLOS Rayleigh fading channel and ρ_a0 = 10 dB.
Neural Network Training Acceleration With RRAM-Based Hybrid Synapses Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. To implement an energy-efficient HNN with high accuracy, high-precision synaptic devices and fully-parallel array operations are essential. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recently, there have been attempts to compensate for device nonidealities using multiple devices per weight. While there is a benefit, it is difficult to apply the existing parallel updating scheme to such synaptic units, which significantly increases the cost of the update process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a "big" synapse and a "small" synapse, and a related training method. Unlike previous attempts, array-wise fully-parallel learning is possible with our proposed architecture using a simple array selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependency of the conductance precision. By realizing the intrinsic gain via a proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifications to the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the floating-point implementation (97.92%) of the software, even with only 50 conductance states in each device. Our results promise efficient training and accurate inference using existing RRAM devices.

INTRODUCTION

Artificial intelligence (AI) technology is becoming increasingly advanced and widespread in real-world applications, such as computer vision, natural language recognition, healthcare, and pattern classification (Ghahramani, 2015; Mnih et al., 2015; Silver et al., 2016; Guo et al., 2020; McKinney et al., 2020). Advances in AI technology have been achieved through the unprecedented success of deep-learning algorithms. However, based on the von Neumann architecture, conventional digital computers cannot withstand the ever-increasing sizes and complexities of neural networks and tasks, thereby facing barriers in terms of energy efficiency (Merkel et al., 2016; Yan et al., 2019; Ankit et al., 2020). This has necessitated the development of brain-inspired neuromorphic computing, e.g., hardware neural networks (HNNs). In particular, resistive memory (RRAM) is considered a strong candidate for synaptic primitives capable of storing multilevel weights as conductance values (Woo et al., 2016; Yu et al., 2016; Wu et al., 2019; Yin et al., 2020). Especially in an RRAM array, fully-parallel array operations provide excellent potential to accelerate neural network computations. However, as existing RRAMs can only represent a finite number of conductance states, achieving high accuracy in HNNs during online training poses a significant challenge (Li et al., 2015; Gokmen and Vlasov, 2016; Kim et al., 2017; Mohanty et al., 2017; Nandakumar et al., 2020). To store a higher number of bits per weight, several studies have used multiple cells for a synapse in analog neuromorphic systems (Agarwal et al., 2017; Song et al., 2017; Boybat et al., 2018; Liao et al., 2018; Hsieh et al., 2019; Zhu et al., 2019).
While there is a benefit, such a synaptic unit architecture cannot adopt the conventional parallel updating scheme, since multiple devices operate as one synapse. The system must determine the device to be updated in each synaptic unit and calculate the corresponding amounts of the weight updates. As a result, the update process of the synaptic unit architecture requires additional expense in terms of time and energy. In positional number systems, carry operations must be performed between the combined devices in every synaptic unit (Agarwal et al., 2017; Song et al., 2017), which is not compatible with the parallel updating scheme. Liao et al. proposed a synaptic unit with sign-based stochastic gradient descent training to implement parallel updating (Liao et al., 2018). However, ignoring the magnitude information of the weight updates decreases the classification accuracy. Thus, for fast and accurate HNN learning, it is crucial to be able to train the synaptic unit architecture using the parallel update method without losing the magnitude of the feedback information. Therefore, we propose a hybrid synaptic unit using Mo/TiOx RRAMs with a cooperative training method that can accelerate the learning of neural networks with increased precision of the synaptic weights.

The remainder of this paper is organized as follows. (1) We explain the importance of high precision in the synaptic element and of the parallel updating scheme, which are essential for accelerating neural network training with high accuracy. (2) We present the hybrid synaptic unit consisting of "big" and "small" synapses. We also present the training method, which simplifies the updating process by separating the role of each synapse in the unit. We train the HNN in two phases: first, a dynamic-tuning phase that only updates the big synapses, followed by a fine-tuning phase that only updates the small synapses in detail. Hence, the HNN can accelerate the learning process by applying a parallel updating scheme to the target array with a simple array selection logic. (3) To implement the hybrid synapse experimentally, we exploit Mo/TiOx RRAM, which exhibits promising synaptic properties and an areal dependency of the conductance precision. By realizing the intrinsic gain via a proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifications to the operational scheme. (4) By considering realistic device parameters, we conduct neural network simulations to confirm the feasibility of the proposed method. We also analyze the optimal gain ratio between the synapses to achieve the highest accuracy. The results demonstrate that a hybrid synapse-based HNN with the proposed learning method significantly improves accuracy for handwritten digit datasets, reaching 99.66% for training and 97% for the tests. We believe that this work is a meaningful step toward a high-performance RRAM-based neuromorphic system using existing RRAM devices.

Synaptic Device

The working principle of an HNN is based on parallel signal propagation in a crossbar array architecture. For the synaptic weights W_ij, the conductance values of the resistive devices are the weights indicating the strength of the synaptic connection. Herein, a synapse (G+_ij − G−_ij) typically consists of two devices, where G+ and G− represent the conductance states of the positive and negative devices, and the subscripts i and j are the crossbar array indexes.
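The following minimal sketch illustrates the differential-pair weight mapping and the resulting crossbar computation; array shapes and the conductance window are illustrative assumptions rather than parameters from the paper.

```python
# Sketch: differential-pair weight mapping W = G+ - G- and the crossbar
# vector-matrix multiply (Ohm's law per device, Kirchhoff's current law
# per column). Shapes and conductance bounds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 784, 250
G_min, G_max = 0.1, 1.0                      # device conductance window (a.u.)

G_pos = rng.uniform(G_min, G_max, (n_in, n_out))
G_neg = rng.uniform(G_min, G_max, (n_in, n_out))

def forward(x):
    """Input voltages x drive the rows; column currents implement x @ W."""
    W = G_pos - G_neg                        # signed weight from a device pair
    return x @ W                             # accumulated column currents
```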
After the pre-neurons express voltage signals, these signals are naturally multiplied by the conductance values of the synapses according to Ohm's law. Thus, the signals from the pre-neurons are computed in the current form and propagate in parallel through all synapses to the post-neurons. From the perspective of a post-neuron, all the currents from the connected synapses are accumulated by Kirchhoff's law, and the neuron fires output signals based on a nonlinear activation function for consecutive propagation in multilayer neural networks as follows:

X^(l+1) = f(W^l X^l),  (1)

where W^l represents the weight matrix in the lth layer, X^l is the vector of neuron activations applied to the rows of the crossbar array, and f(·) is the nonlinear activation function of the neuron. Thus, a crossbar array that stores multiple-bit weights in each RRAM accurately computes the analog vector-matrix multiplication (VMM) in a single step. When the inputs are applied to the first neuron layer, the final layer's output determines the winner neuron after the forward propagation, as shown in Figure 1A. To reduce classification errors between the desired and computed outputs, the calculated errors propagate backward, adjusting each weight to minimize the energy function by gradient descent in the backpropagation algorithm (Figure 1B):

δ^l = ((W^l)^T δ^(l+1)) * f'(X^l),  (2)

ΔW^l = η δ^l ⊗ X^l,  (3)

where δ^l is the backpropagating error vector of the lth neuron layer and η is the learning rate parameter; * denotes the element-wise product. The amount of weight update in the lth layer, ΔW^l, is the outer product of the two vectors. Therefore, for synaptic devices, high precision of the conductance states is critical to ensure optimum neural network convergence by adjusting the weights precisely.

Parallel Update Scheme

When the weights are updated element-wise or row-wise in the crossbar array, the time complexity increases proportionally with the array size. Crossbar-compatible, fully parallel update schemes have thus been proposed to accelerate neural network training (Burr et al., 2015; Gao et al., 2015; Kadetotad et al., 2015; Gokmen and Vlasov, 2016; Xiao et al., 2020). For the target crossbar array, by applying update pulses simultaneously to all rows and columns based on the neurons' local knowledge of X and δ, respectively, the parallel updates at each cross point can be executed through the number of pulse overlaps. Therefore, the outer-product updates in Eq. (3) are conducted in parallel, as shown in Figure 1B. The pulse encoding method can be implemented in various ways, such as the temporal length, voltage amplitude, and repetition rate. Also, as the update rules can be flexibly adjusted to each system, a parallel updating scheme has been demonstrated in unidirectional phase-change memory (PRAM) arrays (Burr et al., 2015). Therefore, it is vital to employ parallel updating schemes to accelerate neural network training.
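The sketch below illustrates one way a pulse-overlap update can realize the outer product of Eq. (3) in expectation. The probabilistic pulse encoding and its scaling are illustrative assumptions, not the encoding used in the cited works.

```python
# Sketch: stochastic pulse-overlap update approximating the outer product
# dW = eta * outer(x, delta) at every cross point in parallel. The pulse
# probabilities and scaling below are illustrative assumptions chosen so
# that the expected update equals eta * |x| * |delta| per cross point.
import numpy as np

rng = np.random.default_rng(1)

def pulse_overlap_update(W, x, delta, eta=0.1, n_pulses=10, dw=0.01):
    """Row pulses encode |x|, column pulses encode |delta| (assumed <= 1);
    a cross point is programmed only when both pulses coincide, so the
    expected number of overlaps is proportional to outer(|x|, |delta|)."""
    px = np.clip(eta * np.abs(x) / (dw * n_pulses), 0, 1)
    pd = np.clip(np.abs(delta), 0, 1)
    sign = np.sign(np.outer(x, delta))
    for _ in range(n_pulses):
        row = rng.random(x.size) < px            # row pulse pattern
        col = rng.random(delta.size) < pd        # column pulse pattern
        W += dw * sign * np.outer(row, col)      # coincident pulses update
    return W
```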
RRAM-BASED HYBRID SYNAPSE

This section explains the concept of a hybrid synapse using RRAMs and its training method, which significantly improve the weight resolution and training efficiency of a neural network even with device imperfections. Here, each device that makes up a hybrid synapse is assumed to be implemented in a different array to increase the crossbar's controllability (Zhu et al., 2019).

Hybrid Synapse

To investigate the ideal synapse behaviors, we first analyzed the weight changes during software neural network training. Figure 2A shows the weight changes of all synapses in the hidden-output layer as a function of the training epoch. The weight tuning of the software synapses can be divided into two main phases: the dynamic-tuning phase, where the weights are updated in large steps, and the fine-tuning phase, where the weights are updated slightly with high precision. Such tendencies are also observed in the training accuracy, which increases rapidly at the initial stage and is then gradually adjusted to the optimum condition, as shown in the inset of Figure 2A. Inspired by this progressive weight update, we present a hybrid synaptic unit with an additional small synapse (g+_ij − g−_ij) to finely tune the weights after the dynamic-tuning phase in the big synapse (Figures 2B,C). Here, g represents the conductance states of the small synapse, scaled down by a factor of k:

g = G / k.

The larger the scale factor k, the higher the precision of the weights that can be expressed. Hence, four devices with different state precisions serve as one synapse, as follows:

W_ij = (G+_ij − G−_ij) + (g+_ij − g−_ij),  (4)

where G and g represent the conductance states of the low- and high-precision devices, respectively.

Figure 2D shows a flow chart and the working principle of the proposed neural network. The learning method is composed of three main cycles: inference, error calculation, and weight update. During the forward and backward propagations, all big and small synapses are used to perform the VMM operations. In contrast, weight updates are conducted only on specific synapses depending on the training phase. Initially, training starts from the dynamic-tuning phase, which only updates the big synapses by switching off the small-synapse arrays. Thus, update pulse vectors X(t+2) and δ(t+2), corresponding to the neuron's local knowledge of X(t) and δ(t+1), are applied to the rows and columns of the big-synapse array, respectively. As training proceeds, the increase in accuracy may saturate owing to the limited weight resolution of a single synapse. If the accuracy improvement between epochs falls below a certain threshold value (a value of 0.5 is adopted in this work), the update target is switched to the small synapses. Hence, the small synapse's higher conductance granularity enables finer weight adjustments while the big synapse's weights are fixed. Therefore, a hybrid synapse with the proposed learning method can overcome the physical limitations of an individual device and accelerate neural network training with only a simple switching logic.
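The two-phase schedule described above can be summarized in a few lines of control logic. This is a minimal sketch assuming a per-epoch accuracy criterion and hypothetical `train_epoch()` and `evaluate()` helpers; it is not the paper's implementation.

```python
# Sketch: the two-phase update schedule of the hybrid synapse. Training
# starts on the big (coarse) synapses and switches permanently to the
# small (fine, k-times scaled) synapses once the epoch-to-epoch accuracy
# gain drops below the 0.5 threshold mentioned in the text.
# train_epoch() and evaluate() are hypothetical stand-ins.
SWITCH_THRESHOLD = 0.5

def train(model, epochs, train_epoch, evaluate):
    target = "big"                   # dynamic-tuning phase first
    prev_acc = 0.0
    for epoch in range(epochs):
        train_epoch(model, update_target=target)   # parallel updates on one array
        acc = evaluate(model)
        if target == "big" and acc - prev_acc < SWITCH_THRESHOLD:
            target = "small"         # begin fine-tuning phase
        prev_acc = acc
    return model
```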
Mo/TiOx-Based RRAM

To implement the hybrid synapse, the scale factor k can be realized in various ways, e.g., by scaling the input voltage signal or adjusting the peripheral circuit's gain. In this work, however, we exploit the switching mechanism of the Mo/TiOx-based RRAM, i.e., area-dependent conductance scaling, to implement the gain at the device level. Previously, we reported a microstructurally engineered Mo/TiOx RRAM for electronic synapse applications (Park et al., 2019); the study presented some promising synaptic features of the Mo/TiOx RRAM, such as gradual and linear conductance programming. However, the present expanded work adds significantly more explanatory detail regarding the areal dependency of the conductance precision, which is utilized to construct a hybrid synapse. The TiOx-based RRAM was fabricated on TiN bottom electrodes with various active diameters from 30 nm to 1 µm. First, we deposited a 15-nm-thick TiOx layer through an RF sputtering process using a ceramic Ti4O7 target at room temperature. Then, a 50-nm-thick Mo top electrode was deposited by the same sputtering system (Park et al., 2019).

The device structure and the composition of each layer are shown in Figure 3A via a transmission electron microscopy (TEM) image and its energy-dispersive X-ray spectroscopy (EDS) line profile. The switching mechanism of the RRAM is based on gradual oxygen migration and chemical reactions at the interface between the Mo top electrode and the TiOx layer under an electric field (Park et al., 2019). As shown in Figure 3B, the areal conduction contributes to a gradual increase (or decrease) in the conductance states when a positive (or negative) bias is applied, called potentiation and depression, respectively. In Figure 3B, a 30 mV step voltage was used for the DC I-V sweep measurement, yielding 100 sampling points in a single sweep from 0 to 3 V. The uniform current density, regardless of device dimensions, demonstrates the interfacial switching of the RRAM. We also confirmed the areal conduction of the Mo/TiOx RRAM using AC pulse measurements. Figure 3C shows five cycling operations for devices with different dimensions, with 100 pulses each for the potentiation and depression processes. Interestingly, as the effective switching area is scaled down, the entire conductance range of the device decreases proportionally. As shown in Figure 3D, the precision of the conductance change per pulse increases proportionally with device scaling, even under an identical operating scheme. Hence, without modifying the operational scheme, high-precision weights can be represented by scaling the device area by a factor of k. For example, when k is 10, the small-synapse device area is scaled down by a factor of 10 compared to the big synapse. Following optimization of the operating scheme, we obtained near-ideal programming linearity during 30 cycles; each cycle included 50 potentiation pulses and one reset process, as shown in Figure 3E. Here, we used the linear potentiation process with a strong reset to maximize the online training accuracy, as a paired device with an occasional reset process allows implementation of depression as well as negative weights (Burr et al., 2015). The probability distribution shows the excellent state uniformity of 10 representative states, and the inset shows the programming variability (δ/µ) with standard deviation (δ) and mean (µ) values. In Figure 3F, the heat map shows the cumulative probability of achieving a particular conductance change as a function of the total conductance, demonstrating linear conductance programming. However, a finite number of conductance states (i.e., 50) in a single device cannot accomplish accurate neural network training comparable to floating-point (FP) implementations. In the next section, we demonstrate the improved training accuracy of the proposed learning method using the Mo/TiOx RRAM-based hybrid synapse through neural network simulations.

RESULTS AND DISCUSSION

Simulations were conducted on fully connected neural networks (784-250-10) for pattern recognition of handwritten digits using the Modified National Institute of Standards and Technology (MNIST) dataset. We used 60,000 training and 10,000 test images for the simulations. The mini-batch size was one, and the learning rate was 0.1.

Simulation Analysis

As shown in Figure 4A, the performance of the proposed neural network is compared with other types of synapse implementations. First, a single synapse with 50 intermediate states of the resistive devices is used to show the saturation of the training accuracy.
Although the device has good programming linearity, the finite number of states hinders convergence of the entire network to the optimum condition. In contrast, the software network with FP synaptic weights gradually increases up to 99.98% training accuracy, with 97.92% test accuracy. The proposed method achieves 99.66% training accuracy and 97.00% test accuracy, even with the device imperfections. Importantly, unlike the case before switching, where the accuracy remains the same as that of the single-synapse implementation, the accuracy after switching improves gradually. To observe the collaboration of the big and small synapses, Figure 4B shows the normalized weight update frequency as a function of the epoch. While the update frequency of the single synapse consistently decreases, the update frequency of the proposed synapse abruptly increases when the target synapse is switched to the small synapse. The number of weight updates then decreases again as the synaptic weights are adjusted with high precision during the fine-tuning phase. The convergence of the mean squared error (MSE) of the neural network is analyzed in Figure 4C. After the target update synapse is changed to the small synapse, the previously stalled MSE reduction resumes and decreases gradually. Figure 4D shows the weight history of a single-synapse implementation with 50 device states and of a software synapse implementation with FP weights. In contrast to a single synapse with a finite number of states, the FP synapse converges to its optimum state through the fine-tuning process. Figure 4E shows the case of the hybrid synapse for three representative synapses. In addition, Figure 4F shows the weight history of the big (low-precision) and small (high-precision) synapses individually. It is seen that dynamic weight tuning is conducted only on the big synapses before switching, whereas fine tuning is conducted only on the small synapses after switching. The results thus demonstrate the successful performance of the proposed method using only 50 intermediate states for each device and a simple array selection logic for the update process.

Scale Factor (k)

The gain of the small synapse plays an important role in determining the performance of the neural network, as it controls the granularity of the synaptic updates. To analyze the optimal value of k, we evaluated the errors in the weight updates for different k values.

FIGURE 5 | (A) Weight states of a single synapse with electronic devices that have a finite number of conductance states. During the weight update process, errors may occur due to the low precision; W_error denotes the difference between the target and actual weights. (B) Comparison of k values of 1, 10, and 100; a moderate k value of 10 is suitable for low errors in the weight updates. (C-E) Histograms of mean weight errors for different k values. When k is 1 (C), the weight resolution does not increase, causing poor error convergence. When k is 100 (E), the weights of the small synapses are excessively scaled; the overly scaled conductance leads to unnoticeable weight changes with saturated error convergence. (F) Classification error rate as a function of the k value; a value of at least 10 must be secured to achieve high accuracy with high-precision synaptic weights. Note that the saturated error convergence due to a high value of k can be improved with a larger number of g states in the device for the small synapse.
As shown in Figure 5A, a synapse with a limited number of states cannot be adjusted to the exact target weight, resulting in weight errors (W_error). Figure 5B shows the precision of the g states relative to that of the G states for three cases (k = 1, 10, and 100). When k is 1, the precision of g is as low as that of G. If k increases to 10, the precision of g increases proportionally to 10 times that of G, such that each state of G can be expressed as 10 states of g. As a result, after completion of the big-synapse training in the proposed neural network, the small synapse can be tuned more than 10 times more precisely, thereby further reducing W_error. However, when k increases to 100, the weight changes may become unnoticeable owing to the excessively scaled precision of the g states: the 50 finite states of g can then express as little as half of a single G state. Figures 5C-E are histograms, for different values of the scale factor, of the absolute W_error values of all synapses in the hidden-output layer. As seen in Figure 5D, W_error gradually decreases when k has a moderate value of 10, compared to k values of 1 and 100. A low k cannot reduce W_error owing to the low precision of the g states (Figure 5C), while an extremely high k renders the weight updates of the g states unnoticeable (Figure 5E). Therefore, a moderate gain value is important for accurate online training of the network. Figure 5F summarizes the results, showing the error rates of the neural networks as functions of k. Notably, the increase in error rate due to excessive scaling of k can be reduced by a higher number of g states: the error increases to 5.02% at k = 100 but decreases to 3.42% when the number of conductance states of the high-precision device increases to 400.

Performance

In addition to the analysis of the optimal k, we investigated the performance of the neural network reflecting the programming variations of the electronic device as well as the number of conductance states. As can be seen from Figure 6A, the hybrid neural network achieves an accuracy of over 93.69% even when the device's conductance levels are reduced to 10, whereas the neural network with the single-synapse implementation shows a dramatic decrease to 9.8%. Therefore, the proposed hybrid synaptic unit remarkably reduces the number of states required in the electronic device to obtain the target accuracy. Moreover, we simulated the impact of programming variations (δ/µ) on the neural network performance for each synapse implementation (Figure 6B). The amount of conductance change (ΔG) is unpredictable, as shown in Figure 3E. To represent the variation of ΔG during the weight update process, we modeled the programming variability using the mean (µ) and standard deviation (δ) of ΔG. In the simulation, the variation is assumed to be a random variable with a Gaussian distribution and is added to ΔG in each device during updates. Therefore, we can evaluate the impact of the conductance variation on the classification accuracy. Based on the experimental data, the variation of our device (0.34) is indicated by the dotted line, where high accuracy of the neural network is still guaranteed. The results show that the neural networks have variational immunity up to a variability of one and that accuracy decreases significantly for variations greater than one.
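The Gaussian programming-variability model described above can be injected into each update in one line. This is a minimal sketch under the stated Gaussian assumption, with the measured device variability (δ/µ ≈ 0.34) as the default; the conductance window is an illustrative assumption.

```python
# Sketch: injecting Gaussian programming variability into each conductance
# update, as in the simulations (variation added to dG, with the measured
# device variability delta/mu ~= 0.34). The conductance window is illustrative.
import numpy as np

rng = np.random.default_rng(2)

def noisy_update(G, dG_mean, variability=0.34, G_min=0.0, G_max=1.0):
    """Apply one potentiation step with multiplicative Gaussian variation."""
    dG = dG_mean * (1.0 + variability * rng.standard_normal(G.shape))
    return np.clip(G + dG, G_min, G_max)     # stay inside the device window
```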
In addition to the programming variation, the mean ΔG values can also vary between RRAM devices. In the worst case, the mean values of the big synapses differ so much that there is no overlap between the synapses. Consequently, there would be a limit to the complementary action between the synapses, leading to severe accuracy loss; device-to-device variations can thus play an important role. We evaluated the impact of device-to-device variations on the recognition accuracy in Figure 6C. The network is robust against variations (δ/G_min) of up to one. It is worth noting that a large G_max/G_min ratio is essential to improve the immunity to device-to-device variability. The performances of the different neural networks are summarized in Figure 6D. Compared to the single-synapse implementations with imperfect devices, the hybrid neural networks with the proposed learning method achieve an online training accuracy of 97%, which is comparable to the FP synapse implementation (97.92%). In particular, the highest accuracy of 97.34% can be achieved when the number of g states increases to 400.

Hybrid Synapse for Spiking Neural Networks

We further discuss how the proposed hybrid synapse and learning method can be extended to spiking neural networks (SNNs). As with the multilayer perceptron model, SNNs can also benefit from dense crossbar arrays of nanoelectronic devices (Prezioso et al., 2018). An SNN operates by data-driven, event-based activations, which makes it promising for energy-efficient neuromorphic hardware. In particular, RRAM has been regarded as a strong candidate with the advantages of high scalability and low-power operation, showing spike-timing-dependent plasticity (STDP) functionality (Lashkare et al., 2017). Recently, several groups have reported SNNs utilizing multiple devices as a single synapse to secure a higher number of multilevel conductance states (Werner et al., 2016; Shukla et al., 2018; Valentian et al., 2019). Meanwhile, SNNs have suffered from poor learning performance due to the lack of adequate training algorithms, and many efforts have been made to apply the gradient-descent-based backpropagation algorithm to SNN learning to compensate for this issue. Also, on-chip training of SNNs with backpropagation algorithms using analog resistive devices has recently been reported (Kwon et al., 2020). Although the SNN model was not covered in this paper, both the proposed multi-element synapse and the backpropagation-based learning method have been studied in SNN applications as well. Our work, therefore, strongly encourages studies on online-trainable, fast, and high-accuracy SNN hardware with RRAM synapses.

CONCLUSION

To achieve accurate and fast HNN training using RRAM devices, we presented a hybrid synaptic unit and an associated learning method. The hybrid synapse consists of two synapses with different gains: one for dynamic tuning in large steps and the other for fine tuning in detail. By updating only a specific synapse in the synaptic unit depending on the training phase, the weight update process is simplified, and we can accelerate HNN training with a multi-RRAM synaptic architecture. Moreover, we exploited the Mo/TiOx RRAM to experimentally demonstrate the hybrid synapse, implementing the internal gain at the device level with proportionally scaled areas. Therefore, the granularity of the synaptic weights increases significantly even with a finite number of conductance states in the device. Through neural network simulations, we confirmed that it achieves the highest accuracy of 97.00%, comparable to FP synapse implementations. Finally, we summarized the performance for different device parameters by varying the number of states and the programming variabilities.
We expect this work to contribute to building competitive neuromorphic hardware using RRAM synapses even with the devices' physical limitations.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
HANDWRITTEN TAMIL CHARACTER RECOGNITION SYSTEM USING OCTAL GRAPH

INTRODUCTION

Machine simulation of human functions has been a very challenging research field since the advent of digital computers. In some areas, which entail a certain amount of intelligence, such as number crunching or chess playing, tremendous improvements have been achieved. On the other hand, humans still outperform even the most powerful computers in relatively routine functions such as vision Arica et al (2001). Character Recognition (CR) is an umbrella term that has been extensively studied in the last half century and has progressed to a level sufficient to produce technology-driven applications. The rapidly growing computational power now enables the implementation of present CR methodologies and also creates an increasing demand in many emerging application domains, which require more advanced methodologies. Optical Character Recognition (OCR) deals with the recognition of optically processed characters rather than magnetically processed ones. OCR is a process of automatic recognition of characters by computers in optically scanned and digitized pages of text Pal et al (2004). OCR is one of the most fascinating and challenging areas of pattern recognition, with various practical applications. It can contribute immensely to the advancement of automation processes and can improve the interface between man and machine in many applications, as proposed by Mantas (1986) and Govindan et al (1990). Character and handwriting recognition has great potential in data and word processing, for instance automated postal address and ZIP code reading, data acquisition from bank cheques, processing of archived institutional records, and more. Combined with a speech synthesizer, it can be used as an aid for people who are visually impaired. As a result of intensive research and development efforts, systems are available for the English language Bozinovic et al (1989), Jianying Hu et al (1996). Chinese/Japanese character recognition systems were developed by Deng et al (1994), Chang et al (1996), and Yamada et al (1988), and a handwritten numeral recognition system was proposed by Lee (1996). However, less attention has been given to Indian language recognition. Some efforts have been reported in the literature for Devanagari characters Bansal et al (1999), Tamil Chinnuswamy et al (1980), and Bangla scripts Chaudhuri et al (1997). The need for OCR arises in the context of digitizing Tamil documents, from the ancient and old eras to the latest, which helps in sharing the data through the Internet.

OCTAL GRAPH APPROACH

The proposed approach offers a solution for offline handwritten recognition that converts the written letter into an octal graph by representing each pixel of the given character as a node of a graph. Each node has eight fields, hence the term octal graph. The graph represents the basic form of a letter independent of the style of writing. Using the weights of the graphs and appropriate feature matching with the predefined characters, the written characters are recognized.
RECOGNITION PROCEDURE

The system uses octal graph conversion to recognize the handwritten characters. The major phases of the recognition are shown in the work flow diagram of the octal graph approach (Figure 4.3). For the proper working of the algorithm, the following issues are taken into consideration: 1) To convert the pattern to an exactly similar octal graph, the normalized image should be cleaned such that the cells forming a single line do not have more than two set cells as neighbours. 2) For the given input, the features of the octal graph, such as loops, horizontal lines, and vertical lines, should be identified correctly. These factors are taken into consideration while developing the handwriting recognition system.

Segmentation

Text line segmentation is an essential pre-processing stage for off-line handwriting recognition in many Optical Character Recognition (OCR) systems. It is an important step because inaccurately segmented text lines will cause errors in the recognition stage. Text line segmentation of handwritten documents is still one of the most complicated problems in developing a reliable OCR Likforman-Sulem et al (2007). Handwriting text line segmentation approaches can be categorized according to the different strategies used Nicolas et al (2004). These strategies are projection based, smearing, grouping, Hough-based Louloudis et al (2006), graph-based, and the Cut Text Minimization (CTM) approach Shi et al (2004).

Algorithm

The segmentation process separates the individual characters from the given input. This is done by the following steps:
Step 1: The image is checked for inter-line spaces.
Step 2: If inter-line spaces are detected, then the image is segmented into sets of paragraphs across the inter-line gap.
Step 3: The lines in the paragraphs are scanned for horizontal space intersection with respect to the background.
Step 4: A histogram of the image is used to detect the width of the horizontal lines.
Step 5: Then the lines are scanned vertically for vertical space intersection.
Step 6: Here, histograms are used to detect the width of the words.
Step 7: Then the words are decomposed into characters using the inter-character spacing.

Normalization

While normalizing images of various sizes into a single standard size, there may be unwanted pixels that are set in a single stroke. This, if passed to the next stage as such, would result in complications in octal graph construction. The complications arise because two points in a line may be connected by more than a single path. This would result in duplicate linkages between two consecutive nodes of a graph, producing an unwanted loop; since loops are a critical feature used in recognizing the letters from the learning set, such artifacts must be removed. Normalization is done by the following steps:
Step 1: The horizontal and vertical ratios of the corresponding dimensions are found.
Step 2: The pixels of the image are grouped into cells with respect to these ratios.
Step 3: Then the pixels in each cell are read.
Step 4: If any pixel is set in a cell, the corresponding cell is set. If none of the pixels is set in a cell, the cell is not set.
Step 5: This normalized image map is subjected to cleaning, to remove pixels such that no pixels forming a single line have more than two neighbouring set pixels.
Step 6: This is done by looking for pre-defined patterns of pixels and removing them.

Octal Graph Formation

The distance between two nodes should be high enough to represent the features of the letter correctly and also low enough not to take up much memory. The number of directions in which linkages are possible is chosen as eight, because the number of directions must be high enough to express the curvature of the letters correctly and low enough to avoid a highly sparse direction pointer array.

Algorithm

The normalized image is converted into an octal graph (see the sketch after these steps). This is done by:
Step 1: Count the number of set neighbouring cells of each set cell.
Step 2: Mark the connecting points and junction points as nodes.
Step 3: The nodes are connected with respect to the direction of the strokes.
Step 4: Connect all the nodes created so far with proper direction linkages.
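The following is a minimal sketch of the octal graph construction described above: each set cell of the cleaned, normalized binary image becomes a node with eight directional pointers to its set 8-neighbours. The data layout and helper names are illustrative assumptions, not the thesis's implementation.

```python
# Sketch: building an octal graph from a cleaned, normalized binary image.
# Each set cell becomes a node with eight directional pointers to its set
# 8-neighbours. The data layout here is an illustrative assumption.
import numpy as np

# Eight directions, clockwise from east: E, SE, S, SW, W, NW, N, NE.
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def build_octal_graph(img):
    """img: 2-D 0/1 array. Returns {(r, c): [neighbour or None] * 8}."""
    h, w = img.shape
    graph = {}
    for r in range(h):
        for c in range(w):
            if not img[r, c]:
                continue
            links = []
            for dr, dc in DIRS:              # one pointer field per direction
                nr, nc = r + dr, c + dc
                ok = 0 <= nr < h and 0 <= nc < w and img[nr, nc]
                links.append((nr, nc) if ok else None)
            graph[(r, c)] = links
    return graph

def degree(graph, node):
    """Number of set neighbours; a degree above 2 marks a junction point."""
    return sum(link is not None for link in graph[node])
```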
Recognition

The various features of the input graph are identified, including the height and width of the character, the number of loops, the number of lines (horizontal and vertical), the number of curves, etc. These features help in finding the desired match between the input graph and the characters in the repository. The input graph is matched with the characters in the repository by considering features such as loops, horizontal lines, vertical lines, and curves. The features of the input graph are compared with those of the characters in the repository, and the level of confidence is computed for each character. If the confidence level matches that of a character in the repository, then the character is recognized.

RESULTS AND DISCUSSION

The recognizer ranks the letters and displays the top 3 ranked letters. If the correct letter is in the first spot, it is counted as 100% success; if it is in the second, 75% success; if it is in the third, 50% success; otherwise it is a failure. Based on this ranking, an evaluation was done for each letter in the learning set, and the recognition efficiency was evaluated for each letter. A bar graph displays the result of the evaluation. The overall efficiency of the system was found to be 82% (Table 4.1). The evaluation cases consisted of 100 samples in total: 40 good samples, 40 misaligned samples, and 20 extremely disfigured samples. This performance of our system is high considering the fact that the existing systems do not recognize disfigured and misaligned inputs at all.

SUMMARY

In this chapter, the recognition of Tamil characters was improved using octal graph conversion to obtain the maximum possible efficiency. Segmentation and normalization of handwritten characters have proven to be efficient with the application of octal graph conversion, which also improves slant correction. A significant increase in accuracy has been found in comparison with other methods for character recognition. The experimental results show that the accuracy is substantially improved over the previous study. With the addition of sufficient pre-processing, the approach offers a simple and fast structure for building a full OCR system.

Figure 4.1 shows the octal graph representation of a Tamil character. An octal graph, unlike a normal graph, has a node with eight pointers and a data field. Based on the neighbouring pixels, pointer values are assigned to the various fields of the octal node. These octal nodes are connected to the other nodes based on the threshold value. Figure 4.2 shows the octal node representation of a Tamil character. Figure 4.4 shows the segmentation representation of a character.
Figure 4.4 Segmentation
Figure 4.5 Normalization (the conversion of images of various dimensions into a fixed dimension, performed by the steps listed above)
Figure 4.6 Octal Graph Formation
QoS-Aware Resource Allocation for Network Virtualization in an Integrated Train Ground Communication System

Urban rail transit plays an increasingly important role in urbanization processes. Communications-Based Train Control (CBTC) systems, Passenger Information Systems (PIS), and Closed Circuit Television (CCTV) are key applications of urban rail transit that ensure its normal operation. In existing urban rail transit systems, different applications are deployed with independent train ground communication systems. When train ground communication systems are built repeatedly, limited wireless spectrum is wasted, and the maintenance work also becomes complicated. In this paper, we design a network virtualization based integrated train ground communication system, in which all the applications in urban rail transit can share the same physical infrastructure. In order to better satisfy the Quality of Service (QoS) requirement of each application, this paper proposes a virtual resource allocation algorithm based on QoS guarantees, base station load balance, and application station fairness. Moreover, with the latest achievements of distributed convex optimization, we exploit a novel distributed optimization method based on the alternating direction method of multipliers (ADMM) to solve the virtual resource allocation problem. Extensive simulation results indicate that the QoS of the designed integrated train ground communication system can be improved significantly using the proposed algorithm.

Introduction

With city expansion and urban population explosion, traditional road traffic facilities cannot satisfy the demands of modern society. Energetically developing urban rail transit systems and improving the speed and capacity of rail transit have become desirable goals all over the world, and studies of urban rail transit have become a research focus among engineers and researchers.

Train ground communication is a key technology to ensure the normal operation of urban rail transit [1]. Most urban rail transit applications, such as Communications-Based Train Control (CBTC) systems [2], Passenger Information Systems (PIS), and Closed Circuit Television (CCTV), need train ground communication systems. In existing urban rail transit systems, CBTC, PIS, and CCTV adopt WLAN, which uses unlicensed spectrum, as their train ground communication technology [3]. The construction and management of the train ground communication system for each application are independent in existing urban rail transit systems. It is a huge waste of limited wireless spectrum and other social resources to invest in and build new communication infrastructures for each application, and maintaining these infrastructures is also a great burden. In order to ensure the safety of urban rail transit operation, integrating all these communication systems into a whole is quite desirable for urban rail transit systems.
The major opportunities and challenges in train ground communication systems are summarized in [4]. Many researchers have recently studied issues related to urban rail transit train ground communication. Reference [5] presents a comprehensive tutorial, as well as a survey of the state of the art, of CBTC and the role of radio communication in it; a summary of the evolution of the communication technologies used for modern railway signalling, best practices in the design of a CBTC radio network, and measures to optimize its availability are discussed as well. In [6], a MIMO-assisted handoff (MAHO) scheme for CBTC systems is proposed to reduce transmission and handoff delay. In [7], Markov models of redundant and nonredundant CBTC train ground communication system structures are established to analyze system reliability and availability; the effects of different system redundancies and the relationship between the availability of the CBTC train ground communication system and the speed of the train are also discussed. Channel modeling in CBTC train ground systems is intensively studied in [8, 9]. Combining Artificial Intelligence (AI) based decision-making and learning algorithms, Amanna et al. [10] present a railroad-specific cognitive radio (rail-CR) with software-defined radio (SDR). Based on periodic signal quality changes, the authors of [11] propose a scheduling and resource allocation mechanism to maximize the transmission rate in LTE based train ground communication systems. For the handoff problem in train ground communication systems, a seamless handoff scheme based on a dual-layer and dual-link system architecture is proposed in [12] to reduce communication interruption time. In our previous work, cross-layer handoff designs were studied extensively in [13] for WLAN based CBTC train ground communication systems.

The above works study urban rail transit train ground communication system performance and analyze the influence of the rail transit environment on system performance. However, most of them focus only on individual applications; few studies take all the train ground applications into consideration. Our previous works tested the performance of an LTE based integrated train ground communication system [14-16]. We also studied the handoff design in existing integrated train ground communication systems [3]. However, the problem of improper system wireless spectrum allocation is largely ignored in these works.

In this paper, we design a network virtualization based integrated train ground communication system for urban rail transit. With a variety of applications, the designed system can be upgraded from the existing system. This kind of design not only reduces construction and operational costs but also improves spectrum utilization efficiency. In order to better meet the QoS requirements of the applications in the designed system using wireless network virtualization technology [17], this paper proposes a virtual resource allocation algorithm based on QoS guarantees, base station (BS) load balance, and application station fairness. Meanwhile, we define a QoS satisfaction level (QoSL) parameter to reflect application satisfaction. The final optimization goal is to ensure CBTC application reliability and maximize the QoS satisfaction of all the application stations.
In addition, with the further development of distributed convex optimization, we develop a distributed wireless virtual resource allocation algorithm based on the alternating direction method of multipliers (ADMM) [18] to solve the virtual resource allocation problem. Simulation results indicate that the QoS of the designed integrated train ground communication system can be remarkably improved with the proposed method.

The rest of the paper is organized as follows. In Section 2, the integrated train ground communication system architecture is introduced. Section 3 describes the system model and problem formulation. The virtual resource allocation problem transformation and its solution using ADMM are discussed in Sections 4 and 5, respectively. Simulation results are given in Section 6. Finally, the conclusion is given in Section 7.

The Designed Integrated Train Ground Communication System Architecture

In this section, we first introduce the QoS requirements of the different applications in train ground communication and then present the basic structure of the designed integrated train ground communication system. Next, we study how each of the virtualization characteristics is realized in a physical BS. Finally, we depict the use of network virtualization in the designed system.

Applications in Urban Rail Transit. As shown in Figure 1, in CBTC systems, continuous bidirectional wireless communications between the ground base station (BS) and each onboard application station are used instead of the traditional track circuit based train control system. A train obtains the state of the train ahead and of other obstacles from the Zone Controller (ZC) and computes a braking curve so as to stop at a proper position. Theoretically, the distance between two trains can be just a few meters, if both trains obtain the real time position of the train ahead and both trains have the same speed and braking capability. However, as explained in [19], when the following train does not obtain the real time position of the train ahead due to train ground communication delay, it triggers the brake to stop before entering a danger zone. This process has a significant negative impact on CBTC system performance. Therefore, the most important QoS measure of the train ground communication system is transmission delay. Typical values of the required transmission delay and other suggested QoS measures in CBTC systems are given in Table 1.

In urban rail transit systems, the other two crucial applications are PIS and CCTV. Taking advantage of advanced communication and multimedia techniques, diverse multimedia information, such as weather forecasts, train arrival times, and advertisements, is provided to passengers on trains and in stations through PIS. CCTV is a crucial additional means to guarantee secure train operation. Using CCTV, the urban rail control center can monitor the train carriages, stations, and other essential zones through continuous train ground video transmission. For the PIS and CCTV applications, throughput and jitter delay are the direct performance measures, since high quality video needs higher throughput and less jitter delay.
The suggested values of the transmission data rate and other suggested QoS measures for PIS and CCTV are given in Table 1 as well. We need to point out that the proposed optimization algorithm in our designed integrated train ground communication system does not depend on the data in Table 1; once more authoritative performance requirement parameters are available, they can be used in our optimization model, and more accurate simulation results can be obtained.

The construction and management of the train ground communication system for each application are independent in existing urban rail transit systems. It is a huge waste of limited wireless spectrum and other social resources to invest in and build new communication infrastructures for each application. Recently, engineers have tried to design a system that combines all the applications together. The system architecture is shown in Figure 2. In order to improve CBTC system reliability, two independent ground infrastructures are used. There are two CBTC application stations on the train, installed on its nose and tail, and they are connected to different ground infrastructures. The two independent train ground infrastructures are allocated constant spectrum. The PIS and CCTV application stations only connect to one of the ground infrastructures and share wireless spectrum with the CBTC system.

One disadvantage of the above system is its improper spectrum resource allocation scheme. The design with two independent ground infrastructures guarantees CBTC system reliability. However, the spectrum allocated to urban rail transit systems is limited, and all the channels used by the different applications share the same spectrum. The channels needed by the different applications change dynamically, and allocating constant channels to the different applications wastes the limited spectrum resource.

In order to better satisfy the QoS requirements of the different applications, we design an integrated train ground communication system for urban rail transit using wireless network virtualization techniques, which is introduced in the next subsections.

Architecture of the Designed Integrated Train Ground Communication System. The designed system architecture is shown in Figure 3. Different from the existing system, in our designed system, the two infrastructures can be connected to by the PIS and CCTV application stations as well as by both CBTC application stations.

The proposed integrated train ground communication system architecture is shown in Figure 4. For a certain railway line, it is assumed that there is only one physical infrastructure provider (PiP), which provides three different network services to the train with three different application stations. According to the general wireless network virtualization definition, the proposed architecture can be divided into two separate layers: the control and management layer (CML) and the virtualization layer (VL).

The main responsibility of the CML is resource management. Each virtual network has its own network controller, which is responsible for scheduling application stations, determining their QoS requirements, and informing the hypervisor of them. The hypervisor can flexibly allocate the virtual resources to the virtual networks under different circumstances according to the feedback information (e.g., transmit power and available spectrum) and the different QoS requirements. The whole network has one hypervisor. By using wireless network virtualization, each application station can be served via the same PiP and different spectrum resources.
The VL is accountable for the abstraction, programmability, and isolation of physical resources in a certain physical base station (BS). Using various VL functions, the PiP is able to broadcast beacons for the virtual BSs of the various applications. In addition, each of the virtual networks has independent control of the settings of its virtual BSs; they can set different attributes for the virtual BSs, such as security policies, broadcast domains, and IP settings. Furthermore, the virtual BSs can be isolated by different wireless spectrum. The VL also provides the CML with the interfaces needed to control the virtualized resources (spectrum, transmission power, etc.). With the VL, both the PiP and the wireless resources are virtualized and shared by the various virtual networks.

Virtual resource allocation is a key issue in the above system. Physical and wireless virtual resources should be dynamically allocated to CBTC, PIS, and CCTV according to their requirements. If the virtual resource allocation scheme is not carefully designed, normal CBTC system function will not be ensured, and the video transmission quality of PIS and CCTV will degrade, which would have a significant negative impact on the urban rail transit system. To this end, we study virtual resource allocation schemes in the following sections.

System Model and Problem Formulation

In the designed system, we define N as the set of base stations (BSs), N = {1, 2, ..., N}. The integrated system is virtualized into multiple virtual BSs (VBSs) for the different services. The system has a set K of VBSs, K = {1, 2, ..., K}. For each VBS k ∈ K, V_k is the set of application stations of VBS k, and v is one of the application stations served by VBS k, v ∈ V_k. In the integrated system, a wireless channel is the granularity of physical wireless resources for the hypervisor. Each VBS needs a certain number of subchannels to meet the QoS requirements of its applications. We define m as a subchannel of a BS, and M is the set of all available channels of a physical BS, given the legal frequency spectrum. We assume transmit power is evenly distributed across the channels. The hypervisor can accurately obtain the Channel State Information (CSI), the available spectrum, and the QoS requirements of the application stations. In order to improve the utilization of spectrum resources, each subchannel can adopt a different modulation mode according to the channel state information.

The virtual resource allocation optimization can be described as maximizing the total application satisfaction subject to the system constraints. A strictly concave, monotonically increasing, and continuously differentiable logarithmic utility function Γ(·) [20] is used to ensure proportionally fair resource allocation, yielding the formulation Opt-U1. Using the properties of Γ(·), Opt-U1 can be transformed into the equivalent formulation Opt-U2.

For the PIS and CCTV applications, our objective is to maximize their data transmission throughput and minimize their jitter delay. Therefore, the reward function for these two applications is defined in terms of R_{m,v}, the achievable data rate between subchannel m and user v, and Ξ_{m,v}, the jitter delay when subchannel m is used for user v. The achievable rate is a function of the available subchannel bandwidth B_m, the SNR S_{m,v}, and the bit error rate BER, and can be computed as follows [21]:

R_{m,v} = B_m log2(1 + K_BER S_{m,v}), with K_BER = 1.5 / (−ln(5 BER)).
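As a numerical companion to the rate expression above, the sketch below evaluates the BER-constrained capacity approximation. The specific closed form R = B log2(1 + 1.5 SNR / (−ln(5 BER))) is an assumption standing in for the formula of [21], and the example bandwidth is illustrative.

```python
# Sketch: achievable subchannel rate under a target BER, using the common
# approximation R = B * log2(1 + k * SNR) with k = 1.5 / (-ln(5 * BER)).
# This specific closed form is an assumption standing in for [21].
import math

def achievable_rate(bandwidth_hz, snr_db, target_ber):
    snr = 10 ** (snr_db / 10)
    k = 1.5 / (-math.log(5 * target_ber))
    return bandwidth_hz * math.log2(1 + k * snr)

print(achievable_rate(180e3, snr_db=20.0, target_ber=1e-3))  # one 180 kHz subchannel
```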
For the CBTC application, it is important to maintain a quick response time between the train and the ground. Therefore, the reward function that reflects the transmission delay is defined in terms of Υ_{m,v}, the achievable data transmission delay. Combining small-scale fading and large-scale fading, the received SNR (in dB) is

S = P_t − P_loss + 10 log10(F) + X_σ + G_t + G_r − P_noise,

where P_t is the transmitted power, P_loss is the large-scale path loss, F is a Rayleigh random variable with a mean of 1 when a Rayleigh distribution is used to describe the fading envelope, X_σ is a Gaussian random variable with a variance of σ² and a mean of 0, G_t and G_r are the antenna gains of the transmitter and the receiver, respectively, and P_noise is the noise power. The path loss P_loss depends on the working frequency and the transmission environment; in this paper, we use the path loss model described in [21].

The BER is determined by the suggested packet loss rate PLR given in Table 1. Given the link BER, the Frame Error Rate FER and the PLR are computed as

FER = 1 − (1 − BER)^L,  PLR = FER^MR,

where L is the packet length in bits and MR is the maximum number of transmissions.

In this paper, we take the LTE link layer as an example to compute the end to end transmission delay. LTE is a new generation of wireless communication technology, and it has become the dominant train ground communication technology for next-generation CBTC systems [15]. In LTE systems, Hybrid Automatic Repeat reQuest (HARQ) is used for error control. Given n retransmissions, the transmission delay is

T(n) = T_data + n RTT,

where T_data is the packet transmission time, which depends on the transmission rate, and RTT is the Round Trip Time, approximated as

RTT ≈ T_up + T_down + T_process,

where T_up and T_down are the uplink and downlink data transmission delays and T_process is the processing time at the BS and the application stations.

With at most MR transmissions, the average transmission delay is

Υ = Σ_{n=0}^{MR−1} (1 − FER) FER^n T(n).

The jitter delay is defined as the standard deviation of the transmission delay at any slot. Therefore, with the maximum transmission number MR, the jitter delay is

Ξ = sqrt( Σ_{n=0}^{MR−1} (1 − FER) FER^n (T(n) − Υ)² ).
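The link-layer delay statistics above can be evaluated directly, as in the short sketch below. The packet length, timing values, and the truncation at MR transmissions are illustrative assumptions consistent with the model, not measured CBTC parameters.

```python
# Sketch: HARQ transmission delay statistics from the link model above.
# Packet length, timing values, and the truncation at MR transmissions
# are illustrative assumptions.
import math

def fer(ber, packet_bits):
    return 1.0 - (1.0 - ber) ** packet_bits

def delay_stats(ber, packet_bits, t_data, rtt, mr):
    """Mean and jitter (std) of the HARQ delay with at most MR transmissions.
    The residual probability FER**mr corresponds to a lost packet (the PLR)."""
    f = fer(ber, packet_bits)
    probs = [(1 - f) * f ** n for n in range(mr)]   # n extra retransmissions
    delays = [t_data + n * rtt for n in range(mr)]
    mean = sum(p * t for p, t in zip(probs, delays))
    var = sum(p * (t - mean) ** 2 for p, t in zip(probs, delays))
    return mean, math.sqrt(var)

print(delay_stats(ber=1e-5, packet_bits=8000, t_data=1e-3, rtt=8e-3, mr=4))
```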
Virtual Resource Allocation Using ADMM

As a general solution, the CVX tool can be used to solve the convex program in (14). Given the optimal association indicator matrix X* = {x_{k,v}} and the optimal resource allocation indicator matrix Z* = {z_{m,v}} at time t, the corresponding allocation scheme follows directly. To obtain this optimal allocation, however, a centralized algorithm must collect the achievable rates R_{m,v} of all users at time t and the average satisfaction levels U_v(SL_v)/SL_v of all users at time t − 1. This results in a relatively large amount of computation for a high-speed urban rail transit system. To overcome this, we use ADMM to solve the convex problem. ADMM is a computing framework for optimization; it is well suited to distributed convex optimization problems, especially statistical learning problems [18].

To use ADMM for this convex optimization, local copies of the global assignment indicators are introduced. Roughly speaking, each local variable can be interpreted as the information owned by each BS about the corresponding global assignment indicator variable.

To drive the local copies into consensus, we use the distributed consensus ADMM method [18]. Let Δ = {z_{m,v}, ∀v, m} denote the vector of assignment indicators, and let Δ_n denote the local copy of Δ at BS n. For the consensus constraints, we introduce an auxiliary variable that represents the local copies of our assignment indicators as equality constraints. Given the local vectors Ω_n and ℓ_n, we define a feasible local variable set for each BS n ∈ N. The constraints in (15b) can be decomposed into independent convex sets, each with an associated local utility function. Using (17) and (18) and the auxiliary variable, we can compactly write the global consensus problem (14) in the standard form (19). The augmented Lagrangian function for (19) can then be written in terms of λ, the Lagrange multipliers associated with the consensus constraints in (19), and ρ > 0, a penalty parameter that adjusts the convergence speed of the ADMM [18]. The basic idea of ADMM is that a convex optimization is broken into smaller pieces, each of which is easier to handle. The ADMM method consists of successive optimization steps that update the primal and dual variables alternately; at each iteration we take the corresponding update steps (a generic sketch is given below).

Simulation Results and Discussions

In this section, we use MATLAB 2015b to carry out the simulation. Simulation results are presented to illustrate the performance of the proposed algorithm.
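The alternating update pattern can be illustrated with a generic consensus-ADMM sketch; the quadratic local objective below is a placeholder assumption standing in for the per-BS utility, not the paper's actual objective:

import numpy as np

# Generic consensus ADMM: N agents jointly minimize sum_i f_i(x) over a
# shared variable x, each keeping only a local copy driven into consensus.
# f_i(x) = 0.5 * ||A_i x - b_i||^2 is a placeholder local objective.
def consensus_admm(A_list, b_list, rho=1.0, iters=200):
    n = A_list[0].shape[1]
    N = len(A_list)
    x = [np.zeros(n) for _ in range(N)]    # local primal copies
    z = np.zeros(n)                        # global consensus variable
    lam = [np.zeros(n) for _ in range(N)]  # dual variables (multipliers)
    for _ in range(iters):
        # local updates: argmin f_i(x) + lam_i.(x - z) + (rho/2)||x - z||^2
        for i in range(N):
            H = A_list[i].T @ A_list[i] + rho * np.eye(n)
            rhs = A_list[i].T @ b_list[i] - lam[i] + rho * z
            x[i] = np.linalg.solve(H, rhs)
        # global update: averaging drives the local copies into consensus
        z = np.mean([x[i] + lam[i] / rho for i in range(N)], axis=0)
        # dual update: penalize remaining disagreement with the consensus
        for i in range(N):
            lam[i] += rho * (x[i] - z)
    return z

Each BS only needs its own data for the local step, which is what removes the centralized collection bottleneck described above.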
In order to simplify the simulation model, we assume that there are four physical base stations in the integrated train-ground communication system and that each physical BS can be virtualized into three virtual base stations, providing the three services CBTC, PIS, and CCTV, respectively, as shown in Figure 5. Among them, the red network base stations BS1 and BS2 belong to infrastructure provider InP1, and the blue network base stations BS3 and BS4 belong to infrastructure provider InP2. BS1 and BS3 (and likewise BS2 and BS4) cover the same geographic area, which forms redundant coverage and ensures the reliability of the CBTC system. We assume wireless virtualization can be used between different InPs. Wireless spectrum resources can be shared by the multiple virtual base stations virtualized from BS1 and BS3 or from BS2 and BS4. For the application stations, there is no observable difference between the different infrastructures; all resources appear to be within the same resource pool (e.g., the CBTC, PIS, and CCTV stations draw from the same pool).

In order to illustrate the performance improvement of our proposed algorithm, we compare it with an existing algorithm in which the application stations connect to the base stations providing the maximum received signal strength (RSS), and each BS then allocates wireless spectrum with proportional fairness. We name this existing scheme Max-RSS.

As we can observe from Figure 6(a), under the Max-RSS scheme the satisfaction levels of some application stations are less than zero, so the QoS of these application stations is not guaranteed. However, the QoS requirements of all application stations can be satisfied with the proposed WVRA scheme, as shown in Figure 6(b). This is because more than one application station may be associated with the same base station at the same time, but under Max-RSS the application stations simply connect to the base stations providing the maximum received signal strength, and the QoS guarantee is not considered. On the contrary, the WVRA scheme fully considers the QoS guarantee, base station (BS) load balance, and application station fairness. Under this scheme, the QoS requirement of each application station is guaranteed.

Next, we assess the fairness of the different algorithms using the fairness index described in [24], a Jain-type index over the per-station allocations (a sketch is given below). If the fairness index is close to 1, the algorithm has a higher degree of fairness, and vice versa. As we can observe from Figure 7, with the gradual increase of application stations in the cell, the Max-RSS algorithm cannot guarantee a fair distribution of the virtual resources. This is mainly because the wireless resources are limited, and strong competition between applications leads to a decrease in fairness. Our proposed algorithm WVRA, however, effectively ensures the fairness of the virtual resource allocation: although the number of application stations continues to increase, the fairness index stays essentially unchanged, which means the virtual resources can still be fairly allocated.
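A minimal sketch of the Jain-type fairness computation; that the index is taken over the per-station satisfaction levels is our assumption:

import numpy as np

def jain_fairness(x):
    # Jain's index: (sum x)^2 / (n * sum x^2); equals 1 for a perfectly
    # even allocation and approaches 1/n when one station takes everything.
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * np.square(x).sum())

# e.g., jain_fairness([1, 1, 1, 1]) -> 1.0; jain_fairness([4, 0, 0, 0]) -> 0.25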
In order to verify the jitter delay performance improvement for the PIS and CCTV applications, we illustrate the transmission delay of the CCTV application in Figure 8. The transmission delays of our proposed WVRA scheme are less volatile than those of the existing scheme, which means the proposed WVRA scheme performs better in terms of jitter delay. This is because the WVRA scheme fully considers the QoS requirements of all applications, and one of its direct optimization objectives is to minimize the jitter delay of PIS and CCTV. We also notice that the WVRA scheme sacrifices part of the transmission delay performance to realize its optimization objective.

We study the spectrum allocation among the virtual base stations of the physical base stations and compare the load fluctuation of each BS in Figures 9 and 10, respectively. As illustrated in Figure 9, the spectrum allocated to VBS1, which carries the CBTC application, is approximately unchanged in each time slot. This is because the optimization objective of the CBTC application is not the transmission data rate, so its required spectrum is relatively stable. As for the other two VBSs, which carry the PIS and CCTV application traffic, the spectrum allocated to them changes at each time slot with the traffic load. This is because maximizing the transmission throughput needs a large amount of spectrum.

In order to verify the load balance performance, we set up a simulation environment in which BS1 and BS3 give a higher received signal strength in the overlap zone. Figure 10 shows the change of base station load as the number of application stations in the coverage area increases. The red lines in the figure represent the effect of the Max-RSS scheme on the base station load, and the blue lines the effect of the WVRA scheme. The green oval marks the BS1 load fluctuations, the aquamarine oval BS2, the yellow oval BS3, and the remaining oval BS4. As shown in Figure 10, under the Max-RSS scheme the loads of BS1 and BS3 increase constantly, while the loads of BS2 and BS4 do not change as the number of application stations grows. This is because application station fairness is not considered under that scheme. On the contrary, the WVRA scheme successfully shifts part of the load of BS1 and BS3 to the more lightly loaded BS2 and BS4, even though BS2 and BS4 offer a lower instantaneous received signal strength than BS1 and BS3.

Conclusions

In this paper, we have proposed a framework for using network virtualization in an integrated train-ground communication system. We have formulated the QoS-aware virtual resource allocation problem in the integrated system and transformed it into a convex optimization problem. We define a QoS satisfaction level parameter to reflect application satisfaction. The final objective is a fairness-driven optimization function based on QoS guarantees, base station load balance, and application station fairness. We use a distributed method based on ADMM to solve the convex problem. Simulation results indicate that our algorithm can guarantee the QoS requirements of all application stations. Meanwhile, the traffic load of the different base stations can be balanced to achieve better performance of the whole system.

Figure 3: The proposed integrated train ground communication system.
Figure 4: A framework of using wireless network virtualization in the proposed system.
Figure 5: System model with four base stations and three virtual base stations for CBTC, PIS, and CCTV, respectively.
Figure 10: BS load using different resource allocation algorithms.
Table 1: The QoS requirements of different applications in urban rail transit systems.

The main functions of the CML are realized by several virtual network controllers (one per virtual network) and a hypervisor. Ψ denotes the suggested performance value for each application. x_{k,v} and y_{m,v} are assignment indicators: if application station v is assigned to BS k, then x_{k,v} = 1, and if subchannel m is assigned to user v, then y_{m,v} = 1; otherwise x_{k,v} = 0 and y_{m,v} = 0. An application station is served by only one BS, and one subchannel is not assigned to multiple application stations. The delay inequality reflects the fact that the transmission delay Υ_v of a CBTC application station cannot exceed its requirement threshold Υ_req. The QoS satisfaction level (QoSL) SL_v of application station v is defined through the reward functions Γ(·) for the applications under the virtual resource association strategies.
Paul Drude's Prediction of Nonreciprocal Mutual Inductance for Tesla Transformers

Inductors, transmission lines, and Tesla transformers have been modeled with lumped-element equivalent circuits for over a century. In a well-known paper from 1904, Paul Drude predicts that the mutual inductance for an unloaded Tesla transformer should be nonreciprocal. This historical curiosity is mostly forgotten today, perhaps because it appears incorrect. However, Drude's prediction is shown to be correct for the conditions treated, demonstrating the importance of constraints in deriving equivalent circuits for distributed systems. The predicted nonreciprocity is not fundamental, but instead is an artifact of the misrepresentation of energy by an equivalent circuit. The application to modern equivalent circuits is discussed.

Introduction

The German physicist Paul Drude (1863-1906) contributed significantly to many fields of science during the late 19th and early 20th centuries [1]. In particular, he remains well known for pioneering work in optics and solid-state physics. Less familiar is that late in life Drude published a series of articles [2-5] on the physics of Tesla transformers (or Tesla coils), which at the time were important for early radio communication [6,7]. While these articles are mainly of historical interest today, the article from 1904 is still cited as a primary reference for the conventional equivalent circuit of a Tesla transformer (e.g., [8-10]). Such equivalent circuits (or lumped-element models) are ubiquitous in the study of physical systems, from acoustic resonators [11] to coupled qubits [12]. Importantly, these circuits are widely used to model not only lumped systems that are small compared to the wavelengths of interest, but also distributed systems like Tesla transformers that may not be. This has long been a standard practice in radio and microwave engineering, especially with resonant transmission lines, microwave networks, and inductors [7,13-15].

Most systems modeled by circuits satisfy some form of reciprocity, or broadly, symmetry under the exchange of source and response [16]. For these reciprocal systems, a common assumption today is that their equivalent circuits must also be reciprocal. However, there is a startling prediction in Drude's 1904 article [4]: Drude predicts that the mutual inductance for a Tesla transformer should be nonreciprocal (i.e., M_12 ≠ M_21). Though nearly forgotten, this prediction seems to have been well known in the early 20th century [17]. Today, it has every appearance of being a mistake. After all, there are no clear sources of nonreciprocity in a Tesla transformer, such as magnetic materials, so how could this prediction possibly be correct? Despite its appearance, we will see that Drude's prediction is indeed true, although for an unexpected reason.

This Article explains the physics behind Drude's overlooked prediction. To proceed, we will not focus on Drude's original derivation of an equivalent circuit for a Tesla transformer, because the original unfortunately contains errors and a distracting treatment of inductance. It also neglects to explain the phenomenon behind the prediction. For the interested reader, an English translation and discussion of the original derivation in German has been provided in Ref. 18. Instead, this Article presents a modern treatment of the phenomenon behind Drude's prediction.
We will see how reciprocal systems, paradoxically, may have nonreciprocal equivalent circuits in rare applications. Besides historical interest, this phenomenon highlights the boundary between lumped and distributed systems and, in particular, the potential for confusion when modeling the latter with the former.

Drude's Prediction

To illustrate Drude's prediction, consider the following specific example of an air-core transformer sketched in Fig. 1(a), which could be part of a Tesla transformer. A standard equivalent circuit is sketched in Fig. 1(b) that is valid for direct current (dc) and low-frequency alternating current (ac), assuming the transformer is much smaller than the shortest ac wavelength. For an ideal lumped transformer there are various ways to show that the primary and secondary inductors share the same mutual inductance, M_ps = M_sp, such as reciprocity [19], symmetry [20], and conservation of energy [21-23]. In particular, the latter requires this equality because otherwise energy would be lost or gained during transfer between the inductors.

However, what about at higher frequencies? Now let the secondary be a single-layer solenoid, just as in a Tesla transformer. While real solenoids are quite complex [24], they often act very nearly as transmission lines [14,15,25]. Following Drude [4], let us then model the secondary as a distributed transmission line. As arranged, the solenoid is a quarter-wave resonator. For frequencies near the fundamental self-resonance it will have the current and voltage spatial profiles sketched in Fig. 1(c). While these profiles suggest otherwise, the solenoid in a Tesla transformer is typically much smaller in size than the free-space wavelength corresponding to the fundamental self-resonance. This is because these solenoids are slow-wave structures [15], and near this resonance it is the coiled winding length, often enhanced by a large number of turns (e.g., ~1000), that typically becomes comparable to a quarter wavelength. Nevertheless, the standard "lumped" circuit in Fig. 1(b) predicts no resonances, and is no longer valid at frequencies near or above the fundamental self-resonance of the solenoid.

We may still derive a lumped-element model (or equivalent circuit) for the transformer, however, by starting with a distributed-element model for the solenoid, just as for a resonant transmission line. Doing this, we will find that for frequencies near the fundamental self-resonance, we may model the voltages and currents in Fig. 1(a) with the equivalent circuit sketched in Fig. 1(d). As derived below, the mutual inductances in this circuit are no longer equal, but satisfy

M_sp / M_ps = 4/π ≈ 1.27. (1)

Surprisingly, conservation of energy requires this result. While Drude's original derivation is incomplete, it may be corrected to give the above result, as shown in Ref. 18.

This phenomenon predicted by Drude is an artifact of modeling transmission lines with lumped equivalent circuits. To explain it, we will treat the general case of a uniform transmission line coupled to an external system. We will derive an exact equivalent circuit for the specific example described above, and obtain the simplified circuit in Fig. 1(d) by keeping only the part most important near the fundamental self-resonance, following Drude [4]. The specific result (1) then comes not from any fundamental nonreciprocity, but instead from the subtle choice to model the same voltage and current as in Fig. 1(a), namely the voltage drop across and the current into a resonant inductor.
It is one example of an artificial nonreciprocity originating from the misrepresentation of energy, or equivalently, from the "lumped" circuit parameters retaining a distributed character. Finally, we will extend this phenomenon to other equivalent circuits for lines, further examine its application to solenoids and Tesla transformers, and conclude with a discussion.

Equivalent Circuits for Transmission Lines

Consider the transmission line sketched in Fig. 2(a), described by the four parameters of series resistance r, series inductance l, shunt conductance g, and shunt capacitance c, each distributed per unit length. The voltage V(x,t) and current I(x,t) at any position x along the line then obey the Telegrapher's equations (2), which correspond to the distributed-element model sketched in Fig. 2(b). The additional terms v_sp(x,t) and i_sp(x,t) are distributed sources that model coupling with external systems, such as the primary inductor in Fig. 1(a). By convention, positive I(x,t) flows towards increasing x in the solenoid.

Before we continue, note that the distributed-element model sketched in Fig. 2(b) is itself a form of equivalent circuit for a line, and that today, unlike for Paul Drude in 1904, there are many numerical methods [26,27] available to use such a model directly for a line or for more complicated systems.

To generate a lumped-element equivalent circuit, we first expand the voltage and current along the line in spatial Fourier series. For the geometry of Fig. 1(a), a convenient choice is the pair of quarter-wave Fourier series (3). For a line of length H, this series is complete in the interior (0,H) of the line, and the wavenumbers are k_n = (2n−1)π/(2H) = π/(2H), 3π/(2H), 5π/(2H), etc.

Next, we introduce a set of lumped circuit parameters R_n, L_n, G_n, and C_n for each spatial mode n from the corresponding distributed parameters r, l, g, and c, by using the series and shunt scaling lengths

A_n = R_n/r = L_n/l and B_n = G_n/g = C_n/c. (4)

Any equivalent circuit must preserve the natural resonant frequencies ω_n = √((k_n² + rg)/(lc)) of its modes n, so these scaling lengths must satisfy (5). The lumped parameters R_n, L_n, G_n, and C_n for the mode n are then determined if we specify the ratio x_n (6), which controls how the circuit represents impedance. From this ratio, A_n = x_n/k_n and B_n = 1/(x_n k_n). For a given line, there is no unique choice of x_n or of the resulting parameters R_n, L_n, G_n, and C_n. Without loss of generality, however, we can choose the ratio x_n = 1, which sets the parameters R̃_n, L̃_n, G̃_n, and C̃_n (7). Here and subsequently, a tilde marks this choice. How to transform to the case x_n ≠ 1 is described below.

Finally, using Eqs. (3)-(7), the Telegrapher's equations (2) separate into a system of equations (8) for the Fourier amplitudes V_n(t) and I_n(t) of each spatial mode n. Here, the lumped sources that represent coupling with an external system are V_sp,n(t) and I_sp,n(t) (9), which we will see below is the natural choice from conservation of energy. The angle w_n is half the electrical length k_n H of the line for the spatial mode n,

w_n = k_n H / 2. (10)

For the expansion (3), w_n = π/4, 3π/4, 5π/4, etc. Together, the set of circuits defined by the system (8), sketched in Fig. 3(a), comprises an exact equivalent circuit for the line. Fourier series different from (3) produce similar results, though the nonresonant (dc) terms in some are special cases with A_n or B_n = 0.
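The explicit forms of Eqs. (2) and (3) do not survive in this text; the following is our reconstruction from the stated conventions (the source signs and the assignment of sine and cosine to V and I, which follow from a grounded base and open top, are assumptions):

−∂V(x,t)/∂x = r I(x,t) + l ∂I(x,t)/∂t + v_sp(x,t),
−∂I(x,t)/∂x = g V(x,t) + c ∂V(x,t)/∂t + i_sp(x,t), (2)

V(x,t) = Σ_{n=1}^∞ V_n(t) sin(k_n x),   I(x,t) = Σ_{n=1}^∞ I_n(t) cos(k_n x),   k_n = (2n−1)π/(2H). (3)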
Importantly, note that these separate circuits may stitch together to form one combined circuit depending on the relationship of the sources V_sp,n(t) and I_sp,n(t) between the modes n.

Misrepresentation of energy

Before treating coupling in detail, we can explain the nonreciprocity (1) as follows. First, note that the energy stored by the mode n along the line is given by (11) for real-valued V_n(t) and I_n(t). Here, the brackets denote a time average, and the effective parameters are given by L_U/l = C_U/c = H/2. In contrast, the energy modeled by the equivalent circuit (8) is not that of (11), but instead (12). Therefore, the equivalent circuit (8) misrepresents the energy stored (and the power dissipated) along the line by a factor of 1/w_n ≠ 1. That is, the equivalent circuit (8) models the energy stored per w_n radians along the line. An additional lengthening argument for this misrepresentation is sketched in Fig. 4. This is the origin of the nonreciprocity (1). Since the equivalent circuit for the line misrepresents energy, its representation of coupling with an external system, such as the primary inductor in Fig. 1, must convert any transferred energy (or power) to this incorrect representation. Assuming the equivalent circuit for the external system represents energy correctly, this conversion requires a directional amplification (or gain) to represent the coupling, which is accomplished by the two factors of 1/w_n in (8). Amplifiers are nonreciprocal circuit elements, so this representation is nonreciprocal [16].

Distributed character of circuit parameters

Intuitively, this phenomenon results from the "lumped" parameters in the circuit (8) still retaining a distributed character: note that L̃_n = L_s/(2w_n) is an inductance per radian, just as l = L_s/H is an inductance per length, where L_s = lH is the dc self-inductance. Thus, for a fixed wavenumber k_n, the parameters R̃_n, L̃_n, G̃_n, and C̃_n are properties of the line and independent of its length H, just like r, l, g, and c. However, for fixed amplitudes V_n(t) and I_n(t), lengthening the line increases its stored energy (11), as sketched in Fig. 4. Therefore, to conserve energy, the equivalent circuit for the mode n must have a coupling parameter that scales with w_n. For r = g = 0 and no coupling, note that two constraints determine L̃_n and C̃_n: (i) the resonant frequency ω_n = (L̃_n C̃_n)^(−1/2), and (ii) the impedance V_n(t)/I_n(t) = (L̃_n/C̃_n)^(1/2). As sketched for n = 1, lengthening by one or more wavelengths does not change (i) or (ii), and thus neither L̃_n nor C̃_n. For fixed V_n(t) and I_n(t), the energy (12) modeled by the circuit in Fig. 2(a) also does not change. However, the stored energy (11) must increase, so this circuit misrepresents energy.

Transformation to other equivalent circuits

So far, we have focused on one particular equivalent circuit. This phenomenon, however, is modified by the choice of circuit, which is not unique. The circuit (8) is constrained to model V_n(t) and I_n(t), which is appropriate for the specific example because V_s(t) ≈ V_1(t) and I_s(t) ≈ I_1(t) for frequencies near the fundamental ω_1. To relate this phenomenon to other equivalent circuits, consider substitutions of the form (13), which lead to circuits modeling the variables V′_n(t) and I′_n(t). Here, the matrix represents an ideal transformer with turns ratio n_n. Using (13) with (8) shows that this transformer replaces the parameters R̃_n, L̃_n, G̃_n, and C̃_n with those set by x_n = n_n².
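Equations (11) and (12) themselves are elided above; a reconstruction consistent with the stated effective parameters (our notation, to be read as a sketch) is

U_n = (L_U/2)⟨I_n²(t)⟩ + (C_U/2)⟨V_n²(t)⟩, with L_U = lH/2 and C_U = cH/2, (11)

Ũ_n = (L̃_n/2)⟨I_n²(t)⟩ + (C̃_n/2)⟨V_n²(t)⟩ = U_n/w_n, (12)

where the last equality uses L̃_n = L_U/w_n and C̃_n = C_U/w_n, which follow from the x_n = 1 choice above.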
Noting this and using (13) with (12) shows that only substitutions (13) with a_n = 1/√(w_n) may represent energy correctly. Consequently, all others lead to circuits that have some form of the artificial nonreciprocity described above. That is, equivalent circuits formed by constraints other than modeling energy may exhibit some form of the phenomenon outlined above. Conversely, a reciprocal circuit that models energy may be incompatible with other desirable constraints. As (13) shows, this is the case with the specific example, because modeling energy is incompatible with modeling both V_s and I_s together near resonance. We will treat a more common example with solenoids after finishing the specific example below.

Coupling with External Systems

Let us now return to model coupling. Consider an external system that is equivalent to a two-terminal port with well-defined voltage V_p(t) and current I_p(t), such as that sketched in Fig. 3(b), which could be part of a lumped circuit, for example, or a point on another line. To model coupling with this system, we must treat both the forward and reverse directions, that is, to and from the line, respectively. To model many common types of coupling simultaneously, let the distributed operators A, B, C, and D specify the forward coupling as in (14). For the inductive coupling of the specific example, the coupling enters through the function m(x), which describes the coupling between the inductors, such that M_ps = M_sp = ∫₀^H m(x) dx. (15) Likewise, the single operator C = c(x) ∂/∂t describes capacitive coupling. Direct (or wired) coupling at the bottom x = 0 is described by the pair A = D = δ(x), where δ(x) is a Dirac δ function, and at the top x = H by A = −D = −δ(x − H). A direct tap at an interior point may be treated by splitting the line into two separate lines with direct couplings at the shared endpoint. Note that direct couplings may modify the boundary conditions modeled by the Fourier series (3).

We may determine the reverse coupling in terms of the coupling operators in (14) as follows. Note that the lumped sources for the port in Fig. 3(b) are sums over contributions from the entire line (16), where the distributed sources v_ps(x,t) and i_ps(x,t) are distinct from v_sp(x,t) and i_sp(x,t) in (2). Assuming the coupling is lossless, passive, and quasistatic, the forward and reverse powers transferred should balance at all x (17), where T denotes transposition. Using (14), and noting that the coupling operators described above are symmetric for harmonic signals, this gives the reverse coupling (18). The off-diagonal operators, which act as a mutual impedance and admittance, are the same as those of (14), as expected from reciprocity. Additionally, A = ±D for the couplings described above, so the relations (14) and (18) are equivalent under the exchange of source and response, up to a diagonal sign.

Using (14), the forward lumped sources (9) follow for each mode n in the expansion, where the coupling operators for the mode n are given by (20). Likewise, we may write the reverse coupling (16) as a sum over lumped sources from each mode n (21). Using (18) and (20), these lumped sources are V_ps,n(t) and I_ps,n(t) (22). Following (17), we may use (19) to verify that both sets of lumped sources (9) and (22) conserve power locally, which justifies an earlier assertion for the form of (9). The discussion about reciprocity following (18) also applies here. Note that A_n ≠ ±D_n unless both are zero for the couplings considered above. Together, Eqs. (19)-(22) specify how the equivalent circuits (8) sketched in Fig.
3(a) couple to the external port, such as that sketched in Fig. 3(b). In all cases, the two factors of 1/w_n in (8) may be modeled by a directional amplifier (or a nonreciprocal ideal transformer). For inductive and capacitive couplings, this gain can combine with other parameters to simplify the circuit, leading to nonreciprocal mutual inductances or capacitances. For other equivalent circuits, the coupling is given by using (13) with (8) and (19)-(22), and often involves an ideal transformer.

Coupling for the specific example

For the specific example, the set of coupling operators (15) leads to a single nonzero mode operator (20), the mutual inductance M_n. From Fig. 3(b) and (21)-(22), we see that M_n is the reverse mutual inductance for the mode n. However, from Fig. 3(a), (8), and (19), we see that the forward mutual inductance for the mode n is not M_n, but

M̃_n = M_n / w_n. (25)

The ratio (1) follows for n = 1. The analogous ratio of forward-to-reverse mutual inductances for other equivalent circuits generated by (13) is

M_sp,n / M_ps,n = 1/(a_n² w_n). (26)

The exact equivalent circuit for the specific example is sketched in Fig. 5, and is the result of the couplings above stitching together the circuits in Fig. 3. The circuit in Fig. 1(d) is then an approximation that ignores losses (r = g = 0) and the contributions of the modes n ≥ 2. For a spatially uniform current, I(x,t) = I_s(t) with g = c = 0, one may use Σ_{n=1}^∞ (k_n w_n)^(−1) = H to show that the full circuit in Fig. 5 simplifies to the dc circuit in Fig. 1(b), restoring M_ps = M_sp.

Standard equivalent circuits for lines

The approach outlined above differs from those commonly found in textbooks for deriving similar circuits. The main difference is that, to study Drude's prediction, we did not implicitly assume reciprocity. Nevertheless, one can recover many standard equivalent circuits for lines and their microwave analogs [13-15] from the above approach using substitutions (13) with a_n = 1/√(w_n). For convenience, these circuits are sketched in Fig. 6 (cf. Figure 11.12 of Ref. 15). The seemingly unrelated topologies of these various circuits may be understood graphically by noting that they each originate from the circuits of Fig. 3, which stitch together differently depending on the coupling with external systems. A direct bottom coupling with n_n = √(w_n) and a direct top coupling with n_n = 1/√(w_n) simplify to typical Foster-form circuits for the input impedances of short- and open-ended lines (Fig. 6(a) and (b), respectively). (Half-wave Fourier series are more convenient than (3) here.) Simultaneous top and bottom direct couplings reproduce a segment of line, such as a length of coaxial cable, although this circuit is not standard (Fig. 6(c)). One can show that this circuit reproduces a quarter-wave impedance transformer near resonance. Additionally, inductive coupling with n_n = √(w_n) and capacitive coupling with n_n = 1/√(w_n) simplify to circuit forms typical for loop- and probe-coupled microwave cavities (Fig. 6(d) and (e), respectively) [13,28].

Applications

While the phenomenon described above is likely a rare curiosity, it may be present with resonant single-layer solenoids and Tesla transformers, as first predicted by Drude. However, the conventional modeling of both of these systems has changed since 1904. To describe how this phenomenon may still apply in modern equivalent circuits today, both of these applications are discussed in the next two sections.
Before continuing, it is important to note that the phenomenon behind Drude's prediction is not essential to the modeling of coupling with external systems, or of scattering through the line when there is coupling with multiple external systems. For example, the nonreciprocities in Fig. 5 are not required to model the input impedance of the primary inductor in the specific example. (Fig. 5: Exact equivalent circuit for the specific example. The coupling with the primary inductor L_p of Fig. 3(b) stitches together the lumped-element models of Fig. 3(a) into this single circuit. The narrowband circuit in Fig. 1(d) is an approximation of this circuit that ignores losses and the contributions of the modes n ≥ 2, which are nonresonant for frequencies near the fundamental ω_1.) Instead, one may use (13) to remove the nonreciprocities in Fig. 5 and produce Fig. 6(e), but at the cost of modeling different voltages and currents than originally intended. Additionally, note that the scope of the approach above is restricted to systems that behave as uniform transmission lines. Other issues with reciprocity may arise in more complex systems, such as microwave waveguides [29,30].

Single-layer solenoids

To account for stray capacitance, single-layer solenoids and other inductors have been modeled with circuits similar to Fig. 1(d) for over a century [7,31-33]. In the early 20th century, a typical constraint was to set L̃_1 = (2/π) L_s in these circuits, the same as (7) in the specific example [34]. Drude, for example, derived this constraint in an article from 1902 [2], but did not recover it in 1904 [4] because of errors, as shown in Ref. 18. Perhaps this constraint may partly explain why some early texts, such as Hund [32], made a greater allowance for nonreciprocity than is customary today. Since then, however, the standard constraint has been to use the dc self-inductance, L_1 = L_s (or x_1 = n_1² = 2w_1), because this conveniently leads to an empirical "self-capacitance" for a solenoid that is nearly constant over a wide frequency range, after the effects of a capacitive load are included (e.g., following Miller [35]) [36-38]. This constraint does not require energy to be modeled correctly or uniquely determine the circuit, except in the low-frequency limit of a spatially uniform current (i.e., an infinite load). Thus such circuits may require nonreciprocity as described above. For example, the substitution (13) with n_1 = 1/a_1 = √(2w_1) leads to one such circuit that models the current I_s(t) near resonance, for which the ratio (26) of mutual inductances is 1/2. In practice, note that capacitive loads will attenuate or suppress this phenomenon, and again that lines are only approximate models for real solenoids [24].

Tesla transformers

The conventional equivalent circuit for a Tesla transformer contains a circuit with the same form as Fig. 1(d). Today, this circuit by default uses the dc-inductance constraint described above, despite it not being part of Drude's derivation in 1904 [4]. Importantly, this circuit is nearly always assumed both to be reciprocal and to model the base current I_s and output voltage V_s of the secondary solenoid before any spark discharge [8,9]. Were the current spatially uniform in the solenoid, these three constraints would be compatible. Instead, the current is often nonuniform because typically only a weak external capacitive load is present across the solenoid. The conventional circuit is thus usually overconstrained.
Interestingly, this has been observed numerically by enthusiasts, who predict nonunique circuit parameters but did not consider reciprocity [39]. Depending on which of the three constraints are kept, the phenomenon described above may be present. To show what effects this may have, note that the traditional procedure to calculate the maximum possible output voltage follows from conservation of energy and the assumption of a reciprocal mutual inductance, and gives |V_max| = √(2U_in/C_s) for an energy U_in input during operation [40]. Here, C_s is the sum of the empirical self-capacitance of the secondary solenoid and the capacitance of any loads, such as an output electrode. This traditional procedure will be inaccurate for weakly loaded or unloaded Tesla transformers, because the standard dc-inductance constraint misrepresents energy in circuits that model V_s. Instead, the ratio of mutual inductances must be included to give the correct result: |V_max| = √((M_sp,1/M_ps,1)(2U_in/C_s)). For the unloaded case, using (13) with n_1 = a_1 = √(2w_1) leads to such a circuit that models V_s near resonance. Using (26), the ratio M_sp,1/M_ps,1 ≈ 0.81 for this circuit, which produces a correction of about −10% to the traditional estimate of V_max. Note that the same correction results if instead the dc-inductance and reciprocity constraints are kept, because of a misrepresentation of the output voltage. Using (13) with n_1 = √(2w_1) and a_1 = 1/√(w_1) leads to such a circuit, for which V_s ≈ 0.90 V′_1. In practice, increasing the capacitive load will quickly reduce the size of this correction, as the current along the secondary solenoid becomes more uniform. For weak loads, this correction may also be obscured by the nonlinear dependence of C_s on the capacitive load [31,33,35].

Discussion

As illustrated above, Drude's prediction in 1904 that the mutual inductance should be nonreciprocal for an unloaded Tesla transformer is correct. However, this nonreciprocity assumes that the secondary solenoid acts as a transmission line, and it is only present when the current is nonuniform in the solenoid. Even then, it seems that this nonreciprocity will have a relatively small effect, one that may be difficult to measure. Perhaps this is another reason why Drude's prediction is nearly forgotten today.

The phenomenon behind Drude's prediction is a fascinating artifact of modeling distributed transmission lines with lumped equivalent circuits. The resulting nonreciprocity is purely artificial and results only from constraints imposed on equivalent circuits that are incompatible with representing energy correctly. In the specific example, which follows Drude, the incompatible constraint was to model the voltage drop across and the current into a resonant inductor, a choice that at first glance may seem straightforward and reasonable. Even today, this constraint is still used in the equivalent circuits of Tesla transformers (e.g., [8-10]). Therefore, some care is required to check that the constraints imposed or assumptions made about an equivalent circuit are compatible, otherwise this phenomenon may occur.
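A quick numerical check of the quoted correction, as a worked example (w_1 = π/4 as defined above):

import math

w1 = math.pi / 4                 # half the electrical length, fundamental mode
a1 = math.sqrt(2 * w1)           # substitution parameter for the unloaded case
ratio = 1 / (a1**2 * w1)         # Eq. (26): M_sp,1 / M_ps,1 = 1/(a_1^2 w_1)
print(ratio)                     # ~0.81  (= 8/pi^2)
print(math.sqrt(ratio) - 1)      # ~ -0.10, i.e. about -10% on |V_max|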
On the other hand, one may always avoid this phenomenon by constraining a circuit to model energy correctly, as is common today, with the possible cost of breaking other desirable constraints, as shown in the specific example. In summary, distributed systems are not lumped. Modeling transmission lines and analogous systems with lumped equivalent circuits thus creates an opportunity for confusion if the lumped perspective is overemphasized. As Paul Drude predicted in 1904 for Tesla transformers, such systems may require an artificial nonreciprocity to model couplings with other systems when their constraints lead to a misrepresentation of energy, despite all components being reciprocal. This curious, long-overlooked prediction is indeed correct, despite its modern appearance of being a mistake.
Locking Local Oscillator Phase to the Atomic Phase via Weak Measurement

We propose a new method to reduce the frequency noise of a local oscillator (LO) to the level of white phase noise by maintaining (not destroying by projective measurement) the coherence of the ensemble pseudo-spin of atoms over many measurement cycles. This scheme uses weak measurement to monitor the phase in the Ramsey method and repeats the cycle without initialization of the phase; we call it "atomic phase lock (APL)" in this paper. APL will achieve white phase noise as long as the noise accumulated during dead time and the decoherence are smaller than the measurement noise. A numerical simulation confirms that with APL, the Allan deviation averages down at a maximum rate proportional to the inverse of the total measurement time, τ^(−1). In contrast, current atomic clocks that use projection measurement suppress the noise only down to the level of white frequency noise, in which case the Allan deviation scales as τ^(−1/2). Faraday rotation is one possible way to realize weak measurement for APL. We evaluate the strength of Faraday rotation with 171Yb+ ions trapped in a linear rf-trap and discuss the performance of APL. The main source of decoherence is the spontaneous emission induced by the probe beam for the Faraday rotation measurement. One can repeat the Faraday rotation measurement until the decoherence becomes comparable to the SNR of the measurement. We estimate this number of cycles to be ~100 for realistic experimental parameters.

Introduction

Many applications and experiments use the electromagnetic (EM) field to manipulate the quantum state of two-level systems (TLS). Improving the frequency stability of the EM field and the TLS is of great importance because the ability to precisely and coherently control the population and phase of the TLS plays a crucial role in many applications, such as nuclear magnetic resonance (NMR) spectroscopy and imaging, atomic clocks, magnetometers, quantum computers, and quantum simulators. The frequency (or phase) of the EM field is usually referenced to an oscillator called the local oscillator (LO).

A no-feedback approach, known as spin echo, has been developed in NMR to suppress the dephasing error [1]. The spin echo uses the phase of the LO as a reference and averages out the phase error of the TLS by inserting π-pulses. A similar technique can also be applied to isolate non-classical effects, such as entanglement with the noise environment, and is called "dynamical decoupling" [2]. No-feedback approaches are very useful and easy to implement because the phase difference between the LO and the TLS does not need to be monitored. However, they do not suit applications such as atomic clocks and magnetometers, whose goal is to stabilize the LO by using the TLS as a reference. Spin echo also cannot keep the phase error small in the long term. Since we are pursuing the long-term stability of the LO, we chose to use the feedback approach. Different methods are used depending on the kinds of target and reference oscillators and on whether the noise is suppressed to white frequency noise or to white phase noise. A summary of various feedback methods is shown in Table 1.

Table 1. Various ways to match the frequencies of oscillators.

Target | Reference | White frequency noise (σ_y ∝ τ^(−1/2)) | White phase noise (σ_y ∝ τ^(−1))
Laser  | Laser (LO) | Transfer cavity                         | Laser phase lock
TLS    | LO         | NMR lock [3]                            | Coherent magnetometry [4]
LO     | TLS        | (Conventional) atomic clock             | "Atomic phase lock"
τ is the total measurement time, and σ_y is the Allan deviation of the normalized frequency (the deviation of the target frequency from the reference frequency), explained in Section 3. When both the target and reference oscillators are lasers, white frequency noise is often achieved by locking the target laser to the reference laser by matching the frequencies of the resonant peaks. For example, the length of a transfer cavity is locked to the reference laser frequency, and the target laser frequency is then locked to one of the transfer cavity's resonances. The NMR lock looks at the NMR signal of the deuterated solvents, and the magnetic field is feedback-controlled to keep the atomic spin resonance frequency constant [3]. An atomic clock is a typical example of an LO frequency being locked to the atomic spin resonance frequency. In both cases, the noise is suppressed to white frequency noise, and σ_y is reduced at a rate of τ^(−1/2), as derived in Section 2.1.

Conventional atomic clocks use the Ramsey method with projection measurement. Although this Ramsey method compares the phases of the LO and the TLS, it cannot achieve white phase noise over many measurement cycles because the phase of the atomic spin is destroyed and initialized at each cycle due to the projection measurement. As these measurement cycles are repeated, the measurement noise accumulates at each cycle and the phase uncertainty grows as δ_φ ∝ τ^(1/2). As a result, the frequency stability decreases at a rate of σ_y ∝ τ^(−1/2), which is characteristic of white frequency noise. To achieve white phase noise, we need to monitor the phase of the atomic spin over many cycles without destroying it.

We propose an experimental method called "atomic phase lock (APL)" to achieve white phase noise. This method combines weak measurements [5,6] with the Ramsey method to monitor the phase difference while having the least effect on the coherence of the spin over many cycles. With APL, σ_y can be reduced at a faster rate, up to τ^(−1), as the noise is suppressed to white phase noise. This σ_y ∝ τ^(−1) is achieved when the phase of the target oscillator is locked to the phase of the reference oscillator. Although white phase noise is routinely observed when locking a target laser phase to a reference laser phase, feedback control of the phase matching between a TLS and an LO has not been achieved. This is because the TLS is a passive oscillator, and monitoring the phase of the TLS is difficult without affecting (destroying) the phase itself.

This paper is organized as follows. In Section 2, atomic clocks that run with projection measurement are reviewed, and APL is introduced. In Section 3, we estimate the Allan variance for atomic clocks with projection measurement and with APL. In Section 4, we propose a 171Yb+ ion trap with a microwave transition as the clock transition for a proof-of-principle demonstration; we then estimate the signal strength of the Faraday rotation and calculate the decoherence rate to estimate how many cycles of Faraday rotation measurement can be performed. In Section 5, we compare the stability of the different types of atomic clocks using numerical simulation. In Section 6, we discuss the systematic shifts of the 171Yb+ microwave clock, a comparison between APL and the spin-squeezed Ramsey clock, and the extension of APL to optical clock transitions.

Review of Ramsey method with projection noise

The phase of the spin (relative to the LO phase) cannot be measured directly.
The Ramsey double-pulse method, in essence, measures the spin phase by mapping the phase information onto the population ratio. Projection measurement of the population ratio then gives us the phase. Note that the Ramsey method is often interpreted as a measurement of the average frequency shift during the time between the double pulses. We instead view the Ramsey method as a measurement of the phase accumulated during the time between the two pulses, because viewing it as a phase measurement is important for understanding the advantage of our proposed APL. If the Ramsey signal is expressed as a function of frequency, φ = ∫ ∆ω(t) dt can be used to change the dependence to φ.

Conventionally, the phase information represented by the population ratio has been measured via projection measurement. The Ramsey method with projection measurement is called "projection Ramsey" in this paper, to clarify the difference from APL, whose signal detection is via weak measurement. In this section, we review the projection Ramsey method using the Bloch representation [7]. The dynamics of a two-level system interacting with an EM field are the same as the dynamics of a spin-1/2 particle in a magnetic field [8], and we will use only this "spin" picture in the rest of this paper. The Bloch representation is useful for discussing the time evolution of the spin interacting with the field. For a more detailed explanation of the Bloch sphere in the context of atomic clocks, see, for example, Ref. [9].

The projection Ramsey sequence is shown in Figure 1 (a step-by-step description of the Ramsey method with Bloch sphere pictures; the red arrow is the spin vector, and the blue arrow is the torque vector that rotates the spin; the angle in degrees is the phase of the EM field, and the angle in radians is the rotation angle of the spin by the EM field; T_C is the cycle time). The Cartesian coordinates of the Bloch sphere are designated as the u, v, and w axes. We limit our argument to the case where the phase of the spin is coherent (a pure state). The ground and excited states correspond to the south and north poles, respectively, and the phase of the spin corresponds to longitude. We use the frame that rotates at ω_LO and the rotating-wave approximation. We then consider the following two types of rotation. A spin is rotated around a torque vector that lies in the u-v plane when the resonant EM field is applied. A spin is also rotated along the longitudinal direction at a rate given by ∆ω, the frequency difference of the LO and the atoms. For clarity, we express the phase of the EM field in degrees and the rotation angle in radians. The Ramsey measurement proceeds as follows. (1) Repump all the atoms to |g⟩, which corresponds to aligning the spin to the coherent spin state pointing along −w. (2) Apply the strong resonant EM field to rotate the spin around −v by π/2 so that it points along u. We define this phase of the EM field (LO) as 0° and assume the power is strong enough that the time taken for this rotation is negligible. (3) Wait for the free precession time T_FP; the frequency difference between the LO and the atoms rotates the spin around w. The angle φ between u and the spin corresponds to the phase difference of the LO and the atomic spin accumulated during T_FP. (4) Apply the second EM field pulse with a 90° phase shift to rotate the spin around u by π/2. Now the angle φ is represented by the value along w, which is the population ratio of the superposition state. Again, we assume that a negligible amount of time is taken for this step.
(5) Projection measurement of the signal Q gives the measure of the population ratio between |g⟩ and |e⟩. The mean detected signal Q is shown in Figure 2. The signal Q(φ) is a sinusoidal function of the phase φ and is given by

Q(φ) = (Q_max/2)(1 + sin φ). (1)

The passive atomic clock uses the slope near φ = 0 in Figure 2 as an error signal to lock the frequency. The projection measurement in step (5) is normally done with electron shelving, first proposed by Dehmelt [10,11]. Applying a laser beam appropriately polarized and tuned to a transition will scatter many photons if the atom is in one of the two states (a "cycling" transition) but will scatter no photons if the atom is in the other state. The population ratio can be measured by counting the number of bright ions. The value of the population ratio has a fundamental fluctuation, the quantum projection noise (QPN) [12]. QPN originates from the randomness in quantum projection and follows the binomial distribution (for the mathematical expression, see Eq. (B.5)). This QPN is then reflected in the measurement of φ through Eq. (1). We shift the phase of the EM field by 90° to obtain the greatest sensitivity (largest slope) around φ = 0. Applying two EM field pulses with the same phase is common, and this minor modification of the phase will not affect the performance of the atomic clock.

The atomic clock's performance is normally improved by extending the free precession time (T_FP) of the Ramsey method, but T_FP is limited by the stability of the LO. In other words, T_FP can be lengthened only as long as the phase difference accumulated during T_FP is guaranteed to be within ±π/2. In the aim for a longer T_FP, the stability of the laser (LO) has been improved by locking the laser frequency to a high-finesse cavity with ultra-low-expansion (ULE) glass as a spacer. However, the thermal noise of the mirror coating [13] is a hard limit to break through with present technology, and improving T_FP by an order of magnitude would be difficult. In such a situation, APL could provide an alternate path to break through this limit and achieve a longer T_FP, because APL is equivalent to lengthening T_FP as long as the atomic coherence is maintained.

Atomic phase lock

The APL method we propose is structured as follows (Figure 3: step-by-step description of APL with Bloch sphere pictures). The differences from the projection Ramsey method are that the projection measurement is replaced by weak measurement and that, after the weak measurement, the spin is rotated back to where it was before step (4), and the process returns to step (3) without initialization. The point of APL is to monitor the phase without destroying the coherence of the spin, so that the measurement noise does not accumulate after many cycles. A dispersion measurement serves as a weak measurement, and there are two ways to perform it: Mach-Zehnder interferometry [14] and Faraday rotation [15]. In Section 4, we discuss Faraday rotation on the spins of trapped ions.

Allan deviation and stability of atomic clocks

In this section, we estimate the frequency stability of atomic clocks. For this, we use the Allan deviation, a widely used measure for evaluating the frequency noise of the LO, which is in general non-stationary and correlated. We first define terms and symbols. The fractional frequency deviation is defined as

y ≡ (ν − ν₀)/ν₀, (2)

where ν₀ is the clock frequency of the atoms, assumed to be constant.
The Allan deviation of the fractional frequency, σ_y(τ), is defined as

σ_y²(τ) = (1/2)⟨(ȳ_{i+1} − ȳ_i)²⟩, (3)

where ȳ means the time average of y over time τ. The Allan deviation is the deviation from the previous data point rather than from the mean.

Now we estimate the Allan deviation of an atomic clock with the Ramsey method, using Figure 2. We follow the basic argument in Ref. [9], with a modification to view the Ramsey method as a phase measurement. When there is a measurement uncertainty σ_Q, the corresponding uncertainty in φ, σ_φ, is linked to it by the slope of the spectrum:

σ_φ = σ_Q / (dQ/dφ)|_{φ=0}. (4)

From Eq. (1), the slope at φ = 0 is Q_max/2 (5), and Eq. (4) becomes

σ_φ = 2σ_Q/Q_max = 2/SNR, (6)

where SNR ≡ Q_max/σ_Q is the signal-to-noise ratio of a single measurement. We define φ(τ) as the total phase difference between the LO and the spin after time τ. In contrast, ∆φ represents the phase difference accumulated during the free precession time of the projection Ramsey method. The Allan deviation σ_y(τ) (τ is the total measurement time) is linked to the RMS deviation δ_φ(τ) ≡ √(⟨[φ(τ) − ⟨φ(τ)⟩]²⟩) as

σ_y(τ) = δ_φ(τ)/(2πν₀τ). (7)

The τ dependence of δ_φ(τ) is characterized by the type of noise (i.e., white frequency noise, white phase noise, etc.). For example, the repetition of the projection Ramsey measurement and feedback results in random walk phase noise, because the spin phase is randomized by decoherence due to the projection measurement. Random walk phase noise is also called white frequency noise, and its deviation grows as δ_φ(τ) ∝ √τ. This growth of the noise is due to the "resetting" (projection and initialization) of the spin phase at each Ramsey cycle. In this case, φ(τ) can be estimated as a sum of the phase measurements over the cycles,

φ(τ) = Σ_{i=1}^{N_c} ∆φ_i, (8)

where N_c is the number of cycles, N_c ∼ τ/T_c, and T_c is the time taken for a single Ramsey measurement cycle. Since the ∆φ_i are uncorrelated due to the resetting of the phase at each cycle, the ensemble average of the deviation δ_φ(τ) is expressed using Eq. (6) as

δ_φ(τ) = σ_φ √(N_c) = (2/SNR) √(τ/T_c). (9)

Here we consider only the case where τ > T_C, and we used ⟨φ(τ)⟩ = 0 because the LO frequency does not drift when it is stabilized to an atom. This growth of the deviation is analogous to measuring a 400 m race track with a 10 cm stick: every time 10 cm is measured and added to the total length, noise is added due to the finite width of the marker line. Finally, we get the Allan deviation of the atomic clock with the Ramsey cycle as

σ_y(τ) = (∆ν/(πν₀ SNR)) √(T_c/τ), (10)

where ∆ν ≡ 1/T_FP ∼ 1/T_c. Equation (10) is the same as in Ref. [9], and we call this the "white frequency noise limit." When the SNR is limited by quantum projection noise, SNR_QPN ∝ √(N_a) (N_a is the number of atoms), Eq. (10) is sometimes called the "QPN limit." From Eq. (10), we see that σ_y(τ) decreases as 1/√(T_C). Thus it is better to use a longer free precession time, but normally the length of T_FP is limited by the noise of the LO. In other words, when T_FP is too long, the phase difference after T_FP becomes larger than ±π/2, and whether the signal is, for example, π/4 or 3π/4 cannot be distinguished (see Figure 2). Therefore, in the regular Ramsey sequence, T_FP is kept sufficiently small that the phase signal is guaranteed to be within ±π/2. APL aims to overcome this limitation due to the noise of the LO by use of the weak measurement. In essence, APL "peeks" at the phase with the least amount of decoherence, and feedback keeps the phase within ±π/2.
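For reference, a minimal Allan-deviation estimator matching the definition above (the standard non-overlapping estimator; a sketch, not the authors' code):

import numpy as np

def allan_deviation(y, dt, tau):
    # Non-overlapping Allan deviation of fractional-frequency samples y
    # (sample spacing dt) at averaging time tau, following Eq. (3).
    m = int(round(tau / dt))                      # samples per window
    n = len(y) // m
    ybar = np.reshape(y[:n * m], (n, m)).mean(axis=1)  # window averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))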
As long as the coherence of the spin is sufficiently maintained, APL achieves a single Ramsey measurement with a long T_c that is equal to the total measurement time τ. As a result, T_c = τ is achieved, and Eq. (10) becomes

σ_y(τ) = (1/(πν₀ SNR)) (1/τ). (11)

We call this ideal case of APL the "white phase noise limit" in this paper.

4. Possible experimental setup to achieve atomic phase lock

Experimental setup

We first specify an experimental setup for a quantitative discussion. We chose a microwave atomic clock, using 171Yb+ ions trapped in a linear rf-trap, as a possible proof-of-principle experiment. We will see the white phase noise limit until decoherence degrades the performance of APL. We chose an ion trap for its long trapping time, and chose the magnetic transition of the ground-state hyperfine splitting as the clock transition for its long lifetime and long coherence time. For the weak measurement, we focus on Faraday rotation because it is somewhat easier to set up in an experiment. For a simpler design and implementation of the experiment, we chose ytterbium-171 ions, which have the smallest nuclear spin, 1/2. An energy diagram of the Faraday rotation experimental setup is shown in Figure 4(a). We assign 2S1/2 (F = 0, m_F = 0) as the spin-down state |↓⟩ and 2S1/2 (F = 1, m_F = 0) as the spin-up state |↑⟩, and we use the transition between these two states as the clock transition. We assume that one of the clock levels, |↑⟩, has two dipole-allowed optical transitions |↑⟩ ↔ |e±⟩. This structure is common among atomic clock species, including those in the optical frequency domain. Figure 4(b) shows the setup for the Faraday rotation measurement. The probe beam is linearly polarized and off-resonant, with detunings ∆± for the |↑⟩ ↔ |e±⟩ transitions, respectively. The probe beam uses the 2S1/2 ↔ 2P1/2 transition, whose wavelength is λ = 369.5 nm, resonant atomic scattering cross-section σ₀ = 3λ₀²/(2π) = 6.5 × 10⁻¹⁴ m², and natural linewidth Γ = 20 MHz [16]. In the following calculations, we assume that the cross-section of the atom distribution in the probe beam is A_p = π(200 µm)² = 1.3 × 10⁻⁷ m² and that the atom number is N_a = 10⁶. For these parameters, the on-resonance optical depth is OD ≡ N_a σ₀/A_p = 0.52. We set the Zeeman splitting δ to δ = Γ; this detuning corresponds to applying 14.3 Gauss to the trapped ions. We aim to measure the spin phase φ, which at step (5) of APL is related to w as w = sin φ, with

N↑ ≡ (N_a/2)(w + 1)

the number of atoms with spin up. We need to be careful with the concept of N↑: each atom is in a superposition state, not an eigenstate, so N↑ means "the expected number of atoms with spin up if they were projected" and has an uncertainty given by the quantum projection noise.

Faraday rotation with atoms

We estimate the Faraday rotation angle, generally following the notation of Ref. [17] in this section. The phase shifts ϕ± of the two circularly polarized modes (σ±) caused by a |↑⟩ atom can be written by modeling the resonance with a Lorentzian spectrum. When ϕ+ and ϕ− are different, the polarization plane of a linearly polarized wave is rotated; this effect is known as Faraday rotation. The evolution operator of the Faraday rotation is expressed as a combination of the phase shift operators (see Appendix A), where we define β and γ_z ≡ −(ϕ+ − ϕ−). Since a common phase shift of the σ± modes does not affect the rotation angle, we neglect the factor between U′_FR and U_FR and use Eq. (16) as the evolution operator of the Faraday rotation in the following.
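A quick numerical check of the quoted optical depth, together with the repetition-budget relation used later (the helper name and the SNR value are ours; the physical numbers are those quoted in the text):

import math

wavelength = 369.5e-9
sigma0 = 3 * wavelength**2 / (2 * math.pi)   # resonant cross-section, ~6.5e-14 m^2
A_p = math.pi * (200e-6) ** 2                # probe-beam cross-section, ~1.3e-7 m^2
N_a = 1e6
print(N_a * sigma0 / A_p)                    # on-resonance OD ~ 0.52

def n_rep(snr, eps, n_ph):
    # repetitions allowed before decoherence reaches 1/SNR:
    # P_total = eps * N_ph * N_rep <= 1/SNR
    return 1.0 / (snr * eps * n_ph)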
For convenience in the following discussions, we introduce a general rotation operator of the polarization plane, R(θ), where θ is equal to the rotation angle of the linear polarization plane (see Appendix A). In this notation, U_FR = R(−γ_z/2), and the rotation angle of U_FR is −γ_z/2.

Signal of polarimeter

The rotation angle is measured using a polarimeter that outputs the photon-number difference between two orthogonal linearly polarized modes. When we denote these modes h and v, the output of the polarimeter becomes

Q = ηe(n̂_h − n̂_v),

where η is the quantum efficiency of the detector and e is the elementary charge. Since the photons are randomly branched at the polarizing beam splitter, we need a finite photon number to distinguish in which direction and by what angle the polarization is rotated. To obtain the change from w = 0, the polarization angle of the incident pulse is set to π/4 + βN_a/2 from the h-axis, i.e., the input state is |ψ_in⟩ = R(π/4 + βN_a/2)|α⟩_h, where |α⟩_h is a coherent state of the h-mode with a complex amplitude α. Note that |α|² is the mean number of photons in a probe beam pulse. The pulse after the transmission becomes |ψ_out⟩ = R(π/4 − βN_a w/2)|α⟩_h, with β the rotation angle per spin-up atom. The mean value, the mean square value, and the variance of the error signal become

⟨Q(w)⟩ = ηe|α|² sin(βN_a w),
⟨Q²(w)⟩ = η²e²|α|² (1 + |α|² sin²(βN_a w)),
∆Q² = η²e²|α|².   (25)

Since the average of the error signal is proportional to the w component for βN_a w ≪ 1, Q(w) can be used as an error signal. Eq. (25) is known as the shot noise. Note that we neglect the quantum projection noise of the collective spin system in Eq. (25); if the spin noise were of concern, w should be treated as a q-number and the variance would become ∆Q² = η²e²|α|²(1 + β²N_a²|α|²⟨∆w²⟩). If the photon number is large enough to satisfy ⟨Q(w)⟩² > ∆Q², we can recognize that the imbalance from Q = 0 is caused by the atomic dephasing w rather than by the shot noise. For example, |α|² > 1/sin²(0.1 βN_a) is required for a sensitivity of w = 0.1 (see Figure 5). We define the photon number N_ph(SNR) as the number of photons that satisfies ⟨Q(1/SNR)⟩² = ∆Q², so

N_ph(SNR) ≡ 1/sin²(βN_a/SNR).

Decoherence

There are three types of decoherence mechanism: spontaneous emission, light shift due to the probe beam, and backaction. The dominant decoherence is due to spontaneous emission and is discussed here; decoherence due to light shift and backaction is discussed in Appendix B. For a linearly polarized pulse with a complex amplitude α, n± = |α|²/2. The total absorption probability by both circularly polarized modes for an atom due to a pulse illumination is

P_a = ε|α|²,   (28)

where we set

ε ≡ (σ₀/(2A_p)) Σ± 1/(1 + (2∆±/Γ)²).

For ∆ = 0, ε = 1.0 × 10⁻⁷. Once an atom absorbs a photon, it spontaneously emits a photon and is projected onto |↑⟩. This is the main source of decoherence in our experimental setup. In APL, we repeat the measurement with N_ph(SNR) photons for N_rep times. The total probability of decoherence is given by the total number of photons, as in Eq. (28), and we obtain P_a^total(SNR) = εN_ph(SNR)N_rep. Decoherence can safely be permitted up to 1/SNR, so N_rep is given by

N_rep = 1/(SNR · ε · N_ph(SNR)).

N_rep as a function of ∆ is shown in Figure 6. We can see that N_rep is maximum at ∆ = 0, where it has the value 1.1 × 10² for SNR = 10. This means that we can repeat APL for about 100 cycles.
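A short sketch of the photon budget implied by these expressions. The value of ε at ∆ = 0 is the one quoted above; β, the rotation angle per spin-up atom, is not stated explicitly in the text, so the value used here (1 × 10⁻⁷ rad, of order σ₀/2A_p) is an assumption chosen for illustration:

```python
import math

N_a  = 1e6      # ions
eps  = 1.0e-7   # absorption probability per probe photon at Delta = 0 (from text)
beta = 1.0e-7   # rad of polarization rotation per spin-up atom -- assumed value

def n_photons(snr):
    """Photons per weak measurement so the w = 1/SNR signal beats shot noise."""
    return 1.0 / math.sin(beta * N_a / snr) ** 2

def n_repetitions(snr):
    """Repetitions allowed while the total decoherence stays below 1/SNR."""
    return 1.0 / (snr * eps * n_photons(snr))

snr = 10
print(f"N_ph  = {n_photons(snr):.3g}")      # ~1.0e4 photons per pulse
print(f"N_rep = {n_repetitions(snr):.3g}")  # ~1.0e2, cf. 1.1e2 in the text
```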
Numerical simulation

To confirm that the Allan deviation is reduced at the rate τ⁻¹, we ran a simple numerical simulation. We first generated free-run LO noise of the flicker-frequency-noise type, with a noise level of σ_y^freerun(τ) = 1 × 10⁻¹² to model the noise of a crystal oscillator. The feedback schematics are shown in Figure 4(b). For the projection Ramsey method, we applied a proportional feedback scheme in which the normalized frequency is corrected at the end of each cycle by the phase accumulated during that cycle,

y_projection^(i) = y^(i−1) − G_P ∆φ^(i)/(2πν₀T_c),

where y_projection^(i) is the updated normalized frequency at the end of the i-th cycle, G_P is the "proportional" gain, and ∆φ^(i) is the phase accumulated during the i-th cycle. We modeled the effect of the projection measurement by resetting the phase at every cycle. For APL, the feedback is applied to the total phase,

y_APL^(i) = y^(i−1) − G_P φ^(i)/(2πν₀T_c),

where φ^(i) is the (total) phase measured via the weak measurement. Note that this φ^(i) cannot be obtained unless the phase is maintained over the i cycles. The measurement noise corresponding to σ_Q at the given SNR is included in ∆φ or φ. We assumed that the dead time was zero for both projection Ramsey and APL. For APL, we included the effect of decoherence by resetting the phase every 100 cycles. The results of the numerical simulation are shown in Figure 7. The graph shows that the Allan deviation averages down at a rate ∝ τ^(−1/2) for a feedback loop using the projection Ramsey method. For APL, the stability decreases as τ^(−1) from τ = 0.5 to 50 s. The phase is reset every 50 s (100 measurement cycles), and thus the stability decreases as τ^(−1/2) beyond 50 s, the same slope as that of the projection Ramsey method. SNR = 10 was chosen to minimize the decoherence rate during APL. For comparison, the SNR of the projection Ramsey method was also set to 10, representing a case in which the SNR is limited by technical noise [12] at the 10% level. The simulation parameters were: step time width dt = 50 ms, carrier frequency ν₀ = 12.6 GHz, SNR = 10, cycle time T_c = T_FP = 500 ms, and proportional gain G_P = 1.

Discussion

For APL to achieve the white phase noise limit, we need to trap a large number of ions in the optical path and also keep the dead time sufficiently small compared to T_FP. Dead time is the time taken by steps (2), (4), (5), and (6) in Figure 3. For the number of atoms, we chose N_a = 10⁶ to have a decent OD. If we try to confine the ions in A_p = 1.3 × 10⁻⁷ m² and assume an inter-ion distance of 30 µm, the trap length needs to be at least 20 cm to hold 10⁶ ions. We can avoid building such a long trap by forming a bad cavity of finesse ∼100 along the probe beam, around the ions, which reduces the required number of ions to 10⁴ and the size of the trap to 2 mm. As for the dead time, its effect in the case with projection noise has been thoroughly investigated by Dick [18]. For APL, we need to at least keep the dead time low enough not to be limited by the so-called Dick limit. In other words, APL will work only as long as the noise accumulated during the dead time is negligible. A quantitative discussion would require careful evaluation and simulation, but this is beyond the scope of this paper. We have discussed the stability of APL based on a simulation assuming that the atomic clock frequency does not change over time. When the atomic clock frequency is unstable, the stability of the atomic clock will be limited by the stability of the atomic clock frequency itself. The systematic uncertainty of 171Yb+ ions cooled by helium buffer gas is reported to be 6 mHz [19]. Even with this rather high systematic uncertainty, we can still expect the Allan deviation to decrease to σ_y = 5 × 10⁻¹³ or lower without further cooling. The experimental setup for APL coincides with that of spin squeezing. Wineland proposed using the spin-squeezed state to overcome quantum projection noise [20,21], and spin squeezing for microwave clock transitions has been demonstrated [22,23].
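The following is a stripped-down Python version of the feedback loop described above, not the authors' code: for brevity the free-running LO noise is omitted (only the measurement noise σ_φ = 1/SNR is kept), which is already enough to reproduce the two slopes discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
nu0, Tc, G_P = 12.6e9, 0.5, 1.0        # carrier [Hz], cycle time [s], gain
n_cycles, snr = 20_000, 10
sigma_phi = 1.0 / snr                  # phase read-out noise per measurement [rad]

def run(apl, reset_every=100):
    y_err, phi, ys = 0.0, 0.0, []      # LO fractional-frequency error, spin phase
    for i in range(n_cycles):
        phi += 2 * np.pi * nu0 * y_err * Tc           # phase gained this cycle
        meas = phi + sigma_phi * rng.standard_normal()
        y_err -= G_P * meas / (2 * np.pi * nu0 * Tc)  # proportional correction
        ys.append(y_err)
        if not apl or (i + 1) % reset_every == 0:
            phi = 0.0     # projection measurement (or decoherence) resets phase
    return np.asarray(ys)

y_proj, y_apl = run(apl=False), run(apl=True)
# Feeding these into an Allan-deviation routine (e.g., the earlier sketch)
# shows ~tau^(-1/2) averaging for the projection Ramsey loop and ~tau^(-1)
# for APL up to the 100-cycle reset, as in Figure 7.
```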
The comparison between APL and spin squeezing is an interesting topic and will be published elsewhere. Here, we briefly note that both use weak (dispersive) measurement, but squeezing aims to improve the SNR of a single measurement, while APL starts from a lower SNR and aims to improve the stability over many cycles by preserving the phase longer, thereby avoiding the QPN limit in the long term. In a similar spirit, the optical active clock has been proposed to improve the (short-term) stability of the atomic clock [24,25]. However, the design of the optical active clock involves an optical cavity, so its long-term stability is again limited by the thermal noise of the mirror coating. Note that APL primarily contributes to improving the stability, not directly the accuracy, of the atomic clock. However, improved stability would make the experimental estimation of the accuracy much faster and thus indirectly contribute to improving the accuracy. Once APL is demonstrated with a microwave clock, the same principle should be applicable to neutral-atom clocks trapped in an optical lattice. We propose using rf-ion traps for a proof-of-principle demonstration with a microwave clock, but there will be difficulties in applying APL to such traps at optical frequencies. First, buffer gas cooling will no longer be sufficient for the ions to stay within the Lamb-Dicke limit, because the wavelength is much shorter. Second, the AC Stark shift due to the trap rf-field will shift the optical clock frequency. An optical lattice clock is a candidate for avoiding these two problems. The time scale for applying APL will then be shorter (up to ∼1 s), because the trapping time of an optical lattice clock (∼1 s) is short compared to that of an rf-ion trap (∼days).

Appendix A. Rotation of the polarization plane

Since the photon annihilation operator corresponds to the amplitude of the EM field in classical electrodynamics, it can be said that the phase shift operator exp(iϕ±n̂±) gives the phase shift ϕ± to the σ± mode. An evolution operator that gives a rotation of the polarization plane takes the form of Eq. (19). This can be seen as follows. We introduce two linear polarization modes h and v, which are related to the σ± modes (with one common sign convention) as

â± = (â_h ∓ iâ_v)/√2.   (A.1)

After the unitary conversion with the rotation operator R(θ), the photon-annihilation operators â_{h,v} become

R†(θ) â_h R(θ) = â_h cos θ − â_v sin θ,
R†(θ) â_v R(θ) = â_h sin θ + â_v cos θ,

where Eq. (A.1) is used. Thus, it can be said that the rotation angle of the polarization plane is θ.

Appendix B. Light shift and backaction

The probe beam affects the spin state in two ways: through the light shift (AC Zeeman shift) and through the imbalance fluctuation of the right and left circularly polarized components. We call the latter effect "backaction." Both effects rotate the spin about the longitudinal axis in the u–v plane, but the difference is that the light shift rotates the spin direction at a constant rate, while backaction rotates it randomly, resulting in an increase of the deviation along the longitudinal direction. The light shift is zero for the case ∆ = 0 because the contributions from 2P_1/2 (F = 1, m_F = ±1) cancel each other. In the following, we show that the effect of backaction is also negligible compared to the decoherence due to absorption, which has already been discussed. In this discussion we consider ⟨û⟩ = ⟨û²⟩ = 1, and the minimum-uncertainty state with ⟨∆v̂²⟩ = ⟨∆ŵ²⟩ = 1/N_a corresponds to the quantum projection noise limit. After a probe pulse illumination, the atomic state operator ŵ evolves to ŵ′; in the second term on the right side of Eq. (B.14), the factor 4β²|α|² represents the backaction noise.
For our experimental parameters, the variance along the longitudinal direction increases by 4β²|α|² = 3.7 × 10⁻¹⁰ per measurement at SNR = 10. The backaction is therefore negligible under the condition 4β²|α|²N_a ≪ 1, which is satisfied as long as the sensitivity of the Faraday rotation is well below the quantum projection noise limit.
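As a quick numerical check of this criterion (β and |α|² are the assumed values used earlier, since the text does not list them explicitly):

```python
# Criterion from Appendix B: backaction is negligible if 4*beta^2*|alpha|^2*N_a << 1.
beta = 1.0e-7        # rotation per spin-up atom [rad] -- assumed, as before
alpha_sq = 1.0e4     # photons per weak-measurement pulse, ~N_ph(SNR = 10)
N_a = 1e6            # number of ions

added_var = 4 * beta**2 * alpha_sq                   # per measurement
print(f"4 b^2|a|^2      = {added_var:.1e}")          # ~4e-10, cf. 3.7e-10 above
print(f"4 b^2|a|^2 N_a  = {added_var * N_a:.1e}")    # ~4e-4 << 1: negligible
```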
8,094.8
2011-07-26T00:00:00.000
[ "Physics" ]
Association rule mining algorithm based on Spark for pesticide transaction data analyses

With the development of smart agriculture, the data accumulated in the field of pesticide regulation has reached a considerable scale. The pesticide transaction data collected by the Pesticide National Data Center alone produces more than 10 million records daily. However, due to backward technical means, the existing pesticide supervision data lack deep mining and usage. The Apriori algorithm is one of the classic algorithms in association rule mining, but it needs to traverse the transaction database multiple times, which causes an extra IO burden. Spark is an emerging big-data parallel computing framework with advantages such as in-memory computing and resilient distributed datasets; compared with the Hadoop MapReduce computing framework, its IO performance is greatly improved. Therefore, this paper proposes an improved Apriori algorithm based on the Spark framework, ICAMA. The MapReduce process is used to count the support of candidate sets and then to generate the frequent itemsets. Experimental comparison shows that when the data volume exceeds 250 MB, the performance of the Spark-based Apriori algorithm is 20% higher than that of the traditional Hadoop-based Apriori algorithm, and the performance improvement becomes more pronounced as the data volume increases.

Introduction

Smart agriculture is a modern agricultural mode supported by Internet of Things technology and data science; it is the outcome of combining information technology with agriculture. Compared with precision agriculture, smart agriculture focuses on how to make the most efficient use of various agricultural resources and minimize agricultural energy consumption, covering smart production, smart circulation, smart sale, smart community and smart management. As of 2018, China's digital economy [1] ranks second in the world, and Chinese agriculture has entered the age of digitalization. China has gradually achieved information perception, quantitative decision-making, intelligent control and personalized service across the whole process of agricultural production. In this context, agricultural operators have a huge demand for agriculture-related information services in order to achieve the precise investment of agricultural inputs [2]. Pesticide is an important agricultural input. China's pesticide application ranks first in the world, far above the world average, posing a serious threat to wildlife, soil and water resources [3]. At present, the agricultural supervision data of China has accumulated to a considerable scale, and the pesticide transaction data collected by the Pesticide National Data Center alone produces more than 10 million records daily. In order to solve the problem of pesticide abuse, it is urgent to mine the hidden relationships in the pesticide circulation data, so as to provide data support for the supervision, management and healthy development of the pesticide industry.
Spark is a parallel computing framework based on in-memory cluster computing, with up to a hundred times better performance than the popular Hadoop MapReduce framework. It ensures real-time data processing in a big-data environment with high fault tolerance and high scalability, and is therefore commonly used for analyzing massive data [4]. Spark's in-memory computing is based on a new distributed memory abstraction, the resilient distributed dataset (RDD). Spark has many built-in operations that convert one RDD into another, and in-memory computation consists of such series of RDD operations. In particular, the RDD persistence operation can cache an RDD in the memory of the working nodes [5], so that subsequent operations that reuse the data can read it directly from memory. This is another factor behind Spark's computing speed. In addition, Spark's fault-tolerance approach is very different from that of Hadoop, which achieves fault tolerance through multiple copies of the data. Spark does not need to back up data: it records the series of operations performed on each RDD and constructs a directed acyclic graph (DAG). If the data are corrupted or lost, they are recomputed according to the DAG.

Spark was originally developed as a cluster-computing framework by the University of California, Berkeley in 2009 and became open source the following year. At that time there was a lack of data mining frameworks, and most of the frameworks available were insufficiently optimized. The Apriori algorithm, proposed by Agrawal in 1993, is mainly used for association analysis. The algorithm obtains frequent itemsets by generating candidate itemsets and testing the downward closure lemma [6]. Several modifications have been proposed to develop the Apriori algorithm. Lin et al. [7] proposed an improved Apriori algorithm based on array vectors, which reduces the number of joins and unnecessary traversals and improves memory utilization. Similarly, an improved Apriori algorithm based on a vector matrix was proposed by Cao et al. [8]. Zhao et al. [9] used an orthogonal linked list to improve the storage of the Apriori algorithm; their algorithm simplifies the scanning and pruning processes, thus simplifying the generation of frequent itemsets and improving the time efficiency of Apriori.

TF-IDF is another algorithm relevant to association rule mining. It is used for feature extraction from text based on the vector space model, calculating the weight of each feature item in a text to extract the keywords and core content of an article. In addition, TF-IDF can be used for dimension reduction of features in text preprocessing [10]. Based on the existing research, this paper proposes an optimized Apriori algorithm and implements its parallelization on Spark. From experiments on datasets of millions of orders and analysis of the algorithm's idea and performance, we find that there are three deficiencies in the Apriori algorithm. (1) The method of filtering out non-frequent itemsets when generating L_{k+1} needs further improvement.
(2) There is room to simplify the excessive joins between itemsets during the L_k join process. (3) The Apriori algorithm accesses many redundant data items and transactions when traversing the database. In view of these weaknesses of the Apriori algorithm, the corresponding improvements and a more efficient algorithm, ICAMA, are proposed in this paper. The ICAMA algorithm uses the MapReduce idea to improve the two stages of Apriori. In the first stage, the data structure is changed while the data set to be processed is read from HDFS, the frequent 1-itemsets are filtered, and the resulting data set is stored in RDD form in the memory of each cluster node. In the second stage, frequent k-itemsets are generated directly from the frequent (k−1)-itemsets, and this process is repeated until no more frequent itemsets are generated [11]. Both stages involve data transformation and counting. A comparison experiment between the ICAMA algorithm and the MapReduce-based Apriori algorithm shows a performance improvement of about 20%, and this high performance is maintained even when dealing with a million-record dataset. In addition, this paper implements the ICAMA algorithm on the Spark framework, which fills the gap that Spark's scalable machine learning library (MLlib) [12] provides no algorithm for association rule mining.

Introduction of Spark framework

Spark is a parallel computing framework originally developed by the Berkeley AMP laboratory, based on in-memory cluster computing. The framework has the advantage of in-memory computing based on the resilient distributed dataset (RDD), so it is faster than the Hadoop MapReduce computing framework. In-memory computing is the key to the high efficiency of the Spark framework: useful data are loaded from the database into the memory of the computing nodes while Spark is working, and the RDD is the implementation of this in-memory computing.

The persist operation and fault tolerance are two significant characteristics of RDDs [13]. The effect of the persist operation is to cache an RDD in the memory of the computing nodes, so persistence provides much more efficient processing when data are reused. Unlike Hadoop, Spark builds a DAG by recording the history of operations on each RDD to achieve fault tolerance instead of data backup; when data are corrupted or lost, Spark recovers the correct RDD from the original RDD by tracing the DAG.

Spark improves performance by up to 100 times compared with the traditional Hadoop MapReduce method. The architecture of Spark, shown in Figure 1, can be divided into four modules: Spark SQL (for structured data processing), MLlib (for machine learning), GraphX (for graph computation) and Spark Streaming (for real-time processing). Meanwhile, Spark is highly efficient because it is able to store the intermediate results of iterations in memory rather than on hard disk. The modules of Spark are described in the following [14].
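A minimal PySpark sketch of the persistence and lineage behavior just described (the paper's own implementation is in Scala; the HDFS path here is a hypothetical placeholder):

```python
from pyspark import SparkContext

sc = SparkContext(appName="rdd-persist-sketch")

# Load transactions and cache the parsed RDD so that the repeated passes of an
# iterative algorithm re-read it from executor memory instead of disk.
transactions = sc.textFile("hdfs:///pesticide/transactions.txt")  # hypothetical path
items = transactions.flatMap(lambda line: line.split(",")).persist()

# Each action below reuses the cached RDD.  Lineage (the DAG) is retained, so
# a lost partition is recomputed from its parent RDD rather than restored
# from a replica.
print(items.count())
print(items.distinct().count())
```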
Algorithmic details of Apriori

Spark needs to traverse the data set only twice to implement the matrix-based Apriori algorithm. Combined with this architecture, Spark improves the efficiency of association rule mining by using global and local support-based pruning. The transaction data sets and frequent itemsets are stored in the Hadoop-based HDFS file system. To save memory space and reduce the number of traversals, the matrix stores Boolean values, with each row representing a transaction and each column a distinct item. The support counts of itemsets can be obtained by performing "and" operations between the corresponding matrix columns [15].

Apriori is an important algorithm for association rule mining. It can be divided into two steps: the first step is to find all the frequent itemsets, and the second step is to generate association rules based on the frequent itemsets. While the number of frequent itemsets is greater than 0, a list of candidate itemsets consisting of k items is generated, the frequent itemsets among them are kept, and a list of candidate itemsets consisting of k+1 items is then generated [16].

Implementation of distributed Apriori based on Spark

This paper implements a distributed Apriori algorithm in the Scala programming language, mainly combining the Spark framework with RDD operators. The implementation of the algorithm is divided into the following two parts.

The first part is to generate the frequent itemsets L_1, as shown in Figure 2, including: 1) use flatMap to distribute the transaction set T across the parallel computing system in the form of RDD<String, Number> pairs; 2) accumulate the count of each item with reduceByKey; 3) use filter to remove the items whose support is below the threshold.

The second part is to get L_{K+1} from L_K, including: 1) self-join L_K to produce C_{K+1}; 2) traverse the database and count and filter C_{K+1} by the method of the first part.

The idea of improving the algorithm

In the first stage of YAFIM, the classic Spark-based Apriori implementation, the data set to be processed is read directly from HDFS [17] and stored in RDD form in the memory of each cluster node, and each map task reads in and processes several rows. Each item contained in these rows is emitted with a value of 1, and the reducer sums and filters the values of each item to obtain the items whose number of occurrences is not less than the minimum support. Since the data set is read directly from HDFS, its organization in memory remains the same: each row represents a transaction T, consisting of the TID [18] and all the items contained in the transaction T. Therefore, to calculate the number of occurrences of an itemset, the data set must be traversed as a whole to count the results. This process is repeated in every iteration, which is a considerable time consumption.

To solve the above problems, ICAMA adopts a data-structure conversion while reading the data sets from HDFS [19]. Each map task reads in and processes several rows, but the processing differs from YAFIM [20]: for every item contained in a transaction, a key-value pair of the item and the corresponding transaction number is emitted, and the reducer merges the transaction numbers of each item. The transaction numbers are then counted and filtered to obtain a converted data set F containing only the frequent 1-itemsets. This is the structure conversion process of the data set [21], sketched below.
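A minimal PySpark sketch of this structure conversion, written in Python rather than the paper's Scala (the operator names are identical); the input path, the 'TID,item1,item2,...' line layout, and the support threshold are assumptions for illustration:

```python
from pyspark import SparkContext

sc = SparkContext(appName="icama-stage1-sketch")
MIN_SUPPORT = 2                                   # hypothetical support threshold

def item_tid_pairs(line):
    """Split a 'TID,item1,item2,...' line into (item, TID) pairs."""
    tid, *items = line.split(",")
    return [(item, tid) for item in items]

raw = sc.textFile("hdfs:///pesticide/transactions.txt")   # hypothetical path
# Converted data set F: each frequent item mapped to the set of transactions
# that contain it (the 'vertical' layout), kept in memory for later stages.
F1 = (raw.flatMap(item_tid_pairs)
         .groupByKey()
         .mapValues(set)
         .filter(lambda kv: len(kv[1]) >= MIN_SUPPORT)
         .persist())
print(F1.take(3))
```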
An entry a_xy in F represents whether item I_x is included in TID_y: if it is included, a_xy is 1, otherwise it is 0. Now, to calculate the number of occurrences of a k-itemset in the entire data set, one only needs to look up the values corresponding to its k items in the data set F; the result of the AND operation is the set of all transaction numbers containing the k-itemset, from which the occurrence count is obtained.

The ICAMA algorithm uses the MapReduce idea to improve the two stages of the Apriori algorithm. (1) ICAMA proposes a suitable data structure that simplifies counting the occurrences of an itemset from traversing the entire data set to intersecting the bit sets of the corresponding items, and it discards the candidate-set generation scan to further improve the efficiency of the algorithm [22]. (2) ICAMA joins the frequent (k−1)-itemsets stored in F_{k−1} directly, in the same way as YAFIM [23]. If two itemsets are joinable, a bitwise AND is performed on their corresponding bit sets; after this operation, the transaction numbers of all transactions containing the joined k-itemset are recorded in the resulting bit set. Next, it is determined whether the number of transaction numbers in the bit set is not less than the minimum support: if so, the joined itemset is a frequent k-itemset, and the result is stored as an (itemset, bit set) key-value pair in F_k.

The first stage starts by converting each row of the dataset into multiple (item, TID) key-value pairs with flatMap(); then reduceByKey() concatenates the TIDs belonging to the same key, filter() removes the items whose number of transaction numbers is less than the minimum support value [24], and map() converts the TID string of each item into a bit set, so the frequent 1-itemsets are stored in the form of (item, bit set) key-value pairs.

The second stage obtains the frequent k-itemsets through an iterative process starting from the frequent (k−1)-itemsets. First, the candidate k-itemsets are obtained by self-joining and pruning the frequent (k−1)-itemsets. To make searching the candidate sets faster, YAFIM stores the candidate k-itemsets in a hash tree. Then the map tasks start: each map task processes several transactions, obtains all k-item combinations in each transaction, and searches the hash tree to determine whether each combination is a candidate set. If it is a k-candidate set, it is emitted as a (key, 1) key-value pair, and the reducer counts and filters out the frequent k-itemsets. This is the implementation of the most classic Apriori algorithm, but the generation of candidate sets is the most time-consuming part of this process and constrains the efficiency of the algorithm.
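A small plain-Python sketch of the bit-set support counting used in the second stage: each frequent item carries the set of transactions containing it, encoded in the bits of an integer, and the support of a joined itemset is the popcount of the bitwise AND. The tiny data set is invented for illustration:

```python
# Bit y set in the int  <=>  the item occurs in transaction TID y (TIDs 0..4).
F1 = {
    "A": 0b10111,   # A occurs in transactions 0, 1, 2, 4
    "B": 0b01111,   # B occurs in transactions 0, 1, 2, 3
    "C": 0b11010,   # C occurs in transactions 1, 3, 4
}
MIN_SUPPORT = 2

items = sorted(F1)
F2 = {}
for i, x in enumerate(items):
    for y in items[i + 1:]:                  # self-join of frequent 1-itemsets
        bits = F1[x] & F1[y]                 # transactions containing both
        support = bin(bits).count("1")       # popcount = support count
        if support >= MIN_SUPPORT:           # prune below minimum support
            F2[(x, y)] = bits

for itemset, bits in F2.items():
    print(itemset, bin(bits), bin(bits).count("1"))
# ('A','B') 0b111 3 / ('A','C') 0b10010 2 / ('B','C') 0b1010 2
```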
Time complexity

To analyze the complexity, we make assumptions about some quantities and express the analysis results as mathematical expressions in terms of them. Suppose T represents the number of transactions in the data set to be processed, M the number of map tasks in a job, and f the number of frequent 1-itemsets [25]; the asymptotic time complexity is then expressed in terms of these quantities.

Experimental data

The experimental data are the transaction records of agricultural input products collected by the Institute for the Control of Agrochemicals in the China Pesticide Digital Supervision & Management Platform [26]. Every day, more than 100,000 pesticide operators across the country upload their business information to this platform [27], including price, trading location, the varieties of agricultural input products and the scale of transactions. The supervision platform generates more than 10 million data records per day, so we took one day's data generated by this platform for analyzing and testing the performance of the Spark-based Apriori algorithm [28].

Holistic description of the experiment

1) Experimental environment. The computer cluster of the experimental platform consists of eight servers. Each server runs the same Linux system (Ubuntu 12.04); the servers differ only in the computing framework installed, namely Spark + YARN, Spark + Mesos and Hadoop [29].

2) Experimental steps. The Spark-based parallel computing is implemented on Mesos, so the host and port of Spark-Mesos must be set before the experiment. Deploying Spark on YARN requires first installing Maven 3.0.4 [30] and configuring its environment variables. Subsequently, Maven is used to compile and package the Spark kernel into an independent jar package, which is copied to the other machines in the cluster to complete the configuration [31,32].

3) Results. The experiment compares the performance of the single-machine Apriori algorithm, the Hadoop-based parallel Apriori algorithm and the Spark-based parallel Apriori algorithm on data sets of different sizes. The results are shown in Figures 3 and 4.

Analysis

The experiments show that the computation grows with the scale of the dataset to be processed, so a single machine, limited by its computing resources, cannot complete association rule mining on large amounts of data [33]. Although the ICAMA algorithm described in this paper consumes additional running time for process communication and data transmission, this overhead does not grow greatly as the dataset expands: the larger the amount of data, the smaller the overhead ratio. Moreover, the algorithm has the advantages of parallel computing, such as making full use of the computing resources of different machines [34] and reducing the performance demands on a single machine [35].
Conclusions

This paper briefly summarized the performance bottlenecks of the classic Apriori algorithm and improved on them, especially the candidate-set generation process, obtaining a more optimized algorithm. Then, building on the Spark platform's efficient support for iterative algorithms, the improved Apriori algorithm was parallelized and implemented on Spark. A detailed analysis and comparison of the existing classic Spark implementation of Apriori, YAFIM, and the improved Spark implementation, ICAMA, was given, describing how the algorithm was improved. Finally, the efficiency of ICAMA was demonstrated both theoretically and experimentally; in particular, as the amount of data continues to increase, the performance improvement of ICAMA becomes more obvious. The algorithm described in this article therefore has better computational performance on large-scale data. In summary, this algorithm can effectively mine agricultural input information, provide a basis for the regulation of agricultural input product markets, and realize the precise investment of agricultural inputs, thereby providing an algorithmic basis for the supervision and traceability management of the agricultural input market.

Figure 1: Architecture of Spark. Figure 2: Flowchart of distributed Apriori.
Figure 3: Running time for different data block sizes.
4,835.6
2019-10-14T00:00:00.000
[ "Computer Science" ]
Calculation of Stopping Power and Range of Nitrogen Ions in Skin Tissue at Energies of (1–1000) MeV

The use of heavy ions in the treatment of cancer tumors allows accurate irradiation of the tumor with minimal collateral damage to the healthy tissue surrounding the infected tissue. For this purpose, the stopping power of skin tissue for nitrogen (N) ions, and the range these particles reach, were calculated with the programs SRIM (The Stopping and Range of Ions in Matter) and SRIM Dictionary [1] and CaSP (Convolution approximation for Swift Particles) [2], which are well-known programs for calculating the stopping power of materials, as well as with the Bethe formula, in the energy range (1–1000) MeV. Semi-empirical formulas for the stopping power and range of nitrogen ions in skin tissue were then derived by fitting the average of the values calculated by these programs, using MATLAB 2016. The maximum energy-loss rate of nitrogen ions along their path in skin tissue was found, together with the range corresponding to this value, and the maximum range nitrogen ions can reach in skin tissue was also found. The importance of this work is to provide the data on these particles needed for their use in treatment without causing adverse effects on the patient.

Introduction

Among the important quantities that must be known when studying particle therapy, or the interaction of heavy particles with matter generally, are the stopping power and the range. The stopping power is the ability of the medium to stop the particles penetrating it; the mass stopping power of a material is obtained by dividing the stopping power by the density ρ, and its unit is MeV·cm²/g [3,4]. The range of a particle is the average distance travelled before the particle loses all of its original kinetic energy [5], and it is computed by numerical integration of the reciprocal of the stopping power. These quantities are studied to determine the energy needed for a particle to penetrate to a specific depth in tissue. The energy range (1–1000) MeV was chosen because at low energies heavy ions cannot penetrate materials as far as light particles do.

The interaction of charged particles with materials enters a number of medical fields and nuclear interactions. Generally, the interaction of heavy charged particles such as nitrogen differs from that of lighter particles such as protons and alpha particles, which makes them more favorable in the treatment of cancer: because of the high ionization density along the ion track, the damage to the DNA molecule of a single cell is more effective, and this increases the relative biological effectiveness of the dose by a factor of (1.5–3) compared with the use of protons [6].
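As a minimal numerical sketch of the two quantities just defined (not the SRIM/CaSP machinery), the following uses the standard non-relativistic Bethe expression, given in the Theory section below, and numerical integration for the range. The tissue parameters (electron density ≈ 3.3 × 10²⁹ m⁻³ and I ≈ 75 eV, water-like) are illustrative assumptions, not the paper's fitted values:

```python
import math
from scipy.integrate import quad

# Physical constants (SI) and assumed, water-like tissue parameters.
e, m0, k0 = 1.602e-19, 9.109e-31, 8.988e9     # C, kg, 1/(4*pi*eps0) [N m^2/C^2]
u, MeV = 1.661e-27, 1.602e-13                 # kg, J
Z, M = 7, 14 * u                              # nitrogen charge number and mass
n = 3.3e29                                    # electron density [m^-3] (assumed)
I = 75 * 1.602e-19                            # mean excitation energy ~75 eV (assumed)

def stopping_power(E_mev):
    """-dE/dx in J/m from the non-relativistic Bethe formula."""
    v2 = 2 * E_mev * MeV / M                  # v^2 from E = M v^2 / 2
    return (4 * math.pi * k0**2 * Z**2 * e**4 * n / (m0 * v2)) \
        * math.log(2 * m0 * v2 / I)

def csda_range_cm(E0_mev, E1_mev=1.0):
    """Range as the integral of (dE/dx)^-1 from E1 (validity cutoff) to E0."""
    r_m, _ = quad(lambda E: MeV / stopping_power(E), E1_mev, E0_mev)
    return 100 * r_m

print(f"R(6.5 MeV)  ~ {csda_range_cm(6.5):.4f} cm")   # same order as the 0.0025 cm below
print(f"R(1000 MeV) ~ {csda_range_cm(1000):.2f} cm")  # same order as the 1.1 cm below
```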
The aim of this work is to provide an appropriate database from which the appropriate energy of nitrogen ions can be chosen to irradiate infected skin tissue while preventing the passage of these particles into adjacent non-infected tissues, which could cause adverse medical effects on the patient. Tables (1) and (2) show some properties of the nitrogen ion and of skin tissue, respectively [18,19]. Thus, the range and stopping power of skin tissue for nitrogen ions are studied using several programs, namely CaSP, SRIM and SRIM Dictionary, together with the Bethe formula, and the range of this particle is also calculated. Previous research in this area has been carried out on this ion and on carbon, as shown in references [7,8]. In the present work we compared the well-known, available theoretical program CaSP 5.2 [13] and the semi-empirical program SRIM 2008, a software package concerning the stopping and range of ions in matter; since its introduction in 1985, major upgrades have been made about every six years, and currently more than 700 scientific citations are made to SRIM every year [14,15]. The stopping power calculation procedures were checked using statistical analysis of the differences between computed and experimental data for nitrogen ions in skin tissue in the energy range (1–1000) MeV. The range formulas were obtained by directly integrating the reciprocal of the stopping power for nitrogen ions, and the range values for skin tissue were calculated and compared with SRIM.

Theory

Stopping power

The Bethe-Bloch formula for the stopping power of heavy charged particles, derived using relativistic quantum mechanics, is given by the relationship [9,10]

−dE/dx = (4π k₀² Z² e⁴ n)/(m₀v²) · ln(2m₀v²/I),

where (dE/dx) is the stopping power, Z the charge number of the incident particle, n the number of electrons per unit volume of the medium, m₀ the rest mass of the electron, v the velocity of the incident particle, e the electron charge, k₀ = 1/(4πε₀), and I the mean excitation (ionization) energy of the medium. This equation shows the dependence of (dE/dx) on the velocity of the interacting particle, while the logarithmic term changes only slowly with v. Many researchers have reported different tables and relationships for the stopping power. For a mixture or compound, the stopping power can be calculated from the following relation [11]:

(dE/dx) = Σᵢ wᵢ (dE/dx)ᵢ,

where wᵢ is the weight fraction of the i-th element and (dE/dx)ᵢ its stopping power.

Range of nitrogen ions

Because the stopping power depends on the energy of the particle, it is possible to calculate the path length of the particle as its energy falls from the initial value to some smaller value. Based on the definition of the stopping power, that path is equal to the integral [12]

R = ∫ from T₁ to T₀ of (dE/dx)⁻¹ dE,

where T₀ is the initial kinetic energy of the charged particle and T₁ is the limiting energy below which the calculation cannot be performed.

Results and discussion

In the present work, the mass stopping power and range of nitrogen ions were calculated in the elements of skin tissue (C = 20.4%, Cl = 0.3%, H = 10%, K = 0.1%, N = 4.2%, Na = 0.2%, O = 64.5%, P = 0.1% and S = 0.2%) [14].

1- Using MATLAB 2016, a semi-empirical formula for the mass stopping power of nitrogen ions in skin tissue was obtained by fitting the weighted average of the values calculated by the programs above.

3- We note that the maximum value of the mass stopping power is found for the hydrogen element, because hydrogen has the largest number of electrons per unit mass, so more interactions occur and more energy is lost per unit mass along the path of the heavy ions; this conclusion is identical to what is stated in reference [15].
4- From Table (3) we find that the maximum mass stopping power of skin tissue for nitrogen ions is 9989.691218 MeV·cm²/g, which occurs at the energy 6.5 MeV; Figure (4) illustrates this.

5- The range corresponding to this peak energy loss of nitrogen ions in skin tissue is 0.002518656 cm, at the energy 6.5 MeV; Figure (5) illustrates this.

2- The following semi-empirical equation was obtained from the same program for the range of nitrogen ions in skin tissue, with equation constants as follows: a = 4.617e-06, b = 1.794, c = 0.002386 (R-square: 1); the functional form consistent with these constants is sketched at the end of this section.

6- From Figure (4) we note that the theoretical values are consistent with the present mass stopping power values for nitrogen ions in skin tissue, indicating the validity of the present results.

7- From Figure (5) we note that the theoretical values correspond to the present range values for nitrogen ions in skin tissue, which also indicates the validity of the present results.

8- Figure (6) shows the Bragg peak, i.e., the range at which the nitrogen ions lose most of their energy in skin tissue.

Conclusions

Practitioners can be provided with a database of nitrogen-ion data enabling them to use these ions to treat tumors in skin tissue. We conclude that the atoms most responsible for energy loss in skin tissue are hydrogen atoms; that the maximum mass stopping power of skin tissue for nitrogen ions is 9989.691218 MeV·cm²/g, occurring at the energy 6.5 MeV; and that the range corresponding to this peak energy loss of nitrogen ions in skin tissue is 0.002518656 cm at 6.5 MeV. Nitrogen ions can penetrate skin tissue to a distance of 1.115039336 cm at an energy of 1000 MeV, which the practitioner should take into account, together with the corresponding reach of this particle, when it is used in treatment.

Figure (1): Mass stopping power for nitrogen in the elements present in skin, using the SRIM program. Figure (2): Mass stopping power for nitrogen in the elements present in skin, using the CaSP program. Figure (3): Mass stopping power for N in skin tissue using the SRIM, SRIM Dictionary and CaSP programs, the Bethe formula, and the average of these values.
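The fitted constants quoted in item 2 are consistent with a power-law range-energy form R(E) = a·E^b + c. That functional form is an inference (the fitted expression itself is not reproduced in the text), but as the sketch below shows, it reproduces the paper's quoted ranges at 6.5 MeV and 1000 MeV almost exactly:

```python
# Assumed fit form R(E) = a*E**b + c, with the constants quoted in item 2.
def range_cm(E_mev, a=4.617e-6, b=1.794, c=0.002386):
    return a * E_mev**b + c

print(f"R(6.5 MeV)  = {range_cm(6.5):.9f} cm")    # ~0.002518..., cf. 0.002518656
print(f"R(1000 MeV) = {range_cm(1000):.9f} cm")   # ~1.115039..., cf. 1.115039336
```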
1,927.8
2018-09-12T00:00:00.000
[ "Physics", "Medicine" ]